
Texas College of Management and IT

Advanced Computer Networking


Explore, Learn and Excel
Mid-Term Project

Submitted By: Dipesh Thakur


Submitted To: IT Department
LCID: LC00017002650
Section: E
Program: BIT
Table of Contents: Questions 1–17
1. Explain how the file allocation table (FAT) manages files. Mention the merits and demerits of the FAT system. A 200 GB disk has 1 KB block sizes. Calculate the size of the file allocation table if each entry of the table is 3 bytes.

Answer: The File Allocation Table (FAT) is a file system used by an operating system (OS) to manage and organize files on a storage device such as a hard drive. It keeps track of the location of each file on the device by using a table that maps files to their physical locations on the disk. The OS consults the FAT to find the physical location of a file on the disk and to determine where to store new files.

The File Allocation Table (FAT) manages files as follows:

a. When a new file is created, the OS allocates one or more clusters to the file and updates the corresponding entries in the FAT to indicate that these clusters are now in use. The first entry in the FAT is reserved for the root directory of the disk.
b. To access a file, the OS uses the FAT to find the first cluster of the file and then follows the chain of clusters that make up the file, using the pointers in the FAT entries to locate each subsequent cluster (a sketch follows this list).
c. If a file is deleted, the FAT simply marks its clusters as available without erasing the data.
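A minimal sketch in C of following a FAT cluster chain, assuming the table is held in memory as an array where fat[i] gives the cluster that follows cluster i, with a hypothetical EOC marker for end-of-chain:

#include <stdio.h>

#define EOC -1                           /* hypothetical end-of-chain marker */

/* Walk the chain from the file's first cluster, as recorded in its
   directory entry, until the end-of-chain marker is reached. */
void read_file(const int fat[], int first_cluster) {
    for (int c = first_cluster; c != EOC; c = fat[c])
        printf("read cluster %d\n", c);  /* fetch this cluster's data */
}

int main(void) {
    /* example chain: cluster 2 -> 5 -> 7, then end of chain */
    int fat[8] = {0};
    fat[2] = 5; fat[5] = 7; fat[7] = EOC;
    read_file(fat, 2);
    return 0;
}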

The merits of File Allocation Table (FAT) are:

a. FAT is simple to implement and use, making it well suited to smaller storage devices.
b. It can support partition sizes of up to 8 TB (FAT32 with 32 KB clusters).
c. Because the chain pointers are kept in the table rather than in the blocks themselves, the entire disk block is available for file data.
d. It is compatible with most operating systems, including Windows, macOS, and Linux.
e. A FAT32 volume can easily be converted to other file systems such as NTFS.
f. FAT also provides random access: a file's cluster chain can be followed in the (typically cached) table without reading intermediate data blocks.

Demerits of File Allocation Table (FAT) are:

a. Data security is minimal.


b. FAT does not support folder encryption or fault tolerance.
c. There is no support for data recovery in case of loss.
d. FAT struggles with efficient storage management on large disks due to its large cluster size.

To calculate the size of File Allocation Table (FAT), we need to determine the number of entries
required for the table and then calculate the total size based on the size of each entry.

The total disk size is 200 GB and the block size is 1 KB. Using decimal units consistently (1 GB = 10^9 bytes, 1 KB = 10^3 bytes):

First, calculate the number of blocks on the disk:

Number of blocks = Disk size / Block size = (200 * 10^9 bytes) / (10^3 bytes per block) = 200 * 10^6 blocks

Each entry in the FAT is 3 bytes, and the table needs one entry per block. The total size of the FAT is therefore the number of blocks multiplied by the size of each entry:

FAT size = Number of blocks * Entry size = 200 * 10^6 * 3 bytes = 600 * 10^6 bytes

The size of the file allocation table is 600 MB.


2. Suppose that a disk has 100 cylinders, numbered 0 to 99. The drive is currently serving a request at cylinder 43, and the previous request was at cylinder 25. The queue of pending requests in FIFO order is: 86, 70, 13, 74, 48, 9, 22, 50, 30. Starting from the current head position, what is the total distance (in cylinders) that the disk arm moves to satisfy all pending requests for each of the following disk scheduling algorithms?

Answer:
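As an illustration for one policy, a minimal C sketch computing the total head movement under FCFS (first-come, first-served): it sums the absolute seek distances through the queue in the given order, which yields 303 cylinders here; the other algorithms would reorder the queue before summing.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int queue[] = {86, 70, 13, 74, 48, 9, 22, 50, 30};
    int n = sizeof queue / sizeof queue[0];
    int head = 43, total = 0;            /* current head position */
    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);   /* seek distance to the next request */
        head = queue[i];
    }
    printf("FCFS total head movement: %d cylinders\n", total);  /* prints 303 */
    return 0;
}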
3. A system has two process and 3 resources. Each process needs a maximum of two resources. Is
deadlock possible? Explain with answer.

Answer: To determine whether deadlock is possible, all four of the following conditions must hold simultaneously:

1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait

If process P1 acquires 2 resources and process P2 then tries to acquire 2 resources, there would not be enough resources for P2 to acquire both. However, as there are 3 resources in total, deadlock cannot actually arise. For example, one process can acquire 2 resources, finish its task, and release them, after which the second process can acquire the resources it needs.

With 2 processes and 3 resources, resources can always be allocated in a way that prevents circular wait. In the worst case, each process holds one resource and requests a second; since a third resource remains, it can be granted to one of the processes, which can then finish and release everything it holds. The system therefore has enough resources (3 resources for 2 processes, each needing at most 2) to prevent all four deadlock conditions from holding at the same time: one process will always be able to finish and release its resources, so deadlock is not possible.

4. Discuss single-level and two-level directory systems. Consider the following processes and answer the following questions:
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If P1 requests (0,4,2,0), can the request be granted immediately?

Answer:
5. When does a race condition occur in inter-process communication? What does busy waiting mean, and how can it be handled using the sleep and wakeup strategy?

Answer: A race condition occurs when multiple processes or threads access shared data or resources concurrently and the final outcome depends on the timing or order of their execution. If one process modifies the shared data while another process is reading or writing it at the same time, inconsistent or erroneous results can occur. Race conditions typically arise when: two or more processes or threads access shared variables or resources; access to the shared resources is not properly synchronized, because no mechanisms are in place to control access; and the processes or threads run concurrently (in parallel or in an interleaved manner), so the outcome depends on the order or timing of their execution. A concrete example is sketched below.
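A minimal sketch in C (POSIX threads) of a race condition: two threads increment a shared counter concurrently without synchronization; because counter++ is a separate read, add, and write, the updates interleave and the final value is usually less than the expected 2000000.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                     /* shared data, unprotected */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                    /* non-atomic read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}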

Busy waiting occurs when a process continuously checks for a condition (e.g., lock availability)
without releasing the CPU, wasting CPU cycles and delaying other processes.

Problems

- CPU wastage: Process consumes CPU without making progress.


- Inefficiency: Other processes are delayed
- Performance Issues: In multi-threaded systems, busy waiting can degrade performance

Instead of busy waiting, a process sleeps when it cannot proceed, releasing the CPU. It is woken up when the resource becomes available, preventing CPU wastage and improving synchronization. The lost wakeup problem occurs if a wakeup signal is sent before the process actually goes to sleep; solutions such as semaphores or condition variables ensure proper synchronization.

In short, busy waiting wastes CPU time, while the sleep and wakeup strategy synchronizes processes without wasting the CPU.

6. Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2. Find the number of page faults using Optimal (OPT), LRU, Second Chance, and FIFO page replacement with 4 page frames.

Answer:
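As an illustration for one policy, a minimal C sketch counting page faults for FIFO with 4 frames; the other policies differ only in how the victim frame is chosen. For this reference string it reports 7 faults.

#include <stdio.h>
#include <string.h>

#define FRAMES 4

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof refs / sizeof refs[0];
    int frames[FRAMES];
    memset(frames, -1, sizeof frames);   /* -1 marks an empty frame */
    int next = 0, faults = 0;            /* next = index of oldest frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];      /* evict the oldest (round robin) */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);  /* prints 7 */
    return 0;
}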
7. What is meant by file attributes? Discuss any one technique of implementing directories in
detail.

Answer: File attributes are the metadata associated with a file that provide important information about it, such as its properties and how the operating system should manage and handle it. These attributes help the OS and applications perform operations such as reading, writing, executing, and managing access control. Typical file attributes include:

a. Name: The human readable name of the file.


b. Type: The type of file (e.g. text file, binary file, image).
c. Location: The physical location of the file on storage (such as disk address).
d. Size: The size of the file in bytes.
e. Protection: Access control information specifying who can read, write or execute the file.

Directory implementation techniques:

A directory is a structure used to store and organize files. The directory system maintains a list of
files and provides access to them using their names. There are several methods to implement
directories. One common technique is the two-level directory structure.

In the two-level directory structure, each user has their own separate directory. This creates two levels: the root directory and the user-level directories. It is a simple extension of the single-level directory, in which each user is allocated their own space to store files, thus eliminating name conflicts between users. A sketch of the underlying structures follows.
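A minimal sketch in C of the data structures behind such a two-level scheme; all field names and sizes here are hypothetical:

/* The master file directory (MFD) maps user names to user file
   directories (UFDs); each UFD maps file names to file metadata. */
struct file_entry {
    char name[32];                 /* file name, unique within one UFD */
    int  start_block;              /* where the file's data begins */
    int  size;                     /* file size in bytes */
};

struct user_directory {            /* one UFD per user */
    char user[32];
    struct file_entry files[128];
    int  file_count;
};

struct master_directory {          /* the root level (MFD) */
    struct user_directory users[64];
    int  user_count;
};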

8. When does page fault occur? Give a structure of page table. Why do we need virtual memory?

Answer: A page fault occurs when a process tries to access a page (a fixed-length block of memory)
that is not currently in the main memory (RAM). The OS must intervene to retrieve the missing page
from secondary storage (usually the hard disk or SSD) and load it into memory.

A page fault happens in the following scenario:

- The page is not yet loaded into memory.


- The page was swapped out to the disk due to memory pressure and needs to be brought back
into RAM.
- The page does not exist, and a new page needs to be allocated (e.g. for heap or stack growth).

A page table is a data structure used by the operating system to map virtual addresses to physical addresses. Each process has its own page table, which stores the mappings for that process's virtual address space.

Virtual Page Number | Frame Number | Valid Bit | Dirty Bit | Access Control | Usage/Other Info
0 | 12 | 1 | 0 | Read/write | Used
1 | 19 | 1 | 1 | Read-only | Modified
2 | 25 | 0 | 0 | No access | Not used
3 | 31 | 1 | 0 | Read/write | Used
... | ... | ... | ... | ... | ...
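A minimal sketch in C of how such a table is used to translate a virtual address, with hypothetical page and table sizes; a cleared valid bit is exactly the page-fault case described above.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096                /* assumed: 4 KB pages */
#define NUM_PAGES 1024                /* assumed: virtual pages per process */

struct pte {                          /* one page table entry */
    unsigned frame;                   /* physical frame number */
    unsigned valid : 1;               /* is the page in RAM? */
    unsigned dirty : 1;               /* modified since it was loaded? */
};

struct pte page_table[NUM_PAGES];

/* Translate a virtual address to a physical address. */
unsigned long translate(unsigned long vaddr) {
    unsigned long page   = vaddr / PAGE_SIZE;
    unsigned long offset = vaddr % PAGE_SIZE;
    if (!page_table[page].valid) {    /* page fault: page not in memory */
        fprintf(stderr, "page fault on page %lu\n", page);
        exit(1);                      /* a real OS would load it and retry */
    }
    return (unsigned long)page_table[page].frame * PAGE_SIZE + offset;
}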

Virtual Memory is an abstraction layer between the physical memory (RAM) and processes. It
provides several advantages, which are critical for modern operating system and application
performance. Virtual memory allows processes to use more memory than physically available on the
system. It transparently manages the swapping of pages between RAM and disk, providing an
illusion of a large continuous memory space. It provides protection between processes. Each process
has its own virtual address space, ensuring that one process can not interfere with another by
directly accessing its memory. With virtual memory, only the pages required by a program at any
given time are loaded into memory.

9. Can deadlock occur in the case of preemptible resources? List the conditions for deadlock. Define a resource allocation graph with an example.

Answer: No. Deadlock cannot occur when resources are preemptible, because one of the four necessary conditions for deadlock (no preemption) fails to hold: the system can forcibly take a preemptible resource away from a process and give it to another, breaking any potential wait cycle. Deadlock is a situation in which a set of processes is blocked because each process is holding a resource and waiting for another resource that is being held by some other process in the set.

Conditions for deadlock


- Mutual Exclusion: At least one resource must be held in a non-shareable mode. Only one
process can use the resource at any given time.
- Hold and Wait: A process holding at least one resource is waiting to acquire additional resources that are currently being held by other processes.
- No preemption: Resources cannot be forcibly removed from a process holding them. They must
be released voluntarily by the process after it completes its task.
- Circular wait: A set of processes must exist such that each process is waiting for a resource that is held by the next process in the set, forming a circular chain.

A resource allocation graph (RAG) is a graphical representation used to determine the allocation of
resources to processes and show possible deadlocks. It consists of:

- Processes represented as circles or nodes.


- Resources represented as squares.
- Edges

A directed edge from a process to a resource (e.g. P1 → R1) signifies that the process is requesting the resource.

A directed edge from a resource to a process (e.g. R1 → P1) signifies that the resource has been allocated to the process.

For example, suppose R1 is allocated to P1 (R1 → P1), P1 requests R2 (P1 → R2), R2 is allocated to P2 (R2 → P2), and P2 requests R1 (P2 → R1). The graph then contains a cycle, and since each resource has a single instance, the cycle indicates a deadlock.

10. Discuss the contiguous and linked list file allocation techniques.

Answer:

Contiguous Allocation

In contiguous allocation, each file is stored in a single, contiguous block of disk space. The file
occupies a set of contiguous blocks or sectors on the disk.

When a file is created, the operating system allocates a contiguous set of blocks to the file. The starting block and the number of blocks allocated are stored in the file's metadata (e.g., file control block).
Advantages:

- Efficient Read/Write: Since the file is stored in contiguous blocks, reading and writing are fast as
the disk head doesn't need to move much.

- Simple Implementation: Allocation and deallocation of files are straightforward.

Disadvantages:

- Disk Fragmentation: Over time, as files are created and deleted, free space on the disk can become
fragmented. This fragmentation can lead to situations where there's not enough contiguous space
to allocate a new file, even though there may be sufficient total free space.

- Wasted Space: If the allocated space exceeds the file's size, the extra space may be wasted.

Use Cases:

- Contiguous allocation is often used in systems where the file size is known and doesn't change
frequently, such as in certain embedded systems or early file systems.

Linked List Allocation

In linked list allocation, each file is stored as a linked list of disk blocks. Each block contains a pointer
to the next block in the file.

When a file is created, the operating system allocates blocks to the file as needed and maintains a
list of these blocks. Each block contains a pointer to the next block in the sequence, which allows the
file to be stored non-contiguously.
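A minimal sketch in C of this scheme, with the disk modeled as an in-memory array of blocks; the field names, block size, and end-of-file marker are hypothetical:

#include <stdio.h>

#define BLOCK_SIZE 512
#define END_OF_FILE -1                    /* hypothetical end-of-chain marker */

struct disk_block {
    int next;                             /* index of the next block in the file */
    char data[BLOCK_SIZE - sizeof(int)];  /* remaining space holds file data */
};

/* Read a file by chasing the per-block pointers from its first block. */
void read_file(struct disk_block disk[], int first_block) {
    for (int b = first_block; b != END_OF_FILE; b = disk[b].next)
        printf("reading block %d\n", b);  /* consume disk[b].data here */
}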

Advantages:

- No External Fragmentation: Because the file blocks can be scattered around the disk, there’s no
problem with finding a contiguous block of free space.

- Efficient Space Utilization: Space is used more efficiently, especially for files that grow dynamically.

Disadvantages:

- Performance Overhead: Accessing a file requires following a chain of pointers, which can be slower
compared to contiguous allocation, especially if the file is fragmented across many blocks.

- Pointer Overhead: Each block requires additional space for storing pointers, which reduces the
amount of space available for actual data.

Use Cases:

- Linked list allocation is often used in file systems that support dynamic file growth and are
designed to handle large files, such as modern general-purpose file systems (e.g., ext4, NTFS).
- Contiguous Allocation is efficient for access but suffers from fragmentation and wasted space. It's
simpler but less flexible.

- Linked List Allocation avoids fragmentation and utilizes space more effectively but can incur
performance overhead due to pointer chasing.

Both techniques have their own advantages and are chosen based on the specific requirements of
the file system and the nature of the data being managed.

11. How is a lock variable used to achieve mutual exclusion? Describe.

Answer: Lock variables are fundamental to achieving mutual exclusion in concurrent programming.
Mutual exclusion ensures that only one process or thread can access a critical section of code at a time,
preventing race conditions and ensuring data consistency.

Lock Variables

A lock variable is a simple synchronization primitive used to control access to a critical section. It
typically represents a binary state—locked or unlocked—allowing processes or threads to acquire or
release the lock.

How Lock Variables Achieve Mutual Exclusion

1. Initialization:

- A lock variable is initialized to an unlocked state, which usually is represented by a value such as `0`
or `false`.

2. Locking Mechanism:

- When a process or thread wants to enter a critical section, it attempts to acquire the lock. It does so
by setting the lock variable to a locked state (e.g., `1` or `true`).

- The process or thread checks the lock variable:

- If the lock variable is `unlocked`, the process sets it to `locked` and enters the critical section.

- If the lock variable is `locked`, the process waits (or retries) until it becomes `unlocked`.

3. Critical Section Execution:

- Once a process successfully acquires the lock, it can safely execute the critical section of code,
knowing that no other process or thread can enter the critical section simultaneously.

4. Releasing the Lock:

- After completing the critical section, the process releases the lock by setting the lock variable back to
the `unlocked` state (e.g., `0` or `false`).

- This change allows other waiting processes or threads to acquire the lock and enter the critical
section.
Implementation Example: Simple Lock Variable

// Global lock variable
volatile int lock = 0;  // 0 = unlocked, 1 = locked

void enter_critical_section(void) {
    while (1) {
        // Attempt to acquire the lock
        if (lock == 0) {
            lock = 1;   // set the lock to locked
            break;      // lock acquired; enter the critical section
        }
        // Otherwise the lock is held by another process; try again.
        // Optionally, add a delay to reduce busy-waiting.
    }
    // NOTE: the check (lock == 0) and the assignment (lock = 1) are two
    // separate steps, so two processes can both see the lock as free and
    // both enter. A correct version needs an atomic test-and-set, as
    // sketched under "Spinlocks" below.
}

void exit_critical_section(void) {
    lock = 0;   // release the lock
}

Considerations and Improvements


1. Busy Waiting:

- The above implementation uses busy waiting, where the process repeatedly checks the lock variable.
This can be inefficient and waste CPU resources.

2. Spinlocks:

- A more sophisticated implementation might use spinlocks, which are lock variables acquired with an atomic test-and-set so that processes spin in a loop while waiting for the lock to be released; a sketch follows. Spinlocks are often used in low-level programming and in scenarios where the wait time is expected to be short.
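A minimal sketch of such a spinlock using C11's atomic_flag, whose test-and-set happens as one indivisible step, closing the check-then-set race noted in the simple lock variable above:

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* atomic_flag_test_and_set returns the previous value, so the loop
       spins while another thread still holds the flag. */
    while (atomic_flag_test_and_set(&lock_flag))
        ;                                  /* busy-wait (spin) */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);         /* next test-and-set sees "free" */
}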

3. Mutexes and Semaphores:

- For more complex synchronization needs, higher-level constructs such as mutexes (mutual exclusion
objects) and semaphores can be used. They handle various aspects of synchronization and often provide
additional features like timeouts and priority management.

4. Fairness:

- Simple lock variables do not inherently provide fairness. To ensure that all processes or threads get a
chance to access the critical section, additional mechanisms (such as ticket-based locks or queue-based
locks) can be implemented.

Lock variables provide a fundamental way to achieve mutual exclusion by allowing only one process or
thread to enter a critical section at a time. While simple and effective, they can be enhanced with
additional mechanisms to address efficiency, fairness, and resource management concerns in more
complex systems.

12. Why do we need a hierarchical directory system? Explain the structure of a disk.

Answer: A hierarchical directory system is essential for efficiently organizing and managing files in a file
system. It provides several benefits, such as ease of navigation, organization, and scalability. Here’s a
detailed explanation of why a hierarchical directory system is needed and the structure of a disk in the
context of file systems.

There are many reasons why we need a hierarchical directory system. They are as follows:

1. Organization:

- Logical Structure: A hierarchical directory system organizes files into a tree-like structure with
directories (folders) and subdirectories. This logical organization mirrors real-world file organization,
making it easier to manage and locate files.

- Grouping Related Files: Files can be grouped into directories based on their type, project, or usage,
improving file management and reducing clutter.

2. Scalability:
- Efficient Management: A hierarchical system allows the file system to scale efficiently. With large
numbers of files, a flat directory system would become cumbersome and inefficient. Hierarchical
directories break down the file space into manageable sections.

3. Ease of Navigation:

- Path-Based Access: Users can navigate through directories using paths, such as
`/home/user/documents`. This allows for intuitive access to files and directories without needing to
manage a large list of files directly.

- Relative and Absolute Paths: Hierarchical directories support both relative paths (relative to the
current directory) and absolute paths (starting from the root directory), providing flexibility in file
access.

4. Permissions and Security:

- Granular Permissions: Hierarchical systems allow setting permissions at various levels (directories and
files). For example, access can be controlled at the directory level, impacting all contained files and
subdirectories.

- Inheritance: Permissions set on a parent directory can be inherited by its subdirectories and files,
simplifying security management.

5. File System Maintenance:

- Easier Backup and Recovery: Backups can be performed at the directory level, making it easier to
manage and restore specific sections of the file system.

- Efficient Searches: Hierarchical organization allows for more efficient file searching and indexing
compared to a flat structure.

Structure of a Disk

The structure of a disk is designed to organize and manage data efficiently. It is divided into several key
components:

1. Physical Components:

- Platters: Disks consist of one or more platters coated with a magnetic material. Each platter has two
surfaces that can be used for data storage.

- Read/Write Heads: Each platter surface has a read/write head that moves across the platter to read
from or write data to the disk.

- Spindle: The spindle holds the platters in place and spins them at high speeds.

2. Logical Components:

- Tracks: Each platter surface is divided into concentric circles called tracks, along which data is recorded.
- Sectors: Each track is further divided into small, fixed-size units called sectors. A sector is typically the
smallest unit of data read or written to the disk (e.g., 512 bytes or 4 KB).

- Clusters: A cluster (or allocation unit) is a group of contiguous sectors. File systems often use clusters
to manage disk space more efficiently, reducing overhead and fragmentation.
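Given this geometry, the classical conversion from a (cylinder, head, sector) triple to a linear block address follows directly; a minimal sketch in C, assuming traditional CHS addressing where sectors are numbered from 1:

/* Logical block address = full cylinders, then full tracks within the
   cylinder, then sectors within the track (sectors are 1-based). */
unsigned long chs_to_lba(unsigned long c, unsigned long h, unsigned long s,
                         unsigned long heads_per_cylinder,
                         unsigned long sectors_per_track) {
    return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1);
}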

3. File System Components:

- Boot Sector: The boot sector contains important information for booting the operating system,
including the partition table and file system information.

- File Allocation Table (FAT) or Metadata: In FAT file systems, the FAT itself keeps track of which
clusters are allocated to which files. In other file systems, metadata structures (such as inode tables in
Unix-based systems) perform similar functions.

- Directories: Directories are special files that contain information about the files and subdirectories
they contain. They store file names, metadata (e.g., file size, creation date), and pointers to the file data.

4. Partitioning:

- Partitions: A disk can be divided into multiple partitions, each of which can be formatted with a
different file system. Partitions help organize and manage disk space and can be used for different
operating systems or storage purposes.

Conclusion:

- Hierarchical Directory System: Provides organized, scalable, and efficient management of files and
directories, making navigation, permission management, and maintenance easier.

- Disk Structure: Consists of physical components (platters, read/write heads, spindle) and logical
components (tracks, sectors, clusters) that work together to store and manage data efficiently. The file
system uses these structures to manage file storage, access, and organization.

The combination of a hierarchical directory system and a well-structured disk ensures effective data
management and efficient file system operations.

13. Differentiate between internal and external fragmentation. Suppose that we have a memory of 100 KB with 5 partitions of size 150 KB, 200 KB, 250 KB, 200 KB, and 300 KB. Where will processes A and B of size 175 KB and 125 KB be loaded if we use the Best-Fit and Worst-Fit strategies?

Answer: Following are the differences between internal and external fragmentation:

- Internal fragmentation occurs inside an allocated block: the block given to a process is larger than the process requested, and the leftover space within the block is wasted. It arises with fixed-size allocation units (fixed partitions, paging).
- External fragmentation occurs outside allocated blocks: free memory is split into many small, non-contiguous holes, so a request can fail even though the total free space is sufficient. It arises with variable-size allocation (dynamic partitions, segmentation).
- Internal fragmentation is reduced by using smaller allocation units; external fragmentation is reduced by compaction or by non-contiguous allocation schemes such as paging.

In summary, internal fragmentation results from inefficiencies within allocated blocks, while external fragmentation arises from inefficient organization of free memory. Both types of fragmentation can affect system performance and memory utilization, and different strategies are used to mitigate each type.

For the placement part (see the sketch below): with Best-Fit, process A (175 KB) goes into the smallest partition that fits it, the first 200 KB partition, and B (125 KB) then goes into the 150 KB partition. With Worst-Fit, A goes into the largest partition, 300 KB, and B then goes into the 250 KB partition.
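A minimal C sketch of both placement strategies over the given partitions, assuming each partition can hold at most one process:

#include <stdio.h>

/* Return the index of the partition chosen for a request, or -1 if none fits.
   best = 1 selects Best-Fit (smallest fitting partition),
   best = 0 selects Worst-Fit (largest fitting partition). */
int pick(const int part[], int n, int size, int best) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (part[i] < size) continue;                 /* does not fit */
        if (chosen == -1 ||
            ( best && part[i] < part[chosen]) ||
            (!best && part[i] > part[chosen]))
            chosen = i;
    }
    return chosen;
}

int main(void) {
    for (int best = 1; best >= 0; best--) {
        int part[] = {150, 200, 250, 200, 300};       /* sizes in KB */
        int a = pick(part, 5, 175, best);             /* place A first */
        part[a] = 0;                                  /* partition now used */
        int b = pick(part, 5, 125, best);
        printf("%s: A -> partition %d, B -> partition %d\n",
               best ? "Best-Fit" : "Worst-Fit", a + 1, b + 1);
    }
    return 0;
}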
14. What kinds of problems arise with the sleep and wakeup mechanism of achieving mutual exclusion? Explain with a suitable code snippet.

Answer: The sleep and wakeup mechanism is used in concurrent programming to manage access to
shared resources and achieve mutual exclusion. It is commonly implemented using semaphores or
condition variables in various programming languages. However, this mechanism can encounter
several problems if not handled properly. Key issues include:

1. Lost Wakeup Problem:

- If a wakeup signal is delivered before the target process has actually gone to sleep (for example, in the window between its check of the condition and its call to sleep), the signal is lost, and the process may then sleep indefinitely waiting for a wakeup that has already happened.

2. Spurious Wakeups:

- Sometimes, a process can be woken up without any explicit signal, which can lead to incorrect
behavior if the process does not recheck the condition.

3. Deadlock:

- Improper use of sleep and wakeup mechanisms can lead to deadlock situations where processes
are indefinitely waiting for each other to release resources or signals.

4. Starvation:

- If the mechanism is not designed to ensure fairness, some processes might be continuously
denied access to the critical section, resulting in starvation.

Example Code and Problems

Consider a simple example using a semaphore-like mechanism with `sleep` and `wakeup` for mutual
exclusion in pseudocode:

// Global variables
Semaphore mutex = 1;    // initial value 1: at most one process in the critical section
Semaphore resource = 0; // initial value 0: resource not yet available

void process1() {
    wait(mutex);        // enter critical section
    // Critical section code
    signal(resource);   // signal that the resource is available
    signal(mutex);      // exit critical section
}

void process2() {
    wait(resource);     // wait for the resource to be available
    wait(mutex);        // enter critical section
    // Critical section code
    signal(mutex);      // exit critical section
}

Problems in the Example

1. Lost Wakeup Problem:

- If `process1` signals `resource` in the window after `process2` has decided to wait but before it has actually blocked, a naive sleep/wakeup implementation loses the wakeup, and `process2` may block indefinitely. To guard against this, `process2` should recheck the condition after being woken up:

void process2() {
    while (true) {
        wait(resource);     // wait for the resource to be available
        wait(mutex);        // enter critical section
        if (/* condition is not met */) {
            signal(mutex);  // exit critical section
            continue;       // recheck the condition
        }
        // Critical section code
        signal(mutex);      // exit critical section
        break;              // exit loop
    }
}

2. Spurious Wakeups:
- If `process2` is woken up without the `resource` semaphore being available, it needs to recheck
the condition. This is because spurious wakeups can occur, where processes wake up without the
condition being met.

3. Deadlock:

- If both `process1` and `process2` are trying to acquire `mutex` and are dependent on each other’s
signals, a deadlock can occur if neither process can proceed because both are waiting for each
other’s signals.

4. Starvation:

- If `process1` continuously releases the `resource` semaphore and `process2` is always waiting for
the `resource`, `process2` might be starved if there is no mechanism to ensure fair access.

Addressing the Issues

1. Condition Rechecking:

- Ensure that processes recheck their conditions after waking up to handle lost wakeup problems
and spurious wakeups.

2. Fairness:

- Use advanced synchronization mechanisms such as condition variables with proper signaling and
waiting mechanisms to ensure fairness and avoid starvation.

3. Proper Use of Semaphores:

- Avoid cyclic dependencies and ensure that semaphores are used correctly to prevent deadlock.

4. Use of Condition Variables:

- In languages with built-in support for condition variables (e.g., pthreads in C/C++, `std::condition_variable` in C++11, or `Condition` in Java), use these constructs to manage waiting and signaling more robustly; a sketch follows.
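A minimal sketch of the same handshake done safely with a pthreads mutex and condition variable; the while loop rechecks the condition after every wakeup, which handles both lost and spurious wakeups:

#include <pthread.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
int resource_available = 0;            /* the shared condition */

void consumer(void) {
    pthread_mutex_lock(&m);
    while (!resource_available)        /* recheck after every wakeup */
        pthread_cond_wait(&cv, &m);    /* atomically unlocks m and sleeps */
    resource_available = 0;            /* consume the resource */
    pthread_mutex_unlock(&m);
}

void producer(void) {
    pthread_mutex_lock(&m);
    resource_available = 1;            /* make the resource available */
    pthread_cond_signal(&cv);          /* wake one waiting consumer */
    pthread_mutex_unlock(&m);
}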

The sleep and wakeup mechanism can be effective for mutual exclusion but requires careful
handling to avoid problems like lost wakeups, spurious wakeups, deadlock, and starvation. By
rechecking conditions, ensuring fairness, and using appropriate synchronization constructs, these
issues can be mitigated.

15. Discuss the Ostrich Algorithm along with its merits and limitations.

Answer: The Ostrich Algorithm is a humorous and informal term used in the context of operating
systems and concurrent programming to describe a strategy for dealing with the problem of deadlock.
The term comes from the idea of an ostrich sticking its head in the sand to avoid danger—in other
words, ignoring the problem rather than trying to solve it.
The Ostrich Algorithm is a strategy where the system deliberately ignores the possibility of deadlock,
assuming that the problem is either unlikely to occur or is too complex to address effectively. Instead of
implementing deadlock detection or prevention mechanisms, the system chooses to focus on other
aspects of performance and stability.

Characteristics:

- No Deadlock Handling: The system does not include mechanisms for deadlock detection, prevention,
or recovery.

- Assumption of Rare Occurrence: It assumes that deadlocks are rare or unlikely to occur, so handling
them may not be worth the added complexity or overhead.

- Simplification: By ignoring the problem, the system avoids the complexity and overhead associated
with deadlock handling mechanisms.

Merit of the Ostrich Algorithm

1. Simplicity:

- Implementation Simplicity: The system remains simpler to design and implement because it avoids
the complexity of deadlock detection, prevention, or recovery algorithms.

- Reduced Overhead: There is no need for additional resources to manage deadlock situations, leading
to potentially lower overhead.

2. Performance Focus:

- Focus on Performance: The system can focus on optimizing other aspects of performance, such as
throughput and response time, without being bogged down by deadlock management.

3. Less Complexity in Code:

- Code Maintenance: The codebase is simpler and easier to maintain without the added complexity of
deadlock management mechanisms.

Limitation of the Ostrich Algorithm

1. Risk of Deadlocks:

- Unmanaged Deadlocks: If deadlocks do occur, they can lead to system failures or degraded
performance, as there is no built-in mechanism to detect or resolve them.

- Potential for System Halts: In extreme cases, deadlocks can cause system halts or unresponsiveness if
critical resources are indefinitely held.

2. Not Suitable for All Environments:


- Inappropriate for Critical Systems: The Ostrich Algorithm is not suitable for systems where deadlocks
could have severe consequences, such as real-time systems, financial systems, or critical infrastructure.

- Not Always Effective: For systems with high concurrency or complex resource management needs,
ignoring deadlock issues might not be practical or safe.

3. User Experience Impact:

- Degraded Performance: Users may experience performance issues or system hangs if deadlocks
occur, affecting overall user experience.

4. Not a Long-Term Solution:

- Temporary Measure: The Ostrich Algorithm may be acceptable as a temporary measure or for
systems with low concurrency, but it is not a sustainable solution for systems where deadlock is a
significant concern.

The Ostrich Algorithm represents a strategy where deadlock is ignored, focusing on simplicity and
performance. While it offers benefits in terms of implementation simplicity and reduced overhead, it
comes with significant risks, including unmanaged deadlocks and potential system failures. It is best
suited for systems where deadlocks are highly unlikely or where simplicity is prioritized over robustness.
For systems where deadlock management is critical, more sophisticated approaches, such as deadlock
prevention, detection, or recovery, are generally preferred.

16. Consider a system with a physical memory capacity of 32 KB. The system uses a paging scheme
with a page size of 8 KB. The page table is stored in memory, occupying 4 KB. Each page table
entry requires 8 bytes. Calculate the number of pages in the system, number of page table
entries and amount of memory consumed by the page table.

Answer: To address the problem, we need to calculate the following:

1. Number of Pages in the System

2. Number of Page Table Entries

3. Amount of Memory Consumed by the Page Table

Given:

- Physical memory capacity: 32 KB

- Page size: 8 KB

- Page table size: 4 KB

- Size of each page table entry: 8 bytes

1. Number of Pages in the System

The number of pages in the system can be calculated by dividing the total physical memory by the size of each page.

Number of pages = Physical memory / Page size = 32 KB / 8 KB = 4 pages

2. Number of Page Table Entries

The number of page table entries can be calculated by dividing the size of the page table by the size of each page table entry.

Number of entries = Page table size / Entry size = 4 KB / 8 bytes = 4096 bytes / 8 bytes = 512 entries

So, there are 512 page table entries.

3. Amount of Memory Consumed by the Page Table

To find the amount of memory consumed by the page table, multiply the number of page table entries by the size of each entry:

Memory consumed = Number of entries * Entry size = 512 * 8 bytes = 4096 bytes = 4 KB

So, the amount of memory consumed by the page table is 4 KB, consistent with the given page table size.

Summary

1. Number of Pages in the System: 4 pages

2. Number of Page Table Entries: 512 entries


3. Amount of Memory Consumed by the Page Table: 4 KB

17. Consider the following set of processes, with the lengths of the CPU-burst time and arrival time given in milliseconds.

Process | Burst Time | Arrival Time
P1 | 10 | 0
P2 | 15 | 2
P3 | 22 | 3
P4 | 16 | 5
P5 | 5 | 6

For the given data, draw Gantt charts that illustrate the execution of these processes using the SJF, SRTF, and Round Robin algorithms with a quantum of 4 milliseconds.

Answer:
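As an illustration for one policy, a minimal C sketch that prints the Round Robin schedule (quantum 4 ms) as a textual Gantt chart, assuming the usual convention that processes arriving during a time slice are enqueued ahead of the preempted process; SJF and SRTF follow similar event-driven simulations.

#include <stdio.h>

#define N 5
#define Q 4                                /* time quantum in ms */

int main(void) {
    int burst[N]  = {10, 15, 22, 16, 5};   /* P1..P5 burst times */
    int arrive[N] = {0, 2, 3, 5, 6};       /* P1..P5 arrival times */
    int left[N], queue[64], in_q[N] = {0};
    int head = 0, tail = 0, t = 0, done = 0;

    for (int i = 0; i < N; i++) left[i] = burst[i];
    queue[tail++] = 0; in_q[0] = 1;        /* only P1 has arrived at t = 0 */

    while (done < N) {
        int p = queue[head++];
        int run = left[p] < Q ? left[p] : Q;
        printf("| P%d %d-%d ", p + 1, t, t + run);
        t += run;
        left[p] -= run;
        /* enqueue processes that arrived during this slice ... */
        for (int i = 0; i < N; i++)
            if (!in_q[i] && arrive[i] <= t) { queue[tail++] = i; in_q[i] = 1; }
        /* ... then the preempted process goes to the back of the queue */
        if (left[p] > 0) queue[tail++] = p;
        else done++;
    }
    printf("|\n");                         /* all processes finish at t = 68 */
    return 0;
}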
