OS Material

The document discusses various aspects of operating systems, including resource management, system calls, deadlocks, scheduling, and memory management. It outlines the differences between hard and soft real-time operating systems, monolithic and microkernel architectures, and the benefits of multithreading. Additionally, it covers the purpose of interrupts, types of system calls, and Direct Memory Access modes, providing a comprehensive overview of operating system functionalities and structures.


OS CAT – 1 Material

1. List out some system calls required to control the communication system.

Examples of communication-related system calls include pipe() (anonymous pipes), shmget() and shmat() (shared memory), msgget(), msgsnd() and msgrcv() (message queues), and socket(), send() and recv() (network communication via sockets).

2. Is OS a Resource Manager? Justify your answer.

Resource management in an operating system is the process of efficiently managing all resources, such as the CPU, memory, input/output devices, and other hardware, among the various programs and processes running on the computer.

Resource management is important because a computer's resources are limited, and multiple processes or users may require access to the same resources, such as the CPU or memory, at the same time. The operating system must ensure that all processes get the resources they need to execute, without problems such as deadlocks.

Here are some terminologies related to resource management in an OS:

 Resource Allocation: The process of assigning the available resources to processes in the operating system. This can be done dynamically or statically.
 Resource: Anything that can be assigned to a process, either statically or dynamically. Examples include CPU time, memory, disk space, and network bandwidth.
 Resource Management: How resources are managed efficiently among different processes.
 Process: Any program or application that is being executed by the operating system; it has its own memory space, execution state, and set of system resources.
 Scheduling: The process of determining which of several competing processes should be allocated a particular resource at a given time.
 Deadlock: A condition in which two or more processes each hold some resources and wait for resources held by the others, so that no resource is freed and no process can proceed.
 Semaphore: A tool used to prevent race conditions. A semaphore is an integer variable that is accessed in a mutually exclusive manner by concurrent cooperating processes in order to achieve synchronization.
 Mutual Exclusion: The technique of preventing multiple processes from accessing the same resource simultaneously.
 Memory Management: The method used by the operating system to manage operations between main memory and disk during process execution.

3. Consider a logical address space of eight pages of 1024 words each, mapped onto a
physical memory of 32 frames. How many bits are there in the logical address and
in the physical address?
There are 8 pages in the logical address space, and 2^3 = 8, so 3 bits are needed for the page number.
Each page contains 1024 words, and 2^10 = 1024, so the offset within a page needs 10 bits.
Total bits in the logical address: 3 + 10 = 13 bits.
Now for the physical address: there are 32 frames, and 2^5 = 32, so 5 bits are needed for the frame number.
With the same 10-bit offset, the physical address is 5 + 10 = 15 bits.
(Addresses here count words, since the page size is given in words.)
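The same arithmetic works for any power-of-two page count, page size, and frame count; a small sketch (the function name is illustrative):

```python
import math

def address_bits(num_pages, words_per_page, num_frames):
    offset_bits = int(math.log2(words_per_page))         # bits for the offset within a page
    logical = int(math.log2(num_pages)) + offset_bits    # page-number bits + offset bits
    physical = int(math.log2(num_frames)) + offset_bits  # frame-number bits + offset bits
    return logical, physical

print(address_bits(8, 1024, 32))  # (13, 15)
```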

4. Define deadlock. What are the schemes used in operating systems to handle deadlocks?

A deadlock in operating systems occurs when two or more processes are unable to
proceed because each is waiting for the other to release a resource they need. This creates
a circular waiting condition where no progress can be made. Deadlocks can occur in
systems where resources are shared, such as in multiprocessing or distributed systems.

Here are some common schemes used in operating systems to handle deadlocks:

1. Deadlock Prevention.
2. Deadlock Avoidance.
3. Deadlock Detection and Recovery.
4. Resource Allocation Graphs.
5. Timeouts and Deadlock Prevention Protocols.
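Deadlock detection (scheme 3) is often described as finding a cycle in a wait-for graph. A minimal sketch, assuming each process maps to the list of processes it is waiting on (process names are made up):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph: process -> list of processes it waits on."""
    visiting, done = set(), set()

    def dfs(p):
        if p in visiting:       # back edge: a circular wait exists
            return True
        if p in done:
            return False
        visiting.add(p)
        if any(dfs(q) for q in wait_for.get(p, [])):
            return True
        visiting.remove(p)
        done.add(p)
        return False

    return any(dfs(p) for p in wait_for)

print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))  # True: circular wait
print(has_deadlock({"P1": ["P2"], "P2": []}))      # False: P2 is not waiting
```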

5. List the three requirements that must be satisfied by the critical-section problem.

The critical section problem refers to the situation in concurrent programming where multiple processes or threads share a common resource, and they need to execute a critical section of code that accesses or manipulates this resource. To ensure correct and synchronized access to the shared resource, three requirements must be satisfied:

1. Mutual Exclusion: Only one process or thread can execute the critical section at a time.
This ensures that concurrent access does not result in inconsistencies or conflicts due to
simultaneous modifications.
2. Progress: If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder section can participate in deciding which will enter the critical section next,
and this selection cannot be postponed indefinitely. In other words, progress ensures that
if a process is not in its critical section and there are other processes waiting to enter their
critical sections, one of those processes will eventually be allowed to enter.
3. Bounded Waiting (or No Starvation): There exists a bound, or limit, on the number of
times other processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted. This ensures that a
process will eventually be granted permission to enter its critical section, preventing
indefinite postponement or starvation.
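Mutual exclusion (requirement 1) can be demonstrated with Python's threading.Lock guarding a shared counter; the thread count and increment count below are arbitrary illustrative values:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:              # critical section: only one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: no increments are lost
```

Without the lock, concurrent read-modify-write on `counter` could lose updates; with it, the final count is always exactly the total number of increments.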

6. Differences between hard real-time and soft real-time operating systems:

Timing Guarantees
  Hard Real-Time OS: Strict and deterministic timing guarantees.
  Soft Real-Time OS: Guarantees are provided with lower priority and with some tolerance for missed deadlines.

Deadline Enforcement
  Hard Real-Time OS: Critical tasks must meet deadlines strictly.
  Soft Real-Time OS: Deadlines are important but may be missed occasionally without catastrophic consequences.

Task Priority
  Hard Real-Time OS: Critical tasks have the highest priority.
  Soft Real-Time OS: Task priorities are assigned based on importance, but critical tasks may not always have the highest priority.

Scheduling Algorithms
  Hard Real-Time OS: Typically use fixed-priority preemptive scheduling.
  Soft Real-Time OS: May use various scheduling algorithms such as priority-based or time-sharing scheduling.

Response Time
  Hard Real-Time OS: Very low and predictable response times.
  Soft Real-Time OS: Response times may vary depending on system load and resource availability.

Resource Management
  Hard Real-Time OS: Resources are managed to ensure timely execution of critical tasks.
  Soft Real-Time OS: Resources are managed to optimize overall system performance, with some consideration for timing constraints.

Examples
  Hard Real-Time OS: Aircraft flight control systems, medical devices, automotive control systems.
  Soft Real-Time OS: Multimedia applications, online gaming, interactive systems.

7. Differences between monolithic and microkernel architectures:


Architecture
  Monolithic Kernel: All OS services run in kernel space.
  Microkernel: Only essential services run in kernel space; other services run in user space.

Size
  Monolithic Kernel: Typically larger due to inclusion of all services within the kernel.
  Microkernel: Smaller kernel size due to minimalistic design.

Complexity
  Monolithic Kernel: Higher complexity as all services are tightly integrated within the kernel.
  Microkernel: Lower complexity as most services are implemented as separate user-space processes.

Modularity
  Monolithic Kernel: Lower modularity; difficult to add or remove functionalities without impacting kernel stability.
  Microkernel: Higher modularity; easier to add or remove services without affecting kernel stability.

Reliability
  Monolithic Kernel: More prone to system crashes due to tightly coupled components.
  Microkernel: Generally more reliable due to isolated components and fault containment.

Performance
  Monolithic Kernel: Potentially better performance due to direct access to kernel services without inter-process communication overhead.
  Microkernel: May have slightly lower performance due to communication overhead between user-space servers and the kernel.

Security
  Monolithic Kernel: Less secure, as a bug or vulnerability in any component can impact the entire system.
  Microkernel: Potentially more secure due to isolation of services; vulnerabilities are often contained within individual user-space servers.

Examples
  Monolithic Kernel: Linux, Unix, FreeBSD.
  Microkernel: MINIX, QNX, L4 microkernel, Mach.

8. Can a multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single-processor system?

Answer: A multithreaded program consisting of multiple user-level threads cannot make use of the different processors in a multiprocessor system simultaneously. The operating system sees only a single process and will not schedule the different threads of the process on separate processors. Consequently, there is no performance benefit associated with executing multiple user-level threads on a multiprocessor system.
9. What is the purpose of interrupts? What are the differences between a trap and an
interrupt? Can traps be generated intentionally by a user program? If so, for what
purpose?
Answer: An interrupt is a hardware-generated change of flow within the system. An interrupt handler is summoned to deal with the cause of the interrupt; control is then returned to the interrupted context and instruction. A trap is a software-generated interrupt. An interrupt can be used to signal the completion of an I/O operation, obviating the need for device polling. Traps can be generated intentionally by a user program: for example, to call operating system routines (system calls) or to catch arithmetic errors.

10. Differentiate Short-term, Medium- Term and Long-Term Scheduling.

Purpose
  Short-Term Scheduler: Allocates the CPU to ready processes frequently.
  Medium-Term Scheduler: Manages process memory by swapping processes.
  Long-Term Scheduler: Controls admission of processes into the system.

Functionality
  Short-Term Scheduler: Selects which process from the ready queue will execute next.
  Medium-Term Scheduler: Decides which processes should be swapped in and out of main memory based on memory requirements and priority.
  Long-Term Scheduler: Selects which processes from the pool of incoming processes should be admitted into the system for execution.

Frequency
  Short-Term Scheduler: Executes frequently, typically after every time slice or when a process completes its CPU burst.
  Medium-Term Scheduler: Executes less frequently than the short-term scheduler, typically when the system encounters memory pressure or a process is blocked for a long time.
  Long-Term Scheduler: Executes infrequently, typically when the system wants to control the degree of multiprogramming or when a new process arrives.

Example
  Short-Term Scheduler: Round-robin scheduling algorithm.
  Medium-Term Scheduler: Memory management (swapping) in Windows OS.
  Long-Term Scheduler: Admission scheduling in batch processing systems.
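The round-robin example for the short-term scheduler can be sketched as a small simulation; the process names, burst times, and quantum below are made-up values:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which processes finish under round-robin scheduling."""
    queue = deque(bursts.items())   # ready queue of (process name, remaining burst)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:    # process completes within this time slice
            finished.append(name)
        else:                       # preempted: go to the back of the ready queue
            queue.append((name, remaining - quantum))
    return finished

print(round_robin({"P1": 5, "P2": 3, "P3": 8}, 4))  # ['P2', 'P1', 'P3']
```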

11. Benefits of Multi-Threading

 Improved Responsiveness
 Enhanced Performance
 Resource Sharing
 Simplified Design
 Concurrency Control
 Asynchronous I/O
 Scalability
 Fault Isolation
 Platform Independence

12. Various types of I/O communication

 Synchronous I/O
 Asynchronous I/O
 Block I/O
 Character I/O
 Memory-mapped I/O
 Direct Memory Access (DMA)
 Interrupt-driven I/O
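A user-level analogue of memory mapping can be shown with Python's mmap module, where reads and writes to the mapped region go to the file's pages rather than through explicit read/write calls; the helper name is illustrative:

```python
import mmap
import os
import tempfile

def mmap_rewrite(data, patch):
    """Write data to a temporary file, then patch its start through a memory mapping."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        with open(path, "r+b") as f:
            mm = mmap.mmap(f.fileno(), 0)  # map the whole file into memory
            mm[:len(patch)] = patch        # writes go through the mapping to the file
            mm.flush()
            mm.close()
        with open(path, "rb") as f:
            return f.read()
    finally:
        os.remove(path)

print(mmap_rewrite(b"hello world", b"HELLO"))  # b'HELLO world'
```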

13. Instruction Cycle:

The instruction cycle is the basic operational cycle of the CPU: the processor repeatedly fetches an instruction from memory (at the address held in the program counter), decodes it, executes it, and then checks for pending interrupts before fetching the next instruction.

14. Consider the following code segment:

int main(void) {
    for (int i = 2; i <= 5; i++) {
        fork();
    }
    printf("GOCLASSES");
}

How many times is "GOCLASSES" printed by the above code?

The loop body runs for i = 2, 3, 4, 5, so fork() is called four times; the code is equivalent to:

int main(void)
{
    fork();
    fork();
    fork();
    fork();
    printf("GOCLASSES");
}

We know the total number of processes = 2^n, where n = number of forks.

Therefore total processes = 2^4 = 16, and "GOCLASSES" is printed 16 times.
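The doubling argument can be checked with a tiny simulation (no actual processes are created; the function name is illustrative):

```python
def processes_after(forks):
    """Each successful fork() doubles the process count: parent and child both continue."""
    processes = 1
    for _ in range(forks):
        processes *= 2
    return processes

# the loop for (i = 2; i <= 5; i++) executes fork() four times
print(processes_after(4))  # 16
```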

15. What does the CPU do when there are no user programs to run?

When there are no user programs to run, the CPU typically enters an idle state. In this state, the
CPU may perform various tasks depending on the specific system and its configuration. Some
common actions the CPU may take when idle include:

1. Halting Execution
2. Executing Idle Tasks
3. Waiting for Interrupts
4. Performing System Maintenance
5. Executing Operating System Processes
6. Power Management

Even when there are no user programs running, the CPU may still be engaged in various
activities to ensure the smooth operation of the system, handle system-level tasks, and
conserve energy when possible.

16. What system calls have to be executed by a command interpreter or shell in order to
start a new process?

 fork(): The fork() system call is used to create a new process, known as the child process,
which is an exact copy of the parent process (the shell).
 exec(): After forking a new process, the shell typically replaces the child process with a
new program using one of the exec() family of system calls, such as execve(), execl(),
execvp(), etc.
 wait(): The wait() system call is used by the parent process (the shell) to wait for the
child process to terminate.
 exit(): Once the child process has completed its execution, it typically exits using the
exit() system call.
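Put together, the shell's spawn sequence can be sketched with Python's os wrappers over these calls (a POSIX-only sketch; os.waitstatus_to_exitcode needs Python 3.9+, and the commands true and false are assumed to be on PATH):

```python
import os

def run_command(argv):
    """Shell-style spawn: fork(), exec() in the child, wait() in the parent."""
    pid = os.fork()
    if pid == 0:                      # child process
        try:
            os.execvp(argv[0], argv)  # replace the child's image; returns only on error
        finally:
            os._exit(127)             # exec failed: exit without running cleanup
    _, status = os.waitpid(pid, 0)    # parent waits for the child to terminate
    return os.waitstatus_to_exitcode(status)

print(run_command(["true"]))   # 0
print(run_command(["false"]))  # 1
```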

17. Draw and explain the structure of Direct Memory Access and cache in detail.
Modes of Operation

Direct Memory Access works differently in different modes of operation.

Burst Mode: In burst mode, the full data block is transmitted in a continuous sequence. Once the CPU grants the DMA controller access to the system bus, the DMA controller transfers all bytes of the data block before releasing control of the system buses back to the CPU, but this causes the CPU to be inactive for a considerably long time. This mode is also called "Block Transfer Mode".

Cycle Stealing Mode: The cycle stealing mode is used in a system where the CPU cannot be
disabled for the length of time required for the burst transfer mode. In the cycle stealing mode,
the DMA controller obtains the access to the system bus by using the BR (Bus Request) and BG
(Bus Grant) signals, which are the same as the burst mode. These two signals control the
interface between the CPU and the DMA controller.

On the one hand, in the cycle stealing mode, the data block transmission speed is not as fast as in
the burst mode, but on the other hand, the CPU idle time is not as long as in the burst mode.

Transparent Mode: The transparent mode takes the longest time to transfer data blocks, but it is
also the most efficient mode in terms of overall system performance. In transparent mode, the
Direct Memory Access controller transfers data only when the CPU performs operations that do
not use the system buses.

The main advantage of transparent mode is that the CPU never stops executing its programs, and
Direct Memory Access transfers are free in terms of time, while the disadvantage is that the
hardware needs to determine when the CPU is not using the system buses, which can be
complicated. This is also called “hidden DMA data transfer mode”.

18. Summarize the Operating System Structure in detail with neat sketch.
Micro Kernel Approaches

A microkernel is a type of operating system kernel that is designed to provide only the most
basic services required for an operating system to function, such as memory management and
process scheduling. Other services, such as device drivers and file systems, are implemented as
user-level processes that communicate with the microkernel via message passing. This design
allows the operating system to be more modular and flexible than traditional monolithic kernels,
which implement all operating system services in kernel space.
19. Illustrate the concept of Inter Process Communication.
20. Infer the concept of contiguous memory allocation with a neat diagram.
21. Summarize the various types of system calls with an example for each.

A system call is a method for a computer program to request a service from the kernel of the
operating system on which it is running. A system call is a method of interacting with the
operating system via programs. A system call is a request from computer software to an
operating system's kernel.

The Application Program Interface (API) connects the operating system's functions to user
programs. It acts as a link between the operating system and a process, allowing user-level
programs to request operating system services. The kernel system can only be accessed using
system calls. System calls are required for any programs that use resources.
The following list categorizes system calls based on their functionalities:
1. Process Control
 System calls play an essential role in controlling system processes. They enable you to:
 Create new processes or terminate existing ones.
 Load and execute programs within a process's space.
 Schedule processes and set execution attributes, such as priority.
 Wait for a process to complete or signal upon its completion.
2. File Management
 System calls support a wide array of file operations, such as:
 Reading from or writing to files.
 Opening and closing files.
 Deleting or modifying file attributes.
 Moving or renaming files.
3. Device Management
 System calls can be used to facilitate device management by:
 Requesting device access and releasing it after use.
 Setting device attributes or parameters.
 Reading from or writing to devices.
 Mapping logical device names to physical devices.
4. Information Maintenance
This type of system call enables processes to:
 Retrieve or modify various system attributes.
 Set the system date and time.
 Query system performance metrics.
5. Communication
The communication call type facilitates:
 Sending or receiving messages between processes.
 Synchronizing actions between user processes.
 Establishing shared memory regions for inter-process communication.
 Networking via sockets.
6. Security and Access Control
System calls contribute to security and access control by:
 Determining which processes or users get access to specific resources and who can read,
write, and execute resources.
 Facilitating user authentication procedures.
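The file-management category can be illustrated with Python's os module, whose functions are thin wrappers over the corresponding system calls (the file contents and helper name here are arbitrary):

```python
import os
import tempfile

def file_syscall_demo(payload):
    """Exercise write/close/rename/open/read/unlink via their os wrappers."""
    fd, path = tempfile.mkstemp()
    os.write(fd, payload)                     # write() system call
    os.close(fd)                              # close() system call
    os.rename(path, path + ".bak")            # rename() system call
    fd = os.open(path + ".bak", os.O_RDONLY)  # open() system call
    data = os.read(fd, len(payload))          # read() system call
    os.close(fd)
    os.remove(path + ".bak")                  # unlink() system call
    return data

print(file_syscall_demo(b"hello, syscalls"))  # b'hello, syscalls'
```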

22. Justify the critical section problem and write the algorithm for producer consumer
problem.

 Mutual Exclusion: Only one process should be allowed to execute in the critical section
at any given time to prevent concurrent access and potential data corruption.
 Progress: If no process is executing in the critical section and some processes are waiting
to enter, only those processes that are not executing in their remainder sections should be
allowed to enter the critical section. This ensures that progress is made and prevents
deadlock.
 Bounded Waiting: There should be a limit on the number of times other processes are
allowed to enter the critical section after a process has made a request to enter. This
prevents starvation, ensuring that all processes eventually get access to the critical
section.

Algorithm for the Producer-Consumer Problem: The Producer-Consumer Problem is a classic synchronization problem involving two types of processes: producers that produce items and place them into a shared buffer, and consumers that consume items from the buffer. The problem arises when producers and consumers access the shared buffer concurrently, leading to potential synchronization issues such as race conditions and buffer overflows or underflows. Here is a solution to the Producer-Consumer Problem using semaphores:

import threading

# Shared variables
N = 10                          # buffer capacity
buffer = []                     # shared buffer
mutex = threading.Semaphore(1)  # semaphore to enforce mutual exclusion
full = threading.Semaphore(0)   # counts the number of full slots in the buffer
empty = threading.Semaphore(N)  # counts the number of empty slots in the buffer

# Producer process
def producer():
    while True:
        item = produce_item()   # produce an item
        empty.acquire()         # wait if the buffer is full
        mutex.acquire()         # enter critical section
        buffer.append(item)     # add the item to the buffer
        mutex.release()         # exit critical section
        full.release()          # one more full slot in the buffer

# Consumer process
def consumer():
    while True:
        full.acquire()          # wait if the buffer is empty
        mutex.acquire()         # enter critical section
        item = buffer.pop(0)    # remove an item from the buffer
        mutex.release()         # exit critical section
        empty.release()         # one more empty slot in the buffer
        consume_item(item)      # consume the item

(Here produce_item and consume_item are application-supplied functions.)

The critical section problem arises in concurrent computing when multiple processes or threads
access a shared resource or section of code and at least one process needs to modify the resource.
The critical section problem involves ensuring that only one process can access the critical
section at a time to prevent race conditions, data corruption, and other synchronization issues.

23. It is sometimes difficult to achieve a layered approach if two components of the operating system are dependent on each other. Identify a scenario in which it is unclear how to layer two system components that require tight coupling of their functionalities.

The virtual memory subsystem and the storage subsystem are typically tightly coupled and require careful design in a layered system due to the following interactions. Many systems allow files to be mapped into the virtual memory space of an executing process. On the other hand, the virtual memory subsystem typically uses the storage system to provide the backing store for pages that do not currently reside in memory. Also, updates to the file system are sometimes buffered in physical memory before they are flushed to disk, thereby requiring careful coordination of the usage of memory between the virtual memory subsystem and the file system.

A computer can address more memory than the amount physically installed on the system. This
extra memory is actually called virtual memory and it is a section of a hard disk that's set up to
emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical memory.
Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by
using disk. Second, it allows us to have memory protection, because each virtual address is
translated to a physical address.
Following are the situations, when entire program is not required to be loaded fully in main
memory.

 User-written error handling routines are used only when an error occurs in the data or computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.

The ability to execute a program that is only partially in memory would confer many benefits:

 Fewer I/O operations would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory that is available.
 Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses.

Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system, and demand segmentation can also be used to provide virtual memory.
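Demand paging can be sketched with a FIFO page-replacement simulation; the reference string below is the classic example used to show Belady's anomaly:

```python
from collections import deque

def count_page_faults(references, frames):
    """FIFO demand paging: a page is loaded only when first referenced (a page fault)."""
    resident = deque()
    faults = 0
    for page in references:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.popleft()    # evict the page resident the longest
            resident.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_page_faults(refs, 3))  # 9
print(count_page_faults(refs, 4))  # 10
```

Note that this reference string causes 9 faults with 3 frames but 10 faults with 4 frames, which is Belady's anomaly for FIFO replacement.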

24. CPU Scheduling algorithms (with and without arrival times, including fractional values such as 5.9). Ref. Class Notes.
25. Banker's Algorithm (Need matrix; safe or not? Can a new request be granted? Justify.) Ref. Class Notes.
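The "safe or not?" part of the Banker's Algorithm can be sketched as the safety check below; the allocation and need matrices are the classic five-process, three-resource textbook instance, used here as an assumed example:

```python
def is_safe(available, allocation, need):
    """Banker's safety check: is there an order in which every process can finish?"""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # process i can obtain its remaining need, run, and release everything
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# classic five-process, three-resource example (Need = Max - Allocation)
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # True: a safe sequence exists
print(is_safe([0, 0, 0], allocation, need))  # False: no process can finish
```

A new request is granted only if, after tentatively subtracting it from Available and adding it to the process's Allocation, this same safety check still returns True.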
