OS
An operating system (OS) is system software that manages computer hardware and software resources and provides common services
for programs.
• Batch OS
• Time-sharing OS
• Distributed OS
• Real-time OS
• Embedded OS
To act as an intermediary between users and computer hardware, managing resources like CPU,
memory, storage, and I/O devices.
4. What is a Kernel?
The core part of the OS responsible for managing system resources and communication between
hardware and software.
5. What is Multitasking?
The ability of an OS to run multiple tasks (processes) concurrently by rapidly switching shared resources, such as the
CPU, among them.
6. What is Multiprocessing?
The use of two or more CPUs within a single computer system to execute multiple processes.
Interfaces between user-level applications and the OS that allow programs to request services like
file handling or process management.
9. What is Virtual Memory?
A memory management technique that uses both physical RAM and disk space to give an application
the illusion of having more memory.
A memory management scheme that divides memory into fixed-sized blocks (pages) for efficient
allocation.
A memory management technique that divides the memory into variable-sized segments based on
the logical division of a program.
• New
• Ready
• Running
• Waiting
• Terminated
The process of saving the state of a process and loading the state of another during multitasking.
A situation where two or more processes cannot proceed because each is waiting for resources held
by the other.
17. What are the conditions for Deadlock?
• Mutual Exclusion
• Hold and Wait
• No Preemption
• Circular Wait
When a process waits indefinitely for resources because higher-priority processes are always given
preference.
An OS component that decides which process will run next on the CPU.
• FCFS (First-Come-First-Serve)
• Round Robin
• Priority Scheduling
A signal to the CPU indicating an event that needs immediate attention, like hardware failure or I/O
completion.
25. What are I/O Bound and CPU Bound Processes?
I/O bound processes spend most of their time waiting for I/O operations, while CPU bound
processes spend most of their time performing computations.
A condition where excessive paging reduces system performance as the CPU spends more time
swapping pages.
Moving processes between main memory and secondary storage to manage memory usage.
A technique of placing data in a temporary buffer for efficient device management, commonly used
in printing.
A small, high-speed memory between the CPU and RAM to store frequently accessed data.
The process of loading the operating system into memory when the computer starts.
• Monolithic kernel: All OS services run together in kernel space.
• Microkernel: Minimal OS functionality in the kernel; other services run in user space.
Redundant Array of Independent Disks, a data storage virtualization technology for redundancy and
performance.
34. What is Disk Scheduling?
Techniques to optimize the order in which disk I/O requests are serviced.
A process can be interrupted and moved to the ready queue to allow a higher-priority process to
execute.
Increased page faults when increasing the number of page frames in certain page replacement
algorithms (such as FIFO); this is known as Belady's Anomaly.
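A short sketch of this effect, known as Belady's Anomaly: under FIFO replacement, a classic reference string produces more page faults with four frames than with three. The reference string below is the standard textbook example; the function is an illustrative simulation, not part of any real OS.

```python
# Sketch: FIFO page replacement demonstrating Belady's Anomaly —
# more frames can produce MORE page faults for certain reference strings.
from collections import deque

def fifo_page_faults(refs, num_frames):
    frames = deque()               # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the oldest resident page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic anomaly string
print(fifo_page_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_page_faults(refs, 4))  # 10 faults with 4 frames
```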
• FIFO
• Optimal
A request from a user program to the OS for a service, like file operations or process management.
47. What is the difference between User Mode and Kernel Mode?
In user mode, applications run with restricted access to hardware and memory; in kernel mode,
the OS runs with full, privileged access to all system resources. A system call switches the
CPU from user mode to kernel mode.
A command-line interpreter that provides an interface between the user and the OS.
A technique where devices transfer data to/from memory without CPU intervention.
57. What is the difference between Hard and Soft Real-Time Systems?
In a hard real-time system, missing a deadline is a system failure (e.g., airbag control); in a
soft real-time system, deadlines matter but an occasional miss only degrades performance
(e.g., video streaming).
1. What's the main purpose of an OS? What are the different types of OS?
The main purpose of an OS is to execute user programs and to make the computer easier for
users to understand, interact with, and run applications on. It ensures that the computer
system performs well by managing all computational activities, and it manages computer
memory, processes, and the operation of all hardware and software.
Types of OS: Batch, Time-sharing, Distributed, Real-time, and Embedded operating systems.
A multiprocessor system is a system that includes two or more CPUs. It processes different
computer programs at the same time, usually with two or more CPUs sharing a single memory.
Benefits:
• Such systems are widely used to improve performance when running multiple programs
concurrently.
• By increasing the number of processors, more tasks can be completed in unit time.
• Throughput increases considerably, and the system is cost-effective because all
processors share the same resources.
• It improves the reliability of the computer system.
It is the program that initializes the OS during startup, i.e., the first code executed
whenever the computer system starts up. The OS is loaded through this bootstrapping
process, commonly known as booting, and depends on the bootstrap program to start
correctly. The bootstrap program is stored in the boot blocks at a fixed location on the
disk. It locates the kernel, loads it into main memory, and starts its execution.
It refers to the ability to keep more than one program in memory on a single-processor
machine. This technique was introduced to overcome the underutilization of the
CPU and main memory. In simple words, it is the coordinated, concurrent execution of various
programs on a single processor (CPU). The main objective of
multiprogramming is to have some process running at all times. It
improves CPU utilization by organizing jobs so that the CPU always has
one to execute.
Multitasking vs. Multiprocessing:
• Multitasking performs more than one task at a time using a single processor;
multiprocessing does so using multiple processors.
• In multitasking, the number of CPUs is only one; in multiprocessing, it is more than one.
• Multitasking is more economical; multiprocessing is less economical.
• Multitasking is less efficient than multiprocessing.
• Multitasking allows fast switching among various tasks; multiprocessing allows smooth
processing of multiple tasks at once.
• Multitasking requires more time to execute tasks; multiprocessing requires less time for
job processing.
1. New
A program that is going to be picked up by the OS into main memory is called a new
process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for
the CPU to be assigned. The OS picks new processes from secondary memory and
puts them in main memory.
The processes that are ready for execution and reside in main memory are called
ready-state processes. There can be many processes in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of
running processes for a particular time will always be one. If we have n processors in the
system then we can have n processes running simultaneously.
4. Block or wait
From the running state, a process can transition to the block or wait state
depending on the scheduling algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input from the user,
the OS moves the process to the block or wait state and assigns the CPU to other
processes.
5. Completion or termination
When a process finishes its execution, it enters the termination state. The entire context
of the process (its Process Control Block) is deleted, and the process is terminated by
the operating system.
6. Suspend ready
A process in the ready state that is moved from main memory to secondary memory
due to a lack of resources (mainly primary memory) is said to be in the suspend ready state.
If main memory is full and a higher-priority process arrives for execution, the
OS has to make room for it in main memory by moving a lower-priority
process out to secondary memory. Suspend ready processes remain in
secondary memory until main memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked
process that is waiting for some resource in main memory. Since it is already
waiting for a resource to become available, it may as well wait in secondary
memory and make room for a higher-priority process. These processes complete their
execution once main memory becomes available and their wait is finished.
In multiprogramming systems, the operating system schedules processes on the CPU
to maximize its utilization; this procedure is called CPU scheduling. The
operating system uses various scheduling algorithms to schedule processes.
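As a small illustration of CPU scheduling, the sketch below computes waiting times under FCFS (First-Come-First-Serve), the simplest of the algorithms mentioned earlier. The process burst times are a hypothetical textbook-style example; all processes are assumed to arrive at time 0.

```python
# Sketch: average waiting time under FCFS scheduling
# (all processes assumed to arrive at time 0).

def fcfs_waiting_times(burst_times):
    """Each process waits for the total burst time of every
    process scheduled before it."""
    waiting = []
    elapsed = 0
    for burst in burst_times:
        waiting.append(elapsed)
        elapsed += burst
    return waiting

bursts = [24, 3, 3]               # hypothetical: P1=24, P2=3, P3=3
waits = fcfs_waiting_times(bursts)
print(waits)                      # [0, 24, 27]
print(sum(waits) / len(waits))    # average waiting time = 17.0
```

Note the "convoy effect": the long first burst makes the two short processes wait, which is why FCFS often yields poor average waiting times.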
11.What are the conditions for a Race Condition, and how can it be avoided?
Conditions: A race condition occurs when:
• Two or more processes access shared resources simultaneously.
• The execution order affects the program's outcome.
Avoidance: Use synchronization mechanisms like locks, semaphores, or monitors to control
resource access.
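A minimal sketch of the avoidance technique above, using a lock to protect a shared counter. Without the lock, concurrent `counter += 1` updates can interleave (load, add, store) and lose increments; with it, the final value is deterministic.

```python
# Sketch: avoiding a race condition on a shared counter with a lock.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 — the lock serializes every update
```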
12.What is a Semaphore, and how does it work?
A semaphore is a synchronization tool used to manage access to shared resources.
Types: Binary (0 or 1) and Counting (any value).
Working:
Wait(P): Decreases the semaphore value; blocks if the value is zero.
Signal(V): Increases the semaphore value, allowing blocked processes to proceed.
13.What is the difference between Mutex and Semaphore?
A mutex is a locking mechanism: only one thread may hold it at a time, and it must be
released by the same thread that acquired it. A semaphore is a signaling mechanism with a
counter: it can admit multiple threads (counting semaphore), and it may be signaled by a
different thread than the one that waited.
19. What are the differences between Physical and Virtual Memory?
Physical Memory: Refers to the actual hardware RAM installed in the system.
Virtual Memory: A memory management technique that uses a combination of physical
RAM and disk space to simulate more memory than physically available.
Paging is a memory management scheme that divides physical memory into fixed-size blocks
called pages and logical memory into blocks of the same size. It eliminates external
fragmentation and allows processes to use non-contiguous memory.
A page fault occurs when a process tries to access a page not currently in physical memory.
Handling:
1. The OS pauses the process.
2. The required page is brought from disk into memory.
3. The process is restarted.
Thrashing: Excessive paging activity occurs when a system spends more time swapping
pages than executing processes.
Prevention:
• Adjust the degree of multiprogramming.
• Increase physical memory.
• Use working-set models to allocate sufficient memory to processes.
26. Paging in Memory Management full details
Paging is a memory management scheme that eliminates the need for contiguous allocation
of physical memory, making memory utilization more efficient. It divides the process's logical
memory and physical memory into fixed-size blocks called pages and frames, respectively.
Steps in Paging
1. The CPU generates a logical address.
2. The page number is extracted and used to index the page table.
3. The page table provides the corresponding frame number in physical memory.
4. The frame number is combined with the offset to get the physical address.
5. The physical address is used to access the desired location in RAM.
Example of Paging
Given:
• Logical Memory = 16 pages (each page is 1 KB).
• Physical Memory = 8 frames (each frame is 1 KB).
Logical to Physical Mapping (Page → Frame):
• Page 0 → Frame 5
• Page 1 → Frame 3
• Page 2 → Frame 1
• Page 3 → Frame 6
• ...
Logical Address:
• CPU generates a logical address: Page Number = 2, Offset = 200 bytes.
Translation:
1. The page table shows Page 2 maps to Frame 1.
2. Physical Address = Frame 1 × 1024 + Offset 200.
3. Physical Address = 1 × 1024 + 200 = 1224.
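The translation steps above can be sketched in a few lines. The page table below is the hypothetical mapping from the example (1 KB pages); a real MMU performs this lookup in hardware.

```python
# Sketch: logical-to-physical address translation with paging,
# using the example's hypothetical page table and 1 KB pages.

PAGE_SIZE = 1024                        # 1 KB pages and frames
page_table = {0: 5, 1: 3, 2: 1, 3: 6}   # page number -> frame number

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # extract page number
    offset = logical_address % PAGE_SIZE   # extract offset within page
    frame = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset      # combine frame and offset

# Page 2, offset 200 -> logical address 2*1024 + 200 = 2248
print(translate(2 * 1024 + 200))  # 1*1024 + 200 = 1224
```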
Advantages of Paging
1. Eliminates External Fragmentation: Any free frame can be allocated.
2. Efficient Memory Use: No need for contiguous allocation.
3. Ease of Swapping: Pages can be swapped in and out of memory independently.
Disadvantages of Paging
1. Internal Fragmentation: Some memory within a frame may remain unused.
2. Overhead: Maintaining and accessing the page table requires additional time and
memory.
3. Page Faults: If the required page is not in memory, fetching it from disk can slow
down execution.