
Operating System Concepts - Interview Guide
1. Operating System (OS) & Types

● Definition: Software that manages computer hardware and software resources and
provides common services for applications.
● Types:
○ Batch OS: Processes batches of jobs with no interaction.
○ Time-Sharing OS: Allows multiple users to interact with programs simultaneously.
○ Distributed OS: Manages a group of separate computers and makes them act as
one system.

2. Purpose of an OS

● Resource Management: Allocates CPU, memory, and I/O devices to processes.


● Task Management: Manages tasks like process scheduling, multitasking, and resource
sharing.
● User Interface: Provides a user-friendly way to interact with the system (GUI or
command-line).

3. Real-Time Operating System (RTOS) & Types

● Definition: An OS designed for real-time applications where responses are needed within
a specific time.
● Types:
○ Hard RTOS: Missing a deadline is a system failure (e.g., airbag controllers).
○ Soft RTOS: Deadlines matter, but an occasional miss only degrades quality (e.g., media
streaming).

4. Program, Process and Thread

● A process is essentially a program that is currently under execution.


● The main function of an Operating System (OS) is to manage and handle all processes.

When a program is loaded into memory and becomes a process, it can be divided into four
sections:
● Stack: Stores temporary data such as function parameters, return addresses, and local
variables.
● Heap: Dynamically allocated memory during the execution of the program.
● Text: Contains the compiled code of the program.
● Data: Holds global and static variables.

There are two types of processes: Operating System Processes and User Processes.

Program | Process
Contains a set of instructions designed to complete a specific task. | Instance of an executing program.
Passive entity that resides in secondary memory. | Active entity created during execution, loaded into main memory.
Exists in a single place until deleted. | Exists for a limited time, terminating after task completion.
Static entity with no resource requirements. | Dynamic entity with high resource requirements, needing CPU, memory, and I/O.

● Process Table: The operating system manages processes, allocating processor time
and resources like memory and disks. The process table is maintained by the OS
to keep track of all processes, including their states and resources they are using.
● Thread: A thread is a single sequence of execution within a process, often
called a lightweight process. Threads improve application performance through
parallelism. For example, a web browser can use multiple threads for different tabs.

Process | Thread
A process is a program in execution. | A thread is a smaller unit within a process.
Processes are isolated from each other. | Threads share memory within the same process.
Process communication is less efficient. | Thread communication is more efficient.
Processes are considered heavyweight. | Threads are considered lightweight.
Process switching requires OS intervention. | Thread switching does not require OS intervention.
One blocked process doesn't affect others. | A blocked thread can affect other threads in the same process.
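
To make the distinction concrete, here is a minimal POSIX threads sketch (assuming a Unix-like system with pthreads; error handling omitted). Both threads run inside one process and share its address space:

#include <pthread.h>
#include <stdio.h>

/* Each thread runs worker() inside the same process and can share any
   global data; separate processes would need IPC instead. */
void *worker(void *arg) {
    printf("thread %d running in the same address space\n", *(int *)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}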

5. PCB, Socket, Shell, Kernel, and Monolithic Kernel

● Process Control Block (PCB): Tracks the execution status of a process, containing
information such as registers, priority, and execution state. The process table is an
array of PCBs, each representing a process in the system.
● Socket: An endpoint for sending/receiving data over a network.
● Shell: Interface to access OS services, either via command-line or GUI.
● Kernel: The kernel is the core component of an operating system, managing
memory, CPU time, and hardware operations. It acts as a bridge
between applications and hardware-level data processing.

Kernel | Operating System (OS)
Central component of the OS. | System software that manages system resources.
Converts user commands into machine-level instructions. | Manages overall system resources and operations.
Acts as an interface between hardware and applications. | Acts as an interface between hardware and the user.
Handles process, file, and device management, and I/O. | Provides data security, user access control, and privacy.
Types include Microkernel, Monolithic kernel, etc. | Types include Single- and Multiprogramming, Distributed OS, Real-time OS.

● Monolithic Kernel: A monolithic kernel manages system resources and implements user
and kernel services in the same address space, making OS execution faster but increasing
its size. It handles CPU scheduling, memory management, file management, and other
functions through system calls.

Microkernel | Monolithic Kernel
Kernel and user services run in separate address spaces. | Kernel and user services run in the same address space.
Smaller in size. | Larger in size.
More easily extensible. | Harder to extend.
A service crash does not affect the kernel's operation. | A service crash can bring the entire system down.
Uses message queues for inter-process communication (IPC). | Uses signals and sockets for IPC.

6. Multitasking vs. Multithreading


Multi-threading | Multi-tasking
Multiple threads execute at the same time within the same or different parts of a program. | Several programs (or tasks) execute concurrently.
The CPU switches between multiple threads. | The CPU switches between multiple tasks or processes.
Lightweight: involves parts of a single process. | Heavyweight: involves multiple processes.
A feature of the process, allowing it to split into threads. | A feature of the OS, enabling it to handle multiple applications.
Shares computing resources among threads of a single process. | Shares computing resources (CPU, memory, devices) among multiple processes.

7. Multitasking vs. Multiprocessing

Multitasking | Multiprocessing
Performs multiple tasks using a single processor. | Performs multiple tasks using multiple processors.
Has only one CPU. | Has more than one CPU.
More economical. | Less economical.
Less efficient than multiprocessing. | More efficient than multitasking.
Allows fast switching between tasks. | Allows smooth simultaneous task processing.
Requires more time to execute tasks. | Requires less time for task processing.

8. Process States and Queues

● Process States:
Different states that a process goes through include:
○ New State: The process is just created.
○ Running: The CPU is actively executing the process's instructions.
○ Waiting: The process is paused, waiting for an event to occur.
○ Ready: The process has all necessary resources and is waiting for CPU
assignment.
○ Terminated: The process has completed execution and is finished.
● Process Queues:
○ Ready Queue: Holds processes that are ready for CPU time.
○ Waiting Queue: Holds processes that are waiting for I/O operations.

9. Inter-Process Communication (IPC)

● Purpose: Allows processes to communicate and share data.


● Techniques: Includes pipes, message queues, shared memory, and semaphores.
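
As an illustration of one IPC technique, here is a minimal pipe sketch (assuming a Unix-like system; error handling omitted). The parent reads what the child writes:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                      /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {             /* child: write a message and exit */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        _exit(0);
    }
    close(fd[1]);                  /* parent: read the child's message */
    char buf[64];
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    wait(NULL);
    return 0;
}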

10. Dynamic Binding

● Definition: Linking a function or variable at runtime rather than at compile-time.


● Advantage: Flexibility in program behavior and memory use.

11. Swapping

● Definition: Moving processes between main memory and disk storage.


● Purpose: Frees up memory for active processes, improving system performance.

12. Context Switching


● Definition: Context switching involves saving the state of a currently running process and
loading the saved state of a new process. The process state is stored in the
Process Control Block, allowing the old process to resume from where it left off.
● Overhead: Increases CPU load but allows multitasking.

13. Zombie Process & Orphan Process

● Zombie Process: A process that has terminated but whose process-table entry remains
until the parent reads its exit status (e.g., via wait()).
● Orphan Process: A child process without a parent, often adopted by the init system in
Unix-based OS.
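
A minimal sketch of how a zombie arises (assuming a Unix-like system): the child exits immediately, and until the parent calls waitpid() its process-table entry lingers as a zombie (shown as "defunct" in ps):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);              /* child terminates immediately */
    sleep(5);                  /* during this window the child is a zombie */
    waitpid(pid, NULL, 0);     /* parent reaps it; the entry disappears */
    printf("child %d reaped\n", (int)pid);
    return 0;
}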

14. RAID (Redundant Array of Independent Disks)

● Definition: A method of storing data across multiple disks for redundancy or performance.
● Types: RAID 0 (striping), RAID 1 (mirroring), RAID 5 (striping with parity), etc.

15. Starvation and Aging

● Starvation: Occurs when a process does not get the resources it needs for a long
time because other processes are prioritized.
● Aging: Gradually increases priority of waiting processes to prevent starvation.

16. Scheduling Algorithms

● Purpose: Determines the order in which processes access the CPU.


● Types: Includes FCFS, Round Robin, Priority Scheduling, etc.

17. Preemptive vs. Non-Preemptive Scheduling

● Preemptive: OS can interrupt and reassign CPU from a running process.


● Non-Preemptive: Once a process starts, it runs until completion or voluntary release of
CPU.

18. FCFS & Convoy Effect

● FCFS (First-Come, First-Serve): FCFS schedules jobs in the order they arrive in the
ready queue. It is non-preemptive, meaning a process holds the CPU until it terminates
or performs I/O, causing longer jobs to delay shorter ones.
● Convoy Effect: Occurs in FCFS when a long process delays others behind it.

19. Round Robin Scheduling

● Definition: Schedules each process for a fixed time slice (quantum), rotating through
the ready queue to ensure fair allocation of CPU time and prevent starvation. It is
cyclic and does not prioritize any process.
● Advantage: Fair and efficient for time-sharing systems.
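
A minimal round-robin sketch with assumed burst times and a quantum of 2 (purely illustrative): each waiting process receives up to one quantum per pass until every burst is exhausted:

#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};          /* assumed CPU bursts */
    int n = 3, quantum = 2, time = 0, done = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;
            int slice = burst[i] < quantum ? burst[i] : quantum;
            time += slice;            /* process i runs for one slice */
            burst[i] -= slice;
            if (burst[i] == 0) {
                done++;
                printf("P%d finishes at t=%d\n", i, time);
            }
        }
    }
    /* prints: P1 finishes at t=9, P0 at t=12, P2 at t=16 */
    return 0;
}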

20. Priority Scheduling

● Definition: Processes assigned CPU based on priority levels.


● Challenge: Risk of starvation for lower-priority processes.

21. Concurrency

● Definition: Multiple processes appear to run simultaneously.


● Achieved By: Multithreading or multitasking within a single CPU.

22. Race Condition

● Definition: Two processes access shared data simultaneously, leading to unexpected
results.
● Solution: Use locks or synchronization mechanisms.
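
A minimal sketch of the fix (assuming pthreads): two threads increment a shared counter inside a mutex-protected critical section; removing the lock calls makes the final value unpredictable:

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* protects the critical section */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (always 2000000 with the lock)\n", counter);
    return 0;
}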

23. Critical Section

● Definition: A part of code that accesses shared resources and must not be executed by
more than one process at a time.

24. Synchronization Techniques


- Mutexes
- Condition variables
- Semaphores
- File locks

● Mutex: Only allows one process at a time, preventing concurrent access.


● Condition Variable: A variable used to control access in multithreading, allowing
threads to wait until certain conditions are met.
● Semaphore: Allows multiple processes to access resources up to a limit.

25. Semaphore in OS

A semaphore is a synchronization tool used in operating systems to manage access to
shared resources in multi-threaded or multi-process systems. It keeps a count of
available resources and uses two atomic operations, wait() and signal(), to control
access.

Types of Semaphores:

1. Binary Semaphore:
○ Has values 0 or 1.
○ Signals availability of a single resource.
2. Counting Semaphore:
○ Can have values greater than 1.
○ Controls access to multiple instances of a resource, like a pool of connections.

Binary Semaphore vs Mutex:

Binary Semaphore | Mutex
Signals availability of a shared resource (0 or 1). | Provides mutual exclusion with a single lock.
Uses a signaling mechanism. | Uses a locking mechanism.
Faster in some cases with multiple processes. | Slower when frequently contended.
An integer variable holding 0 or 1. | An object holding lock state and lock-owner info.

26. Binary vs. Counting Semaphores

● Binary Semaphore: Only two values (0 or 1), similar to a lock.
● Counting Semaphore: Allows more flexibility, with a range of values for managing
multiple resources.

27. Producer-Consumer Problem

● Definition: Synchronization issue where producer and consumer processes access shared
data.
● Solution: Use semaphores or mutexes to control access.
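
A minimal bounded-buffer sketch (assuming POSIX unnamed semaphores, e.g. on Linux; error handling omitted): empty counts free slots, full counts produced items, and a mutex protects the buffer indices:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 4
int buffer[N], in = 0, out = 0;
sem_t empty, full;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 1; item <= 8; item++) {
        sem_wait(&empty);               /* wait for a free slot */
        pthread_mutex_lock(&lock);
        buffer[in] = item; in = (in + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&full);                /* announce a new item */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 8; i++) {
        sem_wait(&full);                /* wait for an item */
        pthread_mutex_lock(&lock);
        int item = buffer[out]; out = (out + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&empty);               /* free the slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);             /* N free slots initially */
    sem_init(&full, 0, 0);              /* no items yet */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}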

28. Belady’s Anomaly

● Definition: An increase in page faults despite increasing the number of memory pages,
seen in certain algorithms.
● Occurs In: FIFO page replacement algorithm.
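
A minimal FIFO simulation illustrating the anomaly on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, which incurs 9 faults with 3 frames but 10 with 4:

#include <stdio.h>

int fifo_faults(const int *refs, int n, int frames) {
    int mem[16], count = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < count; j++)
            if (mem[j] == refs[i]) hit = 1;
        if (!hit) {                      /* page fault */
            if (count < frames) mem[count++] = refs[i];
            else { mem[next] = refs[i]; next = (next + 1) % frames; }
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4));  /* 10 */
    return 0;
}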

29. What is a Deadlock in OS?

● A deadlock is a situation in which a set of processes are blocked because each process
holds resources and waits to acquire additional resources held by another process.
● In this scenario, two or more processes are unable to proceed because they are waiting
for each other to release resources.
● Deadlocks commonly occur in multiprocessing environments and can result in the system
becoming unresponsive.

Necessary Conditions for Deadlock

1. Mutual Exclusion: Resources cannot be shared; at least one resource must be held in a
non-shareable mode.
2. Hold and Wait: Processes holding resources are allowed to wait for additional resources.
3. No Pre-emption: Resources cannot be forcibly taken from a process; they must be
voluntarily released.
4. Circular Wait: There exists a set of processes such that each process is waiting for a
resource held by the next process in the cycle.
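
A minimal sketch of circular wait (assuming pthreads): two threads acquire two locks in opposite order, so each can end up holding one lock while waiting forever for the other. Acquiring locks in a consistent global order would prevent this:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg) {
    pthread_mutex_lock(&a);     /* holds a ... */
    sleep(1);
    pthread_mutex_lock(&b);     /* ... and waits for b */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

void *t2(void *arg) {
    pthread_mutex_lock(&b);     /* holds b ... */
    sleep(1);
    pthread_mutex_lock(&a);     /* ... and waits for a: circular wait */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);      /* never returns once deadlocked */
    pthread_join(y, NULL);
    printf("no deadlock this run\n");
    return 0;
}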

30. Banker’s Algorithm

● Purpose: Deadlock avoidance algorithm in resource allocation.


● Method: Checks if resources can be safely allocated without causing deadlock.
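
A minimal sketch of the safety check at the heart of the Banker's Algorithm, using small assumed matrices: a state is safe if some ordering lets every process obtain its remaining need and finish:

#include <stdio.h>

#define P 3  /* processes */
#define R 2  /* resource types */

int main(void) {
    int avail[R]    = {3, 2};                        /* assumed free resources */
    int alloc[P][R] = {{1, 0}, {2, 1}, {0, 1}};      /* currently held */
    int need[P][R]  = {{2, 2}, {1, 1}, {3, 1}};      /* max - alloc */
    int finished[P] = {0}, done = 0, progress = 1;

    while (progress) {
        progress = 0;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) ok = 0;
            if (ok) {               /* pretend P_i runs and releases all */
                for (int j = 0; j < R; j++) avail[j] += alloc[i][j];
                finished[i] = 1; done++; progress = 1;
                printf("P%d can finish\n", i);
            }
        }
    }
    printf(done == P ? "state is SAFE\n" : "state is UNSAFE\n");
    return 0;
}
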
31. Methods for Handling Deadlock

1. Deadlock Prevention:
○ Ensure that at least one necessary condition for deadlock cannot hold.
○ Allow resource sharing (Mutual Exclusion).
○ Require all resources to be requested upfront (Hold and Wait).
○ Permit resource preemption (No Pre-emption).
○ Impose a strict order for resource allocation (Circular Wait).
2. Deadlock Avoidance:
○ Dynamically examine resource allocation to prevent circular wait.
○ Use the Banker’s Algorithm to determine safe states; deny requests that would
lead to an unsafe state.
3. Deadlock Detection:
○ Allow the system to enter a deadlock state, then detect it.
○ Use a Wait-for Graph to represent wait-for relationships; a cycle indicates a
deadlock.
○ Employ a Resource Allocation Graph to check for cycles and determine the
presence of deadlock.
4. Deadlock Recovery:
○ Terminate one or more processes involved in the deadlock (abruptly or
gracefully).
○ Use resource preemption to take resources from processes and allocate them to
others to break the deadlock.

32. Logical vs. Physical Address Space

Parameter | Logical Address | Physical Address
Basic | Generated by the CPU. | Located in a memory unit.
Address Space | Set of all logical addresses generated by the CPU. | Set of all physical addresses corresponding to logical addresses.
Visibility | Visible to the user. | Not visible to the user.
Generation | Created by the CPU. | Computed by the Memory Management Unit (MMU).

33. Memory Management Unit (MMU)

● Definition: Hardware that translates logical to physical addresses.

34. Main vs. Secondary Memory

Here are six important differences between primary memory and secondary memory, presented
succinctly:

Primary Memory | Secondary Memory
Used for temporary data storage while the computer is running. | Used for permanent data storage, retaining information long-term.
Faster access speed; directly accessible by the CPU. | Slower access speed; not directly accessible by the CPU.
Volatile; data is lost when power is turned off. | Non-volatile; retains data even when power is off.
More expensive due to semiconductor technology. | Less expensive, often using magnetic or optical technology.
Smaller capacity (typically a few to tens of GB of RAM), suited to active tasks. | Larger capacity (hundreds of GB to many TB) for extensive storage.
Examples: RAM, ROM, and cache memory. | Examples: hard disk drives, floppy disks, and magnetic tapes.

35. Cache

● Definition: Small, fast memory close to the CPU for quick access to frequently used data.
● Caching: Caching involves using a smaller, faster memory to store copies of data from
frequently used main memory locations. Various independent caches within a
CPU store instructions and data, reducing the average time needed to access
data from the main memory.

36. Direct Mapping vs. Associative Mapping

● Direct Mapping: Each block has a fixed cache location.


● Associative Mapping: Any block can go into any cache line, which is more flexible.

37. Fragmentation

● Fragmentation: Occurs when free memory space is too small to allocate to processes,
leaving unusable memory blocks. It happens during dynamic memory allocation when small
free blocks cannot satisfy any request.
● Internal Fragmentation: Wasted space within allocated memory.
● External Fragmentation: Wasted space outside allocated memory.

Internal Fragmentation | External Fragmentation
Fixed-sized memory blocks are allocated to processes. | Variable-sized memory blocks are allocated to processes.
Occurs when allocated blocks are larger than the process requires. | Occurs when free memory is scattered in small, unusable fragments.
Solution: best-fit block allocation. | Solution: compaction, paging, and segmentation.
Arises when memory is divided into fixed-sized partitions. | Arises when memory is divided into variable-sized partitions.
The difference between allocated and required memory is wasted. | Unused spaces between allocated blocks are too small for new processes.

38. Defragmentation

● Definition: Process of rearranging memory to reduce fragmentation.


● Compaction: The process of collecting fragments of available memory into contiguous
blocks by moving programs and data in a computer's memory or disk.

39. Spooling

● Definition: Storing data temporarily for devices to access when ready, like print jobs.
● Spooling stands for Simultaneous Peripheral Operations Online: jobs are placed in a
buffer, either in memory or on disk, where a device can access them when it is ready.
It helps manage the differing data-access rates of devices.

40. Overlays

● Definition: Overlays involve loading only the required part of a program into memory,
unloading it when done, and loading a new part as needed. This technique efficiently
manages memory usage.

41. Page Table, Frames, Pages

● Page Table: Maps logical pages to physical frames.


● Frame: Fixed-size physical memory block.
● Page: Fixed-size block of logical memory.

42. Paging

● Definition: Paging is a memory management technique for non-contiguous memory
allocation. It divides main memory into fixed-size frames and secondary memory into
same-size pages, fetching process pages into main memory frames as needed.
● Purpose: Avoids external fragmentation and simplifies memory management.
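
A minimal sketch of paged address translation, with an assumed 4 KB page size and a tiny hypothetical page table:

#include <stdio.h>

#define PAGE_SIZE 4096

int main(void) {
    int page_table[] = {5, 9, 7, 2};         /* hypothetical: page -> frame */
    unsigned logical = 2 * PAGE_SIZE + 123;  /* page 2, offset 123 */
    unsigned page    = logical / PAGE_SIZE;  /* high bits select the page */
    unsigned offset  = logical % PAGE_SIZE;  /* low bits stay unchanged */
    unsigned physical = page_table[page] * PAGE_SIZE + offset;
    printf("logical %u -> frame %d -> physical %u\n",
           logical, page_table[page], physical);
    return 0;
}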

43. Segmentation

● Definition: Dividing memory into segments based on logical units.

44. Paging vs. Segmentation

Paging | Segmentation
Invisible to the programmer. | Visible to the programmer.
Fixed-size pages. | Variable-size segments.
Procedures and data cannot be separated. | Procedures and data can be separated.
Allows virtual address space to exceed physical memory. | Breaks programs, data, and code into independent spaces.
Mostly implemented by CPUs and MMU chips. | Commonly found on Windows servers; limited in Linux.
Faster memory access than segmentation. | Slower memory access than paging.
Results in internal fragmentation. | Results in external fragmentation.

45. Page Faults

● Definition: Occurs when a program accesses a page not in physical memory.

46. What is virtual memory?


- A memory management technique in operating systems.
- Creates the illusion of a large contiguous address space.
- Extends physical memory using disk space.
- Allows more programs to run simultaneously.
- Stores data in pages for efficient memory use.
- Provides memory protection.
- Managed through methods like paging and segmentation.
- Acts as temporary storage alongside RAM for processes.

47. Objective of Multiprogramming?


- Multiprogramming allows multiple programs to run on a single processor.
- It addresses underutilization of the CPU and main memory.
- The technique coordinates the execution of several programs simultaneously.
- The main objective is to have processes running at all times.
- It improves CPU utilization by organizing multiple jobs for continuous
execution.

48. Demand Paging

● Definition: Demand paging loads pages into memory only when they are needed,
which occurs when a page fault happens.

49. Page Replacement Algorithms


Page replacement algorithms manage how pages are swapped in and out of physical memory
when a page fault occurs.

1. Least Recently Used (LRU):


- Replaces the page that has not been used for the longest time.
- Keeps track of page usage over time.

2. First-In, First-Out (FIFO):


- Replaces the oldest page in memory.
- Simple to implement but can lead to suboptimal performance.

3. Optimal Page Replacement:


- Replaces the page that will not be used for the longest period in the
future.
- Provides the best performance but is impractical to implement since future
requests are unknown.

4. Least Frequently Used (LFU):


- Replaces the page that has the lowest access frequency.
- Pages are tracked based on the number of accesses over time.

These algorithms play a critical role in optimizing memory management and ensuring efficient
system performance.
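
As a worked example, here is a minimal LRU simulation over an assumed reference string: on a fault with no free frame, it evicts the page whose last use is furthest in the past:

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4}, n = 8;   /* assumed reference string */
    int mem[FRAMES], last[FRAMES], count = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int j = 0; j < count; j++)
            if (mem[j] == refs[t]) hit = j;
        if (hit >= 0) { last[hit] = t; continue; }   /* hit: refresh its age */
        faults++;
        if (count < FRAMES) {                        /* a free frame remains */
            mem[count] = refs[t]; last[count] = t; count++;
        } else {                                     /* evict the LRU page */
            int lru = 0;
            for (int j = 1; j < FRAMES; j++)
                if (last[j] < last[lru]) lru = j;
            mem[lru] = refs[t]; last[lru] = t;
        }
    }
    printf("%d page faults\n", faults);              /* 6 for this string */
    return 0;
}
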
50. Thrashing

● Definition: Excessive swapping between memory and disk, slowing the system.
● Thrashing occurs when a computer spends more time servicing page faults than doing
useful work, degrading performance. It happens when the page-fault rate rises, leading
to longer service times and reduced efficiency.
