OS Last-Minute Notes: Operating System Guide
1. Operating System (OS) & Types
● Definition: Software that manages computer hardware and software resources and
provides common services for applications.
● Types:
○ Batch OS: Processes batches of jobs with no interaction.
○ Time-Sharing OS: Allows multiple users to interact with programs simultaneously.
○ Distributed OS: Manages a group of separate computers and makes them act as
one system.
2. Purpose of an OS
● An OS acts as an intermediary between users and hardware, providing a convenient and efficient environment for running programs.
● Real-Time OS: An OS designed for real-time applications, where responses are needed within a specific time limit.
When a program is loaded into memory and becomes a process, it can be divided into four
sections:
● Stack: Stores temporary data such as function parameters, return addresses, and local
variables.
● Heap: Dynamically allocated memory during the execution of the program.
● Text: Contains the compiled code of the program.
● Data: Holds global and static variables.
There are two types of processes: Operating System Processes and User Processes
Program | Process
Passive entity that resides in secondary memory. | Active entity created during execution, loaded into main memory.
Exists in a single place until deleted. | Exists for a limited time, terminating after task completion.
● Process Table: The operating system manages processes, allocating processor time
and resources like memory and disks. The process table is maintained by the OS
to keep track of all processes, including their states and resources they are using.
● Thread: A thread is a single sequence of execution within a process, often
called a lightweight process. Threads improve application performance through
parallelism. For example, a web browser can use multiple threads for different tabs.
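As a minimal sketch (the names are illustrative, not from the notes), the following Python snippet shows that threads share their process's memory: a worker thread appends to the same list object the main thread reads.

```python
import threading

shared = []  # lives in the process's address space, visible to every thread

def producer():
    # This thread appends to the very same list object the main thread sees,
    # because all threads within one process share its memory.
    for i in range(5):
        shared.append(i)

t = threading.Thread(target=producer)
t.start()
t.join()
print(shared)  # [0, 1, 2, 3, 4]
```

Separate processes would each get their own copy of `shared`; only threads observe each other's writes directly.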
Process | Thread
Processes are isolated from each other. | Threads share memory within the same process.
One blocked process doesn't affect others. | A blocked thread can affect other threads in the same process.

Kernel | Operating System
Central component of the OS. | System software that manages system resources.
Handles process, file, and device management, and I/O. | Provides data security, user access control, and privacy.

Microkernel | Monolithic Kernel
Kernel and user services are in separate address spaces. | Kernel and user services are in the same address space.
Service crashes do not affect the microkernel's operation. | A service crash can cause the entire system to crash.
Uses message queues for inter-process communication (IPC). | Uses signals and sockets for inter-process communication.
Multi-threading | Multi-tasking
Multiple threads are executed at the same time within the same or different parts of a program. | Several programs (or tasks) are executed concurrently.
The CPU switches between multiple threads. | The CPU switches between multiple tasks or processes.

Multitasking | Multiprocessing
Performs multiple tasks using a single processor. | Performs multiple tasks using multiple processors.
Requires more time to execute tasks. | Requires less time for task processing.
● Process States:
Different states that a process goes through include:
○ New State: The process is just created.
○ Running: The CPU is actively executing the process's instructions.
○ Waiting: The process is paused, waiting for an event to occur.
○ Ready: The process has all necessary resources and is waiting for CPU
assignment.
○ Terminated: The process has completed execution and is finished.
● Process Queues:
○ Ready Queue: Holds processes that are ready for CPU time.
○ Waiting Queue: Holds processes that are waiting for I/O operations.
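The states and queues above can be sketched as a small transition table (a hypothetical model for illustration, not an OS API):

```python
# Allowed process life-cycle transitions, following the states listed above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"waiting", "ready", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

def is_valid(path):
    """Check that every consecutive pair of states is an allowed transition."""
    return all(b in TRANSITIONS[a] for a, b in zip(path, path[1:]))

print(is_valid(["new", "ready", "running", "waiting",
                "ready", "running", "terminated"]))  # True
print(is_valid(["new", "running"]))  # False: a process must pass through ready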
11. Swapping
● Definition: Temporarily moving a process from main memory to disk (the backing store) and back, freeing memory for other processes.
● Context Switching: Saving the state of the currently running process and loading the saved state of a new one. The process state is stored in the Process Control Block, allowing the old process to resume from where it left off.
● Overhead: Context switching increases CPU load but allows multitasking.
● Zombie Process: A terminated process that still occupies an entry in the process table until the parent acknowledges (reaps) it.
● Orphan Process: A child process whose parent has terminated, often adopted by the init system in Unix-based OSes.
● RAID (Redundant Array of Independent Disks): A method of storing data across multiple disks for redundancy or performance.
● RAID Types: RAID 0 (striping), RAID 1 (mirroring), RAID 5 (striping with parity), etc.
● Starvation: when a process does not get the resources it needs for a long
time because other processes are prioritized.
● Aging: Gradually increases priority of waiting processes to prevent starvation.
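A toy priority scheduler with aging (all names hypothetical) shows how waiting processes gain priority each tick, so a low-priority process is never starved indefinitely:

```python
def run_with_aging(processes, boost=1):
    """Each tick, run the highest-priority process for one time unit;
    every process that had to wait gains `boost` priority (aging)."""
    order = []
    remaining = {pid: [prio, burst] for pid, (prio, burst) in processes.items()}
    while remaining:
        pid = max(remaining, key=lambda p: remaining[p][0])
        order.append(pid)
        remaining[pid][1] -= 1
        if remaining[pid][1] == 0:
            del remaining[pid]
        for other in remaining:
            if other != pid:
                remaining[other][0] += boost  # aging: waiting raises priority
    return order

# "low" starts far below "high" but still runs once "high" completes.
print(run_with_aging({"low": (1, 2), "high": (10, 3)}))
# ['high', 'high', 'high', 'low', 'low']
```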
● FCFS (First-Come, First-Serve): FCFS schedules jobs in the order they arrive in the
ready queue. It is non-preemptive, meaning a process holds the CPU until it terminates
or performs I/O, causing longer jobs to delay shorter ones.
● Convoy Effect: Occurs in FCFS when a long process delays others behind it.
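The convoy effect is easy to quantify: compare FCFS waiting times for the classic burst set (24, 3, 3) versus (3, 3, 24). The helper below is an illustrative sketch, not from the notes:

```python
def fcfs_waiting_times(bursts):
    """FCFS: each job waits for the sum of all CPU bursts before it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Convoy effect: one long job (24) at the front makes the short jobs wait.
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27] -> average wait 17
print(fcfs_waiting_times([3, 3, 24]))  # [0, 3, 6]   -> average wait 3
```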
21. Concurrency
● Critical Section: A part of code that accesses shared resources and must not be executed by more than one process at a time.
24. Synchronization Techniques
● Mutexes
● Condition variables
● Semaphores
● File locks
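A minimal sketch of protecting a critical section with a mutex, using Python's threading.Lock (variable names are illustrative):

```python
import threading

balance = 0
balance_lock = threading.Lock()

def deposit(times: int) -> None:
    global balance
    for _ in range(times):
        # The read-modify-write below is the critical section:
        # without the lock, concurrent updates could be lost.
        with balance_lock:
            balance += 1

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 40000: no increments were lost
```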
25. Semaphore in OS
Types of Semaphores:
1. Binary Semaphore:
○ Has values 0 or 1.
○ Signals availability of a single resource.
2. Counting Semaphore:
○ Can have values greater than 1.
○ Controls access to multiple instances of a resource, like a pool of connections.
Semaphore | Mutex
Signals availability of a shared resource (0 or 1). | Allows mutual exclusion with a single lock.
Integer variable holding 0 or 1. | Object holding lock state and lock-owner info.
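A counting semaphore in action, using Python's threading.Semaphore to cap concurrent access to a hypothetical pool of three connections:

```python
import threading

pool = threading.Semaphore(3)  # counting semaphore: three "connections"
in_use = 0
peak = 0
state_lock = threading.Lock()

def use_connection():
    global in_use, peak
    with pool:                 # acquire blocks once three holders exist
        with state_lock:
            in_use += 1
            peak = max(peak, in_use)
        with state_lock:
            in_use -= 1

threads = [threading.Thread(target=use_connection) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 3)  # True: never more than three concurrent holders
```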
● Deadlock: A situation in which a set of processes is blocked because each process holds resources while waiting to acquire additional resources held by another process.
● In this scenario, two or more processes are unable to proceed because they are waiting
for each other to release resources.
● Deadlocks commonly occur in multiprocessing environments and can result in the system
becoming unresponsive.
The four necessary conditions for deadlock (all must hold simultaneously):
1. Mutual Exclusion: Resources cannot be shared; at least one resource must be held in a non-shareable mode.
2. Hold and Wait: Processes holding resources are allowed to wait for additional resources.
3. No Pre-emption: Resources cannot be forcibly taken from a process; they must be
voluntarily released.
4. Circular Wait: There exists a set of processes such that each process is waiting for a
resource held by the next process in the cycle.
Strategies for handling deadlock:
1. Deadlock Prevention:
○ Ensure that at least one necessary condition for deadlock cannot hold.
○ Allow resource sharing (Mutual Exclusion).
○ Require all resources to be requested upfront (Hold and Wait).
○ Permit resource preemption (No Pre-emption).
○ Impose a strict order for resource allocation (Circular Wait).
2. Deadlock Avoidance:
○ Dynamically examine resource allocation to prevent circular wait.
○ Use the Banker’s Algorithm to determine safe states; deny requests that would
lead to an unsafe state.
3. Deadlock Detection:
○ Allow the system to enter a deadlock state, then detect it.
○ Use a Wait-for Graph to represent wait-for relationships; a cycle indicates a
deadlock.
○ Employ a Resource Allocation Graph to check for cycles and determine the
presence of deadlock.
4. Deadlock Recovery:
○ Terminate one or more processes involved in the deadlock (abruptly or
gracefully).
○ Use resource preemption to take resources from processes and allocate them to
others to break the deadlock.
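Deadlock avoidance via the Banker's Algorithm can be sketched with a safety test. The matrices below are the classic textbook example; the function name is illustrative:

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety test: return True if some execution
    order lets every process finish with the currently free resources."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Classic example: 5 processes, 3 resource types.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # True: e.g. P1, P3, P4, P0, P2
print(is_safe([0, 0, 0], allocation, need))  # False: no process can finish
```

A request is granted only if the state that would result still passes this safety test.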
32. Logical vs. Physical Address Space
● Logical Address: Generated by the CPU while a program runs; the address the program sees.
● Physical Address: The actual location in main memory; the MMU maps logical addresses to physical ones.
Here are six important differences between primary memory and secondary memory:
Primary Memory | Secondary Memory
Used for temporary data storage while the computer is running. | Used for permanent data storage, retaining information long-term.
Faster access speed, as it is directly accessible by the CPU. | Slower access speed; not directly accessible by the CPU.
Volatile in nature; data is lost when power is turned off. | Non-volatile; retains data even when power is off.
More expensive due to the use of semiconductor technology. | Less expensive, often using magnetic or optical technology.
Capacity typically ranges from 16 to 32 GB, suitable for active tasks. | Capacity can range from 200 GB to several terabytes for extensive storage.
Examples include RAM, ROM, and cache memory. | Examples include hard disk drives, floppy disks, and magnetic tapes.
35. Cache
● Definition: Small, fast memory close to the CPU for quick access to frequently used data.
● Caching: Using a smaller, faster memory to store copies of data from frequently used main-memory locations. Various independent caches within a CPU store instructions and data, reducing the average time needed to access data from main memory.
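A cache's eviction behavior can be sketched with a tiny least-recently-used (LRU) store. This is an illustrative software model, not a hardware cache:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal cache sketch: evicts the least recently used entry
    once capacity is exceeded."""
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None               # cache miss
        self.data.move_to_end(key)    # mark as most recently used
        return self.data[key]

    def put(self, key, value) -> None:
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the LRU entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touch "a" so "b" becomes least recently used
cache.put("c", 3)      # capacity exceeded: evicts "b"
print(cache.get("b"))  # None (miss)
print(cache.get("a"))  # 1 (hit)
```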
37. Fragmentation
● Fragmentation: Occurs when free memory space is too small to allocate to processes, leaving unusable memory blocks. It happens during dynamic memory allocation when small free blocks cannot satisfy any request.
● Internal Fragmentation: Wasted space within allocated memory.
● External Fragmentation: Wasted space outside allocated memory.
Internal Fragmentation | External Fragmentation
Occurs when allocated memory blocks are larger than required by the process. | Occurs when free memory is scattered in small, unusable fragments.
Arises when memory is divided into fixed-sized partitions. | Arises when memory is divided into variable-sized partitions.
The difference between allocated and required memory is wasted. | Unused spaces between allocated blocks are too small for new processes.
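Both kinds of fragmentation can be quantified with simple arithmetic; the helpers below are illustrative sketches with assumed sizes:

```python
def internal_fragmentation(partition_size, request_sizes):
    """With fixed-size partitions, each request wastes the unused
    remainder of its partition (internal fragmentation)."""
    return sum(partition_size - r for r in request_sizes)

# 4 KB partitions; three processes request 1 KB, 3 KB, and 2.5 KB:
print(internal_fragmentation(4096, [1024, 3072, 2560]))  # 5632 bytes wasted

def request_fits(free_holes, request):
    """External fragmentation: total free space may exceed the request,
    yet no single contiguous hole is big enough."""
    return any(h >= request for h in free_holes)

holes = [300, 200, 150]          # 650 bytes free in total
print(request_fits(holes, 500))  # False: no contiguous 500-byte hole
```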
38. Defragmentation
● Definition: Rearranging fragmented data and consolidating free space into contiguous blocks, improving access speed and making larger allocations possible.
39. Spooling
● Definition: Storing data temporarily for devices to access when ready, like print jobs.
● Spooling stands for Simultaneous Peripheral Operations Online. Jobs are placed in a buffer, either in memory or on disk, where a device can access them when it is ready. It helps manage the differing data-access rates of devices.
40. Overlays
● Definition: Overlays involve loading only the required part of a program into memory, unloading it when done, and loading a new part as needed. This technique efficiently manages memory usage.
42. Paging
● Definition: A memory-management scheme that divides memory into fixed-size pages and frames, removing the need for contiguous allocation.
43. Segmentation
● Definition: A memory-management scheme that divides memory into variable-size segments corresponding to logical units such as code, data, and stack.
Paging | Segmentation
Procedures and data cannot be separated. | Procedures and data can be separated.
Allows virtual address space to exceed physical memory. | Breaks programs, data, and code into independent spaces.
Mostly implemented by CPUs and MMU chips. | Commonly found on Windows servers; limited in Linux.
Faster memory access than segmentation. | Slower memory access compared to paging.
Results in internal fragmentation. | Results in external fragmentation.
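Paged address translation (splitting a virtual address into page number and offset) can be sketched as follows; the page size and page table are assumed example values:

```python
PAGE_SIZE = 4096  # 4 KB pages, a common choice

def translate(virtual_address, page_table):
    """Split the virtual address into page number and offset,
    then look the frame up in the page table."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:
        raise KeyError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}            # page number -> frame number
print(translate(4100, page_table))   # page 1, offset 4 -> 2*4096 + 4 = 8196
```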
● Demand Paging: Loads pages into memory only when they are needed, which is detected when a page fault occurs.
Page-replacement algorithms (such as FIFO, LRU, and Optimal) decide which page to evict on a fault; they play a critical role in optimizing memory management and ensuring efficient system performance.
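One classic page-replacement policy, FIFO, can be sketched by counting faults over a reference string (an illustrative example, not from the notes):

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO replacement: on a miss with memory
    full, evict the page that has been resident the longest."""
    resident = deque()
    faults = 0
    for page in reference_string:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.popleft()   # evict oldest resident page
            resident.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]
print(fifo_faults(refs, 3))  # 9 faults with 3 frames
```

A high fault count relative to the reference-string length is exactly the regime where thrashing sets in.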
50. Thrashing
● Definition: Excessive swapping between memory and disk, slowing the system.
● Thrashing occurs when a computer spends more time handling page faults than doing useful work, degrading performance. It happens when the page-fault rate rises, leading to longer service times and reduced efficiency.