1. Functions of an Operating System
An Operating System (OS) is essential for managing computer hardware and software resources. It
provides an environment for application programs to execute efficiently. The key functions of an OS
include:
1. Process Management:
o Schedules processes using algorithms (e.g., FCFS, Round Robin) and ensures efficient CPU utilization through multitasking and process synchronization.
2. Memory Management:
o Allocates and deallocates main memory to processes and keeps track of which parts of memory are in use.
3. File Management:
o Maintains hierarchical storage structures using directories and organizes files based on types (e.g., text, binary).
4. Device Management:
o Provides device drivers and manages I/O operations through buffering, spooling, and caching.
5. User Interface:
o Offers a CLI for advanced users and a GUI for ease of use.
6. Resource Allocation:
o Distributes resources like CPU, memory, and I/O devices among active processes.
7. Networking:
o Manages network connections and communication protocols so that processes on different machines can exchange data.
8. Monitoring and Accounting:
o Includes utilities to track CPU usage, memory allocation, and I/O performance.
o Keeps track of resource usage by various users or jobs, which is essential for systems like mainframes.
These comprehensive functions make the OS the backbone of any computing system, ensuring
seamless interaction between hardware, software, and users. Examples include Windows for user-
friendly interaction, Linux for flexibility, and macOS for robust performance.
2. Types of Operating Systems
Operating systems can be classified based on functionality, architecture, and the environment they operate in. Below are the major types:
1. Time-Sharing (Multi-User) OS:
o Allows multiple users to access the system simultaneously by sharing CPU time.
o Example: UNIX.
2. Real-Time OS (RTOS):
o Used in systems requiring strict timing, such as robotics and embedded systems.
o Types: hard real-time (deadlines must never be missed) and soft real-time (occasional misses are tolerable).
3. Single-User OS:
o Designed to serve one user and, classically, one task at a time.
o Example: MS-DOS.
4. GUI-Based OS:
o Provides a visual interface with icons, windows, and menus for ease of use.
5. Distributed OS:
o Coordinates a group of networked computers so that they appear to users as a single system.
Conclusion
Each OS type caters to specific needs, from personal use to complex real-time applications. For
instance, RTOS ensures precise timing for embedded systems, while distributed OS enhances
resource sharing and efficiency across networks.
3. System Calls
A system call is the interface through which a program interacts with the operating system. It allows
user-level applications to request services from the OS, such as file operations, process management,
and communication. System calls act as an intermediary between the user programs and the kernel.
1. Process Control:
o Examples: fork(), exec(), exit(), wait().
2. File Management:
o Examples: open(), read(), write(), close().
3. Device Management:
o Examples: ioctl(), and read()/write() on device files.
4. Information Maintenance:
o Examples: getpid(), alarm(), time().
5. Communication:
o Examples: pipe(), shmget(), send(), recv().
6. Memory Management:
o Examples: brk(), mmap(), munmap().
1. A user program issues a system call through a library function (e.g., printf calls write internally).
2. A trap instruction switches the CPU from user mode to kernel mode, and the kernel executes the requested service on the program's behalf.
3. After execution, the system call returns control to the user program.
Real-Life Applications
For example, saving a document in a text editor ends in open, write, and close system calls, while launching a program from a shell uses fork and exec.
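To make this concrete, here is a minimal C sketch that bypasses the C library's buffered I/O and issues the write system call directly (file descriptor 1 is standard output on POSIX systems):

    #include <unistd.h>   /* write(): thin wrapper over the system call */

    int main(void) {
        const char msg[] = "Hello via a system call\n";
        /* write(fd, buffer, count): fd 1 is standard output.
           printf ultimately funnels its output into this call. */
        write(1, msg, sizeof(msg) - 1);
        return 0;
    }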
4. Process Management
Process management is a critical function of the operating system (OS), responsible for handling all
processes in the system. A process is a program in execution, comprising its code, data, and
execution context.
1. Process Creation and Termination:
o The OS creates processes (e.g., via fork in UNIX) and reclaims their resources when they terminate.
2. Process Scheduling:
o Types of scheduling: long-term (job admission), short-term (CPU dispatch), and medium-term (swapping).
3. Process Synchronization:
o Coordinates processes that share data, using mechanisms such as semaphores, to prevent race conditions.
4. Inter-Process Communication (IPC):
o Lets processes exchange data through shared memory or message passing.
5. Process States:
o Tracks each process through states such as New, Ready, Running, Waiting, and Terminated (detailed in a later section).
6. Context Switching:
o Switching the CPU from one process to another involves saving and restoring process states.
7. Resource Allocation:
o The OS allocates resources like CPU time, memory, and I/O devices to processes.
A PCB is a data structure used by the OS to store information about a process, including:
o Process ID (PID).
o Process state.
o Program counter and CPU registers.
o Memory, I/O, and scheduling information.
Modern operating systems, like Linux, manage thousands of processes using sophisticated
algorithms. For instance, the Linux scheduler uses the Completely Fair Scheduler (CFS) to balance CPU time fairly among runnable processes.
Process management ensures that multiple programs can run efficiently while sharing system
resources.
Process States:
A process transitions through various states during its lifetime. These include:
1. New:
o The process is being created.
2. Ready:
o The process is loaded in memory and waiting to be assigned to the CPU.
3. Running:
o The process's instructions are being executed.
o Only one process can be in this state per CPU core at any given time.
4. Waiting (Blocked):
o The process is waiting for an event to occur, such as the completion of an I/O operation.
5. Terminated:
o The process has completed its execution or has been terminated by the system/user.
6. Suspended:
o The process is temporarily removed from the main memory and placed in secondary
storage to free up resources.
Process Control Block (PCB)
The PCB is a data structure used by the OS to store all essential information about a process.
Contents of PCB:
1. Process ID (PID): A unique identifier for the process.
2. Process State: The current state (new, ready, running, waiting, or terminated).
3. Program Counter: The address of the next instruction to be executed.
4. CPU Registers: Register contents saved and restored on every context switch.
5. I/O Information: Open files and I/O devices allocated to the process.
6. Scheduling Information: Process priority and pointers to scheduling queues.
7. Accounting Information: CPU time consumed, time limits, and the owning user or job ID.
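To make the PCB concrete, here is a minimal C struct sketch; the field names, sizes, and types are illustrative assumptions, not the layout used by any real kernel:

    #include <stdint.h>

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    /* Simplified Process Control Block (illustrative only). */
    typedef struct pcb {
        int          pid;               /* 1. process ID                  */
        proc_state_t state;             /* 2. process state               */
        uint64_t     program_counter;   /* 3. next instruction address    */
        uint64_t     registers[16];     /* 4. saved CPU registers         */
        int          open_files[16];    /* 5. I/O information (open fds)  */
        int          priority;          /* 6. scheduling information      */
        uint64_t     cpu_time_used;     /* 7. accounting information      */
        struct pcb  *next;              /* link for ready/waiting queues  */
    } pcb_t;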
Multithreading
Multithreading allows a process to create multiple threads, each capable of executing independently
while sharing resources.
Benefits of Multithreading:
1. Responsiveness: A process can keep responding to the user even while one of its threads is blocked.
2. Resource Sharing: Threads of a process share memory and resources, reducing overhead.
Types of Multithreading:
1. User-Level Threads: Managed by a thread library in user space; fast to create and switch, but invisible to the kernel.
2. Kernel-Level Threads: Managed by the OS, so the kernel can schedule and block them individually, at the cost of higher overhead.
Examples of Multithreading:
Web browsers use threads for loading pages, rendering, and user input.
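As a minimal sketch of threads sharing a process's resources, the POSIX threads (pthreads) example below runs two threads that increment one global counter; a mutex keeps the updates from racing (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;                  /* shared by both threads */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);       /* protect the shared data */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);              /* wait for both to finish */
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);   /* prints 200000 */
        return 0;
    }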
Scheduling Criteria:
1. CPU Utilization: Keep the CPU as busy as possible.
2. Throughput: Maximize the number of processes completed per unit time.
3. Turnaround Time: Minimize the total time from submission to completion.
4. Waiting Time: Minimize the time a process spends in the ready queue.
5. Response Time: Minimize the time from a request's submission until the first response is produced.
Scheduling Algorithms:
1. First-Come, First-Served (FCFS):
o Non-preemptive (a small worked example follows this list).
2. Shortest Job First (SJF):
o Non-preemptive.
3. Round Robin (RR):
o Preemptive.
4. Priority Scheduling:
o Executes the highest-priority process first; may be preemptive or non-preemptive.
5. Multilevel Queue Scheduling:
o Divides processes into priority levels, each with its own scheduling algorithm.
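As promised above, here is a small worked sketch of FCFS waiting time in C; the burst times are hypothetical, and all processes are assumed to arrive at time 0:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical CPU burst times; all processes arrive at time 0. */
        int burst[] = {24, 3, 3};
        int n = 3, elapsed = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            total_wait += elapsed;   /* process i waits for all earlier bursts */
            elapsed += burst[i];
        }
        /* Convoy effect: average waiting time = (0 + 24 + 27) / 3 = 17 */
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }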
Disk Scheduling
Disk scheduling refers to the management of requests for data on a disk to optimize read/write
efficiency. The objective is to minimize the total seek time, which is the time the disk arm takes to
move between tracks.
1. Seek Time: Time taken for the disk arm to move to the required track.
2. Rotational Latency: Time taken for the desired sector to come under the read-write head.
3. Transfer Time: Time to transfer the data once the head is positioned.
Disk scheduling algorithms decide the order of servicing disk I/O requests.
1. First-Come, First-Served (FCFS): Services requests in the order they arrive; simple, but total seek time can be high.
2. Shortest Seek Time First (SSTF): Selects the request closest to the current head position.
3. SCAN (Elevator Algorithm): Moves the head in one direction, servicing requests until it
reaches the end, then reverses direction.
4. C-SCAN (Circular SCAN): Like SCAN but only services requests in one direction, then jumps to
the other end.
5. LOOK and C-LOOK: Variants of SCAN and C-SCAN, but only go as far as the last request in the
current direction.
The Shortest Seek Time First (SSTF) algorithm selects the disk I/O request that is closest to the
current position of the disk head. This minimizes seek time for each individual request.
Working Mechanism:
1. Start from the current position of the disk head.
2. Calculate the distance of each pending request from the current head position.
3. Service the request with the shortest distance (smallest seek time).
Advantages:
Reduces average seek time compared to FCFS, improving overall throughput.
Disadvantages:
May lead to starvation for requests far from the current head position, especially if new nearby requests keep arriving.
Example of SSTF:
Given:
Initial head position: 53.
Requests in the queue: 98, 183, 37, 122, 14, 124, 65, 67.
Steps:
1. From 53, compute the distance to each pending request:
|98 - 53| = 45
|183 - 53| = 130
|37 - 53| = 16
|122 - 53| = 69
|14 - 53| = 39
|124 - 53| = 71
|65 - 53| = 12
|67 - 53| = 14
The closest request is 65, so it is serviced first.
2. Move to 65:
|98 - 65| = 33
|183 - 65| = 118
|37 - 65| = 28
|122 - 65| = 57
|14 - 65| = 51
|124 - 65| = 59
|67 - 65| = 2
The closest request is 67.
3. Move to 67, and continue the same way: the nearest remaining request is 37 (distance 30), then 14, 98, 122, 124, and finally 183.
Order of Servicing: 53 → 65 → 67 → 37 → 14 → 98 → 122 → 124 → 183.
Seek Sequence: 12 + 2 + 30 + 23 + 84 + 24 + 2 + 59 = 236 tracks of total head movement.
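The same result can be reproduced with a short C simulation of SSTF; the request queue and starting head position are the ones from the example:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int req[]   = {98, 183, 37, 122, 14, 124, 65, 67};
        int done[8] = {0};
        int n = 8, head = 53, total = 0;

        for (int served = 0; served < n; served++) {
            int best = -1, best_dist = 0;
            for (int i = 0; i < n; i++) {        /* find the nearest request */
                if (done[i]) continue;
                int d = abs(req[i] - head);
                if (best < 0 || d < best_dist) { best = i; best_dist = d; }
            }
            done[best] = 1;
            total += best_dist;
            head = req[best];
            printf("move to %d (seek %d)\n", head, best_dist);
        }
        printf("total head movement = %d tracks\n", total);   /* 236 */
        return 0;
    }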
9. Semaphores
Semaphores are synchronization tools used to manage concurrent processes and prevent race
conditions. Introduced by Edsger Dijkstra, semaphores are integral in handling process
synchronization and resource sharing in operating systems.
Types of Semaphores:
1. Binary Semaphore:
o Value is either 0 or 1; used for mutual exclusion, much like a lock.
2. Counting Semaphore:
o Value can be any non-negative integer; used to control access to a pool of identical resources.
Operations on Semaphores:
1. Wait (P):
o Decrements the semaphore value.
o If the value becomes negative, the process is blocked until the value is positive.
wait(S):
    S = S - 1
2. Signal (V):
o Increments the semaphore value, waking a blocked process if one is waiting.
signal(S):
    S = S + 1
Applications of Semaphores:
1. Mutual Exclusion:
o Ensures that only one process executes its critical section at a time.
2. Producer-Consumer Problem:
o Producer adds data; consumer removes it, avoiding buffer overflow or underflow.
3. Readers-Writers Problem:
o Synchronizes access to a shared resource where readers can read concurrently, but
writers need exclusive access.
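As a sketch of the Producer-Consumer application above, the following C program uses POSIX semaphores: empty_slots counts free buffer slots, full_slots counts filled ones, and a binary semaphore guards the buffer itself. The buffer size and item count are illustrative (compile with -pthread):

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    #define N 4                         /* buffer capacity (illustrative) */
    int buffer[N], in = 0, out = 0;
    sem_t empty_slots, full_slots, mutex;

    void *producer(void *arg) {
        for (int item = 1; item <= 10; item++) {
            sem_wait(&empty_slots);     /* P: wait for a free slot       */
            sem_wait(&mutex);           /* P: enter critical section     */
            buffer[in] = item; in = (in + 1) % N;
            sem_post(&mutex);           /* V: leave critical section     */
            sem_post(&full_slots);      /* V: announce a filled slot     */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int k = 0; k < 10; k++) {
            sem_wait(&full_slots);
            sem_wait(&mutex);
            int item = buffer[out]; out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty_slots);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty_slots, 0, N);   /* all slots empty initially */
        sem_init(&full_slots, 0, 0);
        sem_init(&mutex, 0, 1);         /* binary semaphore          */
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }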
Advantages:
Simple and efficient primitives for mutual exclusion and process coordination.
Disadvantages:
Misordered wait/signal operations can cause deadlock, busy-waiting implementations waste CPU time, and careless use can lead to priority inversion.
10. Deadlock
Deadlock occurs when processes are waiting for resources in a circular chain, and no process can
proceed.
1. Mutual Exclusion:
o At least one resource is held in a non-shareable mode.
2. Hold and Wait:
o A process holds at least one resource while waiting for others.
3. No Preemption:
o Resources cannot be forcibly taken away from the processes holding them.
4. Circular Wait:
o A circular chain of processes exists, where each process is waiting for a resource held
by the next process.
Strategies for handling deadlock:
1. Prevention:
o Eliminate one of the four necessary conditions.
2. Avoidance:
o Grant resource requests only if the system stays in a safe state (e.g., Banker's Algorithm).
3. Detection:
o Identify deadlocks, typically by finding cycles in a resource allocation graph.
4. Recovery:
o Break the deadlock through process termination or resource preemption.
11. Deadlock
Deadlock is a condition in which two or more processes are unable to proceed because each is
waiting for the other to release resources. It is a significant issue in concurrent programming systems
where processes compete for limited resources. Deadlock occurs when all the following four
necessary conditions are present:
1. Mutual Exclusion: This condition requires that at least one resource must be held in a non-
shareable mode, meaning only one process can access it at a time. If a resource is being used
by one process, others must wait.
2. Hold and Wait: In this situation, a process holding one resource is waiting to acquire
additional resources that are currently being held by other processes.
3. No Preemption: Once a resource has been allocated to a process, it cannot be forcibly taken
away from the process. Resources can only be released voluntarily by the process that holds
them.
4. Circular Wait: This condition occurs when a set of processes form a circular chain, where
each process is waiting for a resource that is held by the next process in the chain.
When all these conditions are present, the system enters a deadlock state. In a deadlock situation, no
process can make progress, leading to inefficiencies and delays. Deadlock detection and resolution
mechanisms are required in systems to identify and deal with such situations. Deadlock can be
prevented, avoided, detected, or recovered from using various strategies like resource allocation
protocols, process termination, and resource preemption.
Deadlock management strategies are designed to prevent, avoid, detect, and recover from deadlock
situations.
Deadlock Prevention
Deadlock prevention aims to eliminate one of the necessary conditions for deadlock to occur. This
can be done by:
Mutual Exclusion: In some cases, resources may be shared instead of allocated exclusively to
a single process. For example, allowing multiple processes simultaneous read-only access to a
resource eliminates the need for mutual exclusion on it.
Hold and Wait: To prevent this condition, processes may be required to request all the
resources they need at once. This ensures that a process does not hold resources while
waiting for others.
No Preemption: The system could allow resources to be preempted from processes and
reallocated to other processes, so that no process can hold resources indefinitely while blocking others.
Circular Wait: Imposing a strict order on resource allocation can prevent circular waiting.
Processes must request resources in a predefined order to ensure that circular wait cannot
form.
Deadlock Avoidance
Deadlock avoidance involves dynamically analyzing resource requests and ensuring that granting a
request will not lead the system into an unsafe state. The Banker's Algorithm is a well-known
deadlock avoidance method. It checks if the requested resources will leave enough resources for
other processes to complete without entering a deadlock.
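The core of the Banker's Algorithm is a safety check: can every process finish in some order if its remaining needs are granted? The C sketch below runs that check for illustrative matrices (a full implementation would also validate each incoming request against Need and Available):

    #include <stdio.h>

    #define P 3   /* number of processes (illustrative) */
    #define R 2   /* number of resource types           */

    int main(void) {
        int avail[R]    = {3, 2};
        int alloc[P][R] = {{1, 0}, {2, 1}, {1, 1}};
        int need[P][R]  = {{2, 2}, {1, 1}, {3, 1}};  /* Max - Allocation */
        int finished[P] = {0};
        int done = 0, progress = 1;

        while (progress) {           /* find a process with Need <= Available */
            progress = 0;
            for (int p = 0; p < P; p++) {
                if (finished[p]) continue;
                int ok = 1;
                for (int r = 0; r < R; r++)
                    if (need[p][r] > avail[r]) { ok = 0; break; }
                if (ok) {            /* p can finish: release its resources */
                    for (int r = 0; r < R; r++) avail[r] += alloc[p][r];
                    finished[p] = 1; done++; progress = 1;
                    printf("P%d can finish\n", p);
                }
            }
        }
        printf(done == P ? "state is SAFE\n" : "state is UNSAFE\n");
        return 0;
    }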
Deadlock Detection
Deadlock detection algorithms are used to identify whether a system is in a deadlock state. The most
common method is to analyze the resource allocation graph (RAG) to check if there is a cycle in the
graph, which would indicate deadlock.
Deadlock Recovery
Process Termination: Terminating one or more processes involved in the deadlock to break
the cycle.
Resource Preemption: Preempting resources from some processes and reallocating them to
other processes to resolve the deadlock.
Memory Management
Memory management is a crucial function of an operating system (OS) that ensures efficient
utilization of system memory. It is responsible for managing the allocation, deallocation, and tracking
of memory spaces for processes.
1. Primary Memory (RAM): Volatile memory that stores currently executing processes and
programs.
2. Secondary Memory (Disk Storage): Non-volatile memory used for storing data, programs,
and files when not in use.
Memory management techniques (such as single contiguous allocation and partitioned allocation)
aim to make sure that processes and programs have access to sufficient memory and that memory is
allocated and deallocated efficiently. Key functions of memory management include:
1. Memory Fragmentation: Fragmentation occurs when free memory is scattered across the
system, making it difficult to allocate large contiguous blocks of memory. This can be
managed using strategies like compaction or paging.
2. Paging: Paging divides memory into fixed-size blocks called pages and divides processes into
pages of the same size. This eliminates the problem of fragmentation and allows non-
contiguous allocation.
3. Segmentation: This technique divides the memory into segments based on logical divisions
of a program, such as code, data, and stack. Segmentation allows better management of
variable-sized memory blocks.
Virtual Memory
Virtual memory is an essential memory management technique that creates the illusion of a larger
main memory than what is physically available by using disk storage as an extension of RAM. This
allows a system to run large applications that exceed the size of physical memory.
Paging: In virtual memory systems, the data is divided into pages. When a program accesses
data that is not in RAM, a page fault occurs, and the required page is swapped in from the
disk.
Page Tables: The operating system maintains a page table to map virtual addresses to
physical addresses. This allows programs to access memory locations that seem contiguous
in virtual memory, even though they may not be physically contiguous in main memory.
Swapping: The OS moves pages between RAM and secondary storage (usually the hard disk)
as needed; loading a page only when it is first referenced is called demand paging.
The advantage of virtual memory is that it enables the system to run large applications without
worrying about the physical memory limitations. However, excessive swapping can lead to thrashing,
where the system spends most of its time swapping data instead of executing processes.
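To see how a page table turns a virtual address into a physical one, here is a toy single-level translation in C; the 4 KB page size is common in practice, but the table contents are made up for illustration:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096           /* 4 KB pages                      */
    #define NUM_PAGES 8              /* tiny illustrative address space */

    /* page_table[virtual page] = physical frame (-1 means not resident) */
    int page_table[NUM_PAGES] = {2, -1, 5, -1, 0, -1, -1, 3};

    int main(void) {
        uint32_t vaddr = 2 * PAGE_SIZE + 100;   /* page 2, offset 100 */
        uint32_t page  = vaddr / PAGE_SIZE;
        uint32_t off   = vaddr % PAGE_SIZE;

        if (page_table[page] < 0) {
            /* A real OS would now service a page fault: fetch the page
               from disk, update the table, and retry the access.       */
            printf("page fault on page %u\n", page);
        } else {
            uint32_t paddr = page_table[page] * PAGE_SIZE + off;
            printf("virtual 0x%x -> physical 0x%x\n", vaddr, paddr);
        }
        return 0;
    }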
15. Thrashing
Thrashing is a situation that occurs when the operating system spends the majority of its time
swapping data between the hard drive and RAM rather than executing processes. This happens when
the system does not have enough physical memory to support all the running processes, causing
excessive paging or swapping.
When the system is thrashing, the CPU spends more time managing memory than executing the
tasks that the user requested. As a result, performance severely degrades, and the system becomes
nearly unusable. This problem is most common when the degree of multiprogramming (the number
of processes running concurrently) is too high for the available memory.
Causes of Thrashing:
1. Insufficient RAM: If a system is running more processes than can be held in physical memory,
the OS will frequently swap pages in and out of memory, leading to thrashing.
2. High Multiprogramming: Running too many processes simultaneously can overwhelm the
available memory resources.
3. Poorly Managed Memory: If the memory manager is not efficient, the system may not
handle memory resources effectively, exacerbating thrashing.
Solutions to Thrashing:
Increasing Physical Memory: Adding more RAM can alleviate the need for extensive paging.
Reducing Multiprogramming: Limiting the number of concurrently running processes so that
their combined working sets fit in physical memory.
Memory Management Algorithms: Using better algorithms for memory allocation, such as
the Least Recently Used (LRU) page replacement algorithm, can reduce the likelihood of
thrashing.
16. Paging & Demand Paging
Paging is a memory management scheme that eliminates the need for contiguous memory
allocation. The idea behind paging is to divide both the logical address space (the process) and the
physical memory (RAM) into fixed-size blocks. The logical address space is divided into equal-sized
pages, and the physical memory is divided into frames. The number of pages depends on the size of
the process, and the number of frames depends on the amount of available physical memory.
A page table is maintained by the operating system, which maps the logical addresses (virtual
addresses) to the physical addresses (frame numbers) in the main memory. Paging ensures that the
entire address space does not need to be loaded into memory at once, thus optimizing memory
usage and minimizing fragmentation.
Demand Paging refers to a paging system where pages are loaded into memory only when they are
needed. Rather than loading all pages of a process at once, which would consume significant
memory resources, demand paging loads pages into memory when a process attempts to access a
page that is not currently in memory, triggering a page fault. When a page fault occurs, the operating
system retrieves the required page from secondary storage (typically the hard disk) and loads it into a
free frame in RAM.
One of the key advantages of demand paging is that it reduces the amount of memory used at any
given time, allowing the system to handle larger programs and multitasking more efficiently.
However, demand paging introduces overhead due to the cost of swapping pages in and out of
memory, especially if a program frequently accesses different parts of its address space.
Page Replacement Algorithms: When physical memory is full and a new page must be loaded, a
page replacement algorithm decides which page to remove from memory to make space for the new
one. Common algorithms include:
First-In, First-Out (FIFO): Removes the page that has been resident in memory the longest.
Least Recently Used (LRU): Removes the page that has not been used for the longest time.
Optimal: Replaces the page that will not be used for the longest period of time in the future
(this is theoretical, as it requires future knowledge of page access).
17. File System
A file system is a method used by the operating system to organize and store files on disk storage. It
provides a way for users and applications to access, manage, and manipulate data stored on a
storage medium. The main components and structure of a file system include:
1. File Control Block (FCB): The FCB is a data structure that contains important metadata about
a file. It includes information such as the file name, its location on disk, the file's size,
permissions (read, write, execute), the creation date, and more.
2. File Descriptors: A file descriptor is an identifier that points to an open file. It is used by the
operating system to reference an open file in memory, allowing the system to access the
file's content.
3. Inodes (Index Nodes): Inodes are data structures that store information about files. Each
inode contains metadata about a file, such as its type, size, ownership (user and group), file
permissions, timestamps (created, modified), and pointers to the disk blocks where the
actual file data is stored. Inodes do not store the file name; the name is stored in the
directory entry that points to the inode.
4. Directories: Directories map file names to the inodes or FCBs that describe the files,
organizing the file system into a hierarchy.
5. Disk Blocks: Disk blocks are the basic units of storage on a disk. Files are stored in these
blocks, and the file system manages the allocation of blocks. The size of a block depends on
the file system and is typically a power of two (e.g., 512 bytes, 1024 bytes, etc.).
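On UNIX-like systems, the stat system call exposes a file's inode metadata directly. The sketch below prints a few of the fields described above; "example.txt" is a placeholder path:

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        struct stat sb;
        if (stat("example.txt", &sb) != 0) {   /* placeholder path */
            perror("stat");
            return 1;
        }
        printf("inode number : %lu\n",  (unsigned long)sb.st_ino);
        printf("size (bytes) : %lld\n", (long long)sb.st_size);
        printf("permissions  : %o\n",   sb.st_mode & 0777);
        printf("owner uid    : %u\n",   (unsigned)sb.st_uid);
        return 0;
    }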
File Allocation Methods: The method used to store files on disk is an important part of the file
system design. Common allocation methods include:
Contiguous Allocation: A file is stored in contiguous blocks on the disk. This method is
efficient for accessing files sequentially but leads to fragmentation over time as files are
created and deleted.
Linked Allocation: A file is stored in scattered blocks, with each block containing a pointer to
the next block. This method avoids fragmentation but makes random access slower because
each block must be followed through pointers.
Indexed Allocation: An index block is used to store pointers to all the file's blocks. The file
itself is not stored contiguously, but an index allows the file to be accessed quickly.
18. Access Control & Protection
Access Control refers to the processes and mechanisms used to regulate who can access resources in
a system and what actions they can perform on those resources. It involves enforcing policies and
rules that determine which users or processes have the rights to access which files, devices, or
memory areas.
1. Authentication: Verifying the identity of a user or process. This is typically done through
credentials such as usernames and passwords, biometrics, or digital certificates.
2. Authorization: After authentication, the system determines what actions the authenticated
entity is allowed to perform. This is done based on the permissions assigned to the entity.
3. Access Control List (ACL): An ACL is a list of permissions associated with a resource (e.g., a
file). Each entry in the list specifies which users or groups have access to the resource and
what type of access (read, write, execute) they have.
4. Role-Based Access Control (RBAC): This model assigns permissions to roles rather than
individual users. Users are then assigned roles, which define what resources they can access
and what operations they can perform.
Protection refers to the system's mechanisms for ensuring that resources are protected from
unauthorized access or modification. This involves setting up policies that restrict who can read,
write, or execute a particular file or resource. Protection also ensures that users cannot modify
system settings or access other users' data unless explicitly allowed.
File Permissions: Files can have different access levels for different users or groups, such as
read, write, and execute permissions.
Encryption: Data can be encrypted to prevent unauthorized users from reading it, even if
they can access the file.
Access Control Mechanisms: These are rules and procedures that enforce policies for file
access, ensuring only authorized users can access sensitive data.
19. File Allocation Structure & Method
File allocation refers to the method by which the operating system stores files on the disk. The choice
of file allocation method affects the efficiency of disk space usage, file access speed, and the ease of
file management. Common file allocation methods include:
1. Contiguous Allocation:
o In this method, each file is allocated a contiguous block of disk space. This means
that all the data blocks for a file are stored next to each other on the disk.
Contiguous allocation is easy to implement and provides fast sequential access to
files.
o Advantages: It is efficient for reading and writing large files, as the data is stored in a
single contiguous area. It also reduces the overhead of searching for data blocks.
o Disadvantages: It can lead to fragmentation over time. As files are created, deleted,
and modified, gaps (fragmentation) appear, making it difficult to allocate contiguous
space for new files.
2. Linked Allocation:
o In linked allocation, each file is stored in scattered blocks, and each block contains a
pointer to the next block in the file. The blocks may not be contiguous on the disk.
o Advantages: This method avoids fragmentation because each file can occupy non-
contiguous blocks. It allows the file system to allocate space dynamically, depending
on available blocks.
o Disadvantages: Linked allocation is slower for random access, as the system needs to
traverse the pointers to locate a file’s data blocks. It also requires additional space for
storing the pointers.
3. Indexed Allocation:
o Indexed allocation uses an index block to store the addresses of the data blocks for a
file. Each file has an associated index block, and the index block contains pointers to
the file's data blocks.
o Advantages: It allows fast access to any block in the file, as the index block directly
points to the data blocks. It avoids fragmentation and supports random access.
o Disadvantages: It requires additional space for the index blocks, which may become
large for files with many data blocks.
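As a toy model of indexed allocation, the sketch below shows why random access is fast: reaching any byte offset is a single lookup in the file's index block rather than a pointer chase (block size and block numbers are illustrative):

    #include <stdio.h>

    #define BLOCK_SIZE 512

    /* Index block for one file: disk block numbers holding its data. */
    int index_block[] = {9, 16, 1, 10, 25};
    int num_blocks = 5;

    int main(void) {
        long offset  = 1300;                  /* byte we want to read    */
        int  logical = offset / BLOCK_SIZE;   /* which block of the file */
        int  within  = offset % BLOCK_SIZE;   /* offset inside the block */

        if (logical < num_blocks)
            printf("byte %ld lives in disk block %d at offset %d\n",
                   offset, index_block[logical], within);
        return 0;
    }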
20. Interrupts
An interrupt is a mechanism used by the CPU to temporarily halt the current execution of a process
and transfer control to a special piece of code, called an interrupt handler or interrupt service
routine (ISR), to address a specific event or condition. Once the interrupt is serviced, control is
returned to the process that was executing before the interrupt occurred.
1. Hardware Interrupts: These are generated by hardware devices, such as I/O devices
(keyboard, mouse, disk), timers, or other peripherals, to inform the CPU that an event has
occurred that requires immediate attention. For example, when the keyboard receives a key
press, it generates a hardware interrupt to notify the CPU.
2. Software Interrupts: These are initiated by software, typically in the form of system calls or
exceptions (errors). Software interrupts are used by programs to request services from the
operating system (e.g., memory allocation, file access).
Interrupts are crucial for multitasking systems because they allow the CPU to respond to external
events while continuing to execute other tasks. The interrupt-driven approach helps improve system
responsiveness and resource utilization.
Interrupt Handling Steps:
1. The CPU finishes executing the current instruction and checks if an interrupt is pending.
2. If an interrupt is pending, the CPU saves the current state of the process (using a mechanism
called context saving) to ensure that it can resume execution after handling the interrupt.
3. The CPU locates the appropriate interrupt service routine (typically via an interrupt vector
table) and executes it.
4. After the interrupt is serviced, the CPU restores the state of the process, resuming the
execution from where it left off.
Types of Interrupts:
Maskable Interrupts: These interrupts can be temporarily disabled (masked) by the CPU if
needed.
Non-Maskable Interrupts (NMI): These are high-priority interrupts that cannot be disabled
and must be serviced immediately.
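Real interrupt handlers live in the kernel, but POSIX signals offer a user-space analogy: the OS interrupts the program's normal flow, runs a registered handler, and then resumes execution. A minimal sketch:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    volatile sig_atomic_t got_signal = 0;

    /* Analogous to an ISR: runs when the "interrupt" (signal) arrives. */
    void handler(int signum) {
        (void)signum;
        got_signal = 1;
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = handler;
        sigaction(SIGINT, &sa, NULL);    /* register a handler for Ctrl+C */

        printf("press Ctrl+C to deliver SIGINT...\n");
        while (!got_signal)
            pause();                     /* suspend until a signal arrives */
        printf("handler ran; normal execution resumed\n");
        return 0;
    }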
21. Page Replacement & LRU
Page Replacement is a key concept in virtual memory management. When a system uses paging, it
can run out of physical memory if there are more pages than available frames. In such a case, the
system needs to replace pages in memory to make room for new pages. This is known as page
replacement.
When a page fault occurs (i.e., when a page that a process needs is not currently in memory), the
operating system needs to decide which page to evict (replace) to make space for the required page.
There are various page replacement algorithms used to decide which page to replace.
Least Recently Used (LRU) is one of the most commonly used page replacement algorithms. The LRU
algorithm works based on the assumption that the least recently used pages are more likely to be
replaced in the future. LRU keeps track of the order in which pages were last accessed and evicts the
page that has not been accessed for the longest time.
The operating system maintains a list of all the pages in memory, sorted by the last time they
were accessed. When a page is accessed, it is moved to the front of the list.
When a page fault occurs and there is no free frame, the LRU algorithm evicts the page at
the end of the list (the least recently used page) to make space for the new page.
LRU Implementation:
Counters: One way to implement LRU is by using counters, where each page is assigned a
timestamp or counter value that records when it was last accessed. However, this approach
is inefficient because it requires updating the counters every time a page is accessed.
Stack: Another way to implement LRU is using a stack structure, where the most recently
used page is placed at the top, and the least recently used page is at the bottom. When a
page is accessed, it is moved to the top of the stack.
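A compact C sketch of the counter approach described above: each frame records the "time" of its last use, and the frame with the oldest timestamp becomes the victim. The reference string and frame count are illustrative:

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};   /* reference string */
        int n = 8, faults = 0;
        int frame[FRAMES], last_used[FRAMES];
        for (int i = 0; i < FRAMES; i++) frame[i] = -1;

        for (int t = 0; t < n; t++) {
            int hit = -1;
            for (int f = 0; f < FRAMES; f++)
                if (frame[f] == refs[t]) hit = f;
            if (hit >= 0) { last_used[hit] = t; continue; }   /* page hit */

            faults++;                            /* page fault             */
            int victim = -1;
            for (int f = 0; f < FRAMES && victim < 0; f++)
                if (frame[f] == -1) victim = f;  /* use a free frame first */
            if (victim < 0) {                    /* otherwise evict LRU    */
                victim = 0;
                for (int f = 1; f < FRAMES; f++)
                    if (last_used[f] < last_used[victim]) victim = f;
            }
            frame[victim] = refs[t];
            last_used[victim] = t;
        }
        printf("page faults = %d\n", faults);    /* 6 for this string */
        return 0;
    }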
Advantages of LRU:
It provides an efficient way to approximate the optimal page replacement policy, as it takes
into account the temporal locality of reference.
Disadvantages:
LRU can be expensive to implement in hardware, especially with large memory sizes.
Moreover, it can lead to thrashing if the working set of the process is too large for the
available physical memory.
22. Device Drivers
A device driver is a software component that allows the operating system and applications to
communicate with hardware devices. Each hardware device, such as a printer, keyboard, network
adapter, or storage device, requires a specific driver to enable proper interaction between the
hardware and the software.
Functions of Device Drivers:
1. Abstract Hardware Details: Device drivers provide a layer of abstraction between the
hardware and the operating system, allowing the OS to interact with devices without
needing to know the specific details of the hardware.
2. Control and Manage Devices: Device drivers are responsible for controlling hardware
devices, such as sending data to output devices (e.g., printers) or receiving data from input
devices (e.g., keyboards or mice).
3. Handle Device Interrupts: When a device generates an interrupt, the device driver handles
the interrupt and communicates with the hardware to perform the necessary operation.
4. Manage Data Transfer: Device drivers manage data transfer between the hardware and
memory, ensuring that data is correctly formatted and transmitted.
Types of Device Drivers:
Character Drivers: These drivers manage devices that transmit data one character at a time,
such as keyboards or serial ports. Character devices are accessed as streams of data.
Block Drivers: These drivers manage devices that store data in fixed-size blocks, such as hard
disks or SSDs. Block devices allow random access to data.
Network Drivers: These drivers manage network interfaces and protocols, enabling
communication between devices over networks.
How Device Drivers Work:
1. The operating system interacts with device drivers through system calls (e.g., read, write,
open, close).
2. Device drivers interact with the hardware using I/O ports and memory-mapped registers.
3. The OS may provide a generic driver for a class of devices (e.g., generic printer drivers) or
require a specific driver for each device (e.g., proprietary GPU drivers).
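From user space, talking to a driver looks like ordinary file I/O: open and read system calls are routed by the kernel to the driver behind the device file. This sketch pulls a few random bytes through /dev/urandom, a standard character device on Linux:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* The kernel routes this open() to the character-device driver. */
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        unsigned char buf[4];
        if (read(fd, buf, sizeof buf) == (ssize_t)sizeof buf)  /* driver's read */
            printf("random bytes: %02x %02x %02x %02x\n",
                   buf[0], buf[1], buf[2], buf[3]);
        close(fd);
        return 0;
    }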
Device drivers are often installed automatically by the OS when new hardware is detected. In
some cases, users may need to install or update drivers manually.
The drivers need to be updated regularly to support new hardware features and fix bugs or
vulnerabilities.
23. Process Management & Scheduling
Process management is an essential function of the operating system that ensures efficient
execution of processes. A process is an instance of a program in execution, and process management
is responsible for creating, scheduling, terminating, and managing the state of processes.
1. Process Control Block (PCB): The PCB is a data structure used by the OS to store the state of
a process. It includes information such as the process ID, program counter, CPU registers,
memory management information, and I/O status.
2. Process Scheduling: The OS uses a process scheduler to manage the execution of processes.
Processes are placed in a ready queue, and the scheduler decides which process will execute
next based on various algorithms, such as Round Robin, First-Come-First-Serve (FCFS), or
Priority Scheduling.
3. Process States: A process moves through several states during its lifetime:
o Ready: The process is ready to execute but is waiting for CPU time.
o Running: The process is currently executing on the CPU.
o Waiting (Blocked): The process is waiting for an event or resource (e.g., I/O).
Common CPU scheduling algorithms include:
First-Come-First-Serve (FCFS): This algorithm schedules processes in the order they arrive in
the ready queue. While simple, FCFS can cause long waiting times if a large process arrives
before shorter ones (convoy effect).
Shortest Job Next (SJN): This algorithm selects the process with the shortest burst time
(execution time) next. It minimizes waiting time but requires knowing the burst time in
advance, which is not always possible.
Round Robin (RR): This is a preemptive scheduling algorithm where each process gets a fixed
time slice (quantum) to execute. If a process does not finish within its quantum, it is placed
back in the ready queue, and the next process is scheduled.
Inter-process Communication (IPC): Processes often need to communicate with each other. This can
be achieved through:
Shared Memory: Processes share a region of memory and exchange data by reading and
writing it directly; this is fast but requires explicit synchronization.
Message Passing: Processes can send and receive messages using system calls like send() and
receive(). A minimal pipe-based sketch follows.
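Here is the pipe-based sketch mentioned above: the parent writes a message into a POSIX pipe and the child reads it, a minimal form of message passing between related processes:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fds[2];              /* fds[0] = read end, fds[1] = write end */
        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        if (fork() == 0) {       /* child: receives the message */
            char buf[64];
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
            return 0;
        }
        /* parent: sends the message */
        close(fds[0]);
        const char msg[] = "hello from parent";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);              /* reap the child */
        return 0;
    }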