Operating System and its Functions

An Operating System (OS) is system software that serves as an interface between users and computer hardware, facilitating program execution. It has evolved from early batch processing systems to complex structures like microkernels and supports various functions including process, memory, file, and device management. Additionally, OS can be classified based on processing methods and user interactions, with key types including batch, interactive, time-sharing, and real-time systems.


UNIT = 1

 Operating System and its Functions


1. Introduction to Operating System

Definition:
An Operating System (OS) is a system software that acts as an
interface between the user and the computer hardware. It provides an
environment in which a user can execute programs conveniently and
efficiently.

Example:
Popular operating systems include Windows, Linux, macOS, Android,
and iOS.

2. Evolution of Operating Systems

✅ Early Systems (Batch Processing):

 No direct user interaction.


 Jobs were submitted using punched cards.
 Output collected after processing.

✅ Multiprogramming Systems:

 Multiple programs reside in memory.


 CPU is kept busy by switching between jobs.

✅ Time-Sharing Systems:

 Interactive.
 Multiple users share system resources simultaneously.

✅ Real-Time Systems:

 Used in critical applications like air traffic control.

 Provides immediate response to input.

3. Objectives of Operating System

Convenience: Makes computer easier to use.

Efficiency: Manages hardware resources for better performance.

Ability to evolve: Allows upgrades and modifications.

Resource Management: Allocates resources like CPU, memory, etc.

4. Functions of Operating System

Below are the major functions, each with sub-points and explanations:

4.1 Process Management

Definition: Manages processes in the system, including process creation, scheduling, and termination.

Key Points:

 Process Scheduling: Determines the order of process execution using scheduling algorithms (e.g., FCFS, SJF, Round Robin).
 Process Synchronization: Ensures proper execution order,
avoiding conflicts (e.g., semaphores).
 Deadlock Handling: Detects and resolves deadlocks.

4.2 Memory Management

Definition: Manages primary memory (RAM).

Key Points:

 Allocation & Deallocation: Allocates memory to processes and
frees it when not needed.
 Swapping: Moves processes between main memory and disk.
 Virtual Memory: Allows execution of processes larger than
physical memory using paging or segmentation.

4.3 File System Management

Definition: Manages files on storage devices.

Key Points:

 File Organization: Supports hierarchical directory structure.


 File Access: Provides methods to create, read, write, and delete
files.
 File Permissions: Ensures security and access control.

4.4 Device Management

Definition: Manages I/O devices like keyboards, printers, and disks.

Key Points:

 Device Drivers: Software modules that control hardware devices.


 Buffering and Spooling: Techniques to improve device
efficiency.
 Device Allocation: Allocates devices to processes.

4.5 Security and Protection

Definition: Ensures system integrity and protects data.

Key Points:

 Authentication: Verifies user identity.


 Authorization: Controls user access rights.
 Encryption: Protects data from unauthorized access.

4.6 User Interface

Definition: Provides interaction between the user and the hardware.

Key Points:

 Command-Line Interface (CLI): Text-based interaction.


 Graphical User Interface (GUI): Visual interaction using windows,
icons, etc.

5. Additional Functions (Advanced)

Networking: Enables communication between systems.

System Performance Monitoring: Tracks system performance.

Accounting: Keeps usage records.

Error Detection and Handling: Detects errors and recovers gracefully.

Summary Table

Function            Description
Process Management  Handles processes, scheduling, deadlocks
Memory Management   Allocates/deallocates memory, virtual memory
File Management     Manages files, directories, access permissions
Device Management   Controls I/O devices, drivers, spooling
Security            Protects data and system integrity
User Interface      CLI and GUI for user interaction
Networking          Enables system communication
Accounting          Tracks resource usage
Error Handling      Detects and recovers from errors

 Classification of Operating Systems

Operating Systems can be classified based on processing methods, number of users, number of processors, and response time. Below are the main classifications you should focus on:

1 Batch Operating System

Definition:
A batch operating system executes a batch of jobs without user
interaction during execution.

Key Points:

 Jobs are collected, grouped, and processed sequentially.


 Suitable for large, repetitive tasks (e.g., payroll processing).
 No direct user interaction during job execution.
 Example: Early IBM mainframes.

Advantages:

 Efficient for executing similar tasks.


 High CPU utilization.

Disadvantages:

 No interaction with users during execution.


 Difficult to debug.

2 Interactive Operating System

Definition:
Allows direct interaction between the user and the computer during
program execution.

Key Points:

 Supports user commands via keyboard or GUI.


 Example: Unix Shell, Windows Command Prompt.
 Immediate feedback provided to the user.

Advantages:

 User can influence program execution.


 Easier debugging and real-time control.

Disadvantages:

 More complex design.


 Increased CPU overhead.

3 Time-Sharing Operating System

Definition:
Allows multiple users to share system resources simultaneously,
giving the illusion of exclusivity.

Key Points:

 CPU time is divided among users using time slices or quantum.


 Fast context switching ensures each user feels like they have their
own system.
 Examples: Unix, Linux.

Advantages:

 Efficient utilization of CPU.


 Supports multiple users and applications.

Disadvantages:

 More overhead due to frequent context switching.


 Security concerns in multi-user environment.

4 Real-Time Operating System (RTOS)

Definition:
Designed to process real-time applications with strict timing constraints.

Key Points:

1. Hard Real-Time System: Guarantees critical tasks will complete on time (e.g., aircraft systems).
2. Soft Real-Time System: Prioritizes critical tasks but allows
some flexibility (e.g., multimedia streaming).

Advantages:

 Predictable and reliable.


 Essential for critical applications.

Disadvantages:

 Complex to design.
 Expensive hardware requirements.

5 Multiprocessor Systems

Definition:
Uses two or more processors sharing the same memory and resources.

Key Points:

1. Symmetric Multiprocessing (SMP): All processors are equal (e.g., modern servers).
2. Asymmetric Multiprocessing (AMP): One master processor
controls the system.

Advantages:

 Increased throughput and reliability.


 Better system performance.

Disadvantages:

 Complex OS design.
 Synchronization issues.

6 Multiuser Systems

Definition:
Allows multiple users to access the computer system simultaneously.

Key Points:

 Each user has a separate session.


 Resources are shared among users.
 Example: Unix, Linux servers.

Advantages:

 Resource sharing.
 Cost-effective for organizations.

Disadvantages:

 Security concerns.
 Resource allocation challenges.

7 Multiprogramming Systems

Definition:
Multiple programs reside in memory at the same time, and CPU switches
between them.

Key Points:

 OS keeps several jobs in memory.


 CPU switches to another job when current one waits for I/O.

Advantages:

 Better CPU utilization.


 Increases system throughput.

Disadvantages:

 Complex memory management.
 Risk of deadlocks.

8 Multithreaded Systems

Definition:
An extension of multitasking where a single process can have multiple
threads executing simultaneously.

Key Points:

 Threads share the same memory space.


 Faster context switching between threads.
 Example: Java Threads, POSIX threads.

Advantages:

 Efficient resource sharing.


 Faster execution of concurrent tasks.

Disadvantages:

 Complex to program.
 Risk of data inconsistency (race conditions).

Summary Table

Type of OS        Key Feature                        Example
Batch             Groups jobs; no interaction        Early IBM mainframes
Interactive       User interaction during execution  Unix Shell, Windows CLI
Time-Sharing      Multiple users share CPU           Linux, Unix
Real-Time         Strict timing constraints          Air traffic control
Multiprocessor    Multiple CPUs sharing memory       Modern servers
Multiuser         Multiple users simultaneously      Unix servers
Multiprogramming  Multiple jobs in memory            IBM 360 (early systems)
Multithreaded     Multiple threads per process       Java Threads

 OPERATING SYSTEM STRUCTURE

An Operating System (OS) is a complex software system that manages hardware, software, and user interactions. Its structure determines how different components are organized and how they interact. Let’s understand this systematically:

1 Layered Structure of Operating System

Definition

A layered structure organizes the OS into hierarchical layers, where each layer builds upon the services provided by the layer below it.

Key Feature:

 Modularity: Each layer has well-defined responsibilities.


 Abstraction: Higher layers don’t need to know hardware details.
 Encapsulation: Lower layers hide their implementation details.

Diagram

+-------------------------+
| User Programs / Shell   |  ← Top Layer (User Interaction)
+-------------------------+
| System Utilities        |  ← Service Layer
+-------------------------+
| File System             |  ← File Management Layer
+-------------------------+
| I/O Management          |  ← I/O Layer
+-------------------------+
| Memory Management       |  ← Memory Layer
+-------------------------+
| Process Management      |  ← Process Layer
+-------------------------+
| Hardware Interaction    |  ← Hardware Abstraction Layer
+-------------------------+
| Hardware                |  ← Bottom Layer
+-------------------------+

Advantages

 Easier design, debugging, and maintenance.


 Isolation of errors: Errors in one layer don’t affect others.
 Supports upgrades and enhancements layer-wise.

Disadvantages

 Can be less efficient (due to overhead of crossing layers).


 Rigid structure might not support all features efficiently.

2 System Components of Operating System

The OS is composed of several key components, each responsible for specific tasks:

(a) Process Management

 Handles creation, scheduling, and termination of processes.


 Ensures synchronization and communication between
processes.
 Provides deadlock handling.

(b) Main Memory Management

 Keeps track of which parts of memory are in use and by which process.
 Allocates and deallocates memory as needed.
 Implements virtual memory (if supported).

(c) File System Management

 Organizes and manages files and directories on storage devices.
 Controls file creation, deletion, read, write, and access permissions.

(d) I/O System Management

 Manages input/output devices using device drivers.


 Provides buffering, spooling, and error handling.

(e) Secondary Storage Management

 Manages hard disks, SSDs, and other storage devices.


 Keeps track of free space, storage allocation, and file
placement.

(f) Networking

 Manages network connections and communication.


 Supports protocols, sockets, and network file systems.

(g) Protection and Security

 Controls access to resources (authorization and authentication).
 Protects against unauthorized access and malware.

3 Operating System Services

An OS provides several essential services to users and application programs:

Service                   Description
Program Execution         Loads and runs programs; provides environment for process execution.
I/O Operations            Handles input/output requests, including device drivers.
File System Manipulation  Provides file operations like create, delete, read, write, and permissions.
Communication Services    Enables processes to communicate (via shared memory or message passing).
Error Detection           Detects and recovers from system and application errors.
Resource Allocation       Manages CPU time, memory, and I/O devices among users/programs.
Security & Protection     Ensures only authorized access to resources and protects user data.

Summary Table

Section            Description
Layered Structure  Organizes OS into hierarchical layers for modularity and abstraction.
System Components  Process, Memory, File, I/O, Storage, Networking, Security.
OS Services        Execution, I/O handling, File management, Communication, Error detection, Security, etc.

 Operating System Structure

4 Reentrant Kernels

Definition:
A reentrant kernel is a kernel that can be safely shared among
multiple processes. It means that multiple processes can execute kernel
code simultaneously without interfering with each other.

Key Features:

 Code Sharing: Kernel code is read-only, so multiple processes can use it simultaneously.
 No Global Variables: Uses private process stacks or context
blocks instead of global data.
 Concurrency Support: Supports interrupts and system calls
even while another process is using the kernel.

Advantages:

 Allows concurrent execution of processes in kernel mode.


 Improves CPU utilization.
 Enables better responsiveness for interrupts and system calls.

Disadvantages:

 Complex to design and implement.


 Requires careful management of process-specific data.

Example:
Modern Unix and Linux kernels are reentrant because they allow
multiple processes to be inside the kernel concurrently.

5 Monolithic Systems

Definition:
A monolithic kernel is a single large process that runs entirely in a single
address space in kernel mode. All OS services—such as process
management, file system, device drivers, etc.—run together.

Key Features:
✅ Entire OS runs in kernel mode.
✅ All services are tightly coupled.
✅ Communication between services uses direct procedure calls.

Advantages:

 Fast execution due to direct communication.


 Simple design compared to layered or modular systems.

Disadvantages:

 Difficult to maintain and debug (a bug in one service can crash
the entire OS).
 Adding new features can be challenging.
 Less modular, harder to port to different hardware.

Examples:

a) Traditional Unix kernel.


b) Early versions of Linux (although Linux now has some modular
capabilities).

6 Microkernel Systems

Definition:
A microkernel structure minimizes the core kernel to essential services
like inter-process communication (IPC), basic scheduling, and low-
level hardware management. Other services (file systems, device
drivers, network stack) run in user space as separate processes.

Key Features:
✅ Small, minimal kernel running in kernel mode.
✅ Most OS services run as user-space servers.
✅ Communication between services uses message passing.

Advantages:

 High modularity (easy to add, remove, or update services).


 Better fault isolation (failure in one service does not crash the
kernel).
 Easier to port and maintain.

Disadvantages:

 Slower performance due to frequent message passing.


 More complex inter-process communication.

Examples:

a) Mach microkernel (used in early Mac OS X).


b) QNX (real-time OS).
c) Minix (educational OS).

UNIT = 2

 Concurrent Processes

Concurrency allows multiple processes to execute simultaneously or appear to do so, enabling efficient resource utilization and responsiveness in an OS. Let’s explore this systematically:

1 Process Concept

Definition:
A process is an instance of a program in execution, with its own address
space, stack, data, and code.

Key Points:
✅ Contains program counter, registers, and variables.
✅ OS uses a Process Control Block (PCB) to manage each process.
✅ Processes may be independent or cooperating (e.g., sharing data).

2 Principle of Concurrency

Definition:
Concurrency means multiple processes execute in overlapping time
periods—either actually simultaneously (on multiprocessor systems) or
by rapid switching (time-sharing).

Key Points:
✅ Enhances resource utilization (CPU, I/O).
✅ Supports multi-user systems.
✅ Introduces challenges like race conditions, deadlocks, and
synchronization.

3 Producer/Consumer Problem

Definition:
A classic example of inter-process communication (IPC) where:

 The producer generates data and places it into a buffer.


 The consumer removes data from the buffer.

Key Issues:
✅ Ensuring the producer doesn’t overwrite a full buffer.
✅ Ensuring the consumer doesn’t read an empty buffer.

Solutions:

 Use semaphores or monitors for synchronization.


 Example: bounded buffer problem.

4 Mutual Exclusion

Definition:
Ensures that only one process at a time can access a shared resource
(e.g., a variable or file).

Critical Points:
✅ Necessary to prevent race conditions.
✅ Implemented using critical sections.

5 Critical Section Problem

Definition:
A critical section is a part of the program where the process accesses
shared resources.

Solution Requirements (Three Conditions):

1) Mutual Exclusion: Only one process executes in the critical


section at a time.
2) Progress: No process outside the critical section should block
others from entering it.
3) Bounded Waiting: Each process must get a chance to execute its
critical section after a finite time.

6 Dekker’s Solution

Definition:
One of the first software solutions to the critical section problem for two
processes.

Key Points:
✅ Uses flags to indicate a process’s desire to enter the critical section.
✅ Uses a turn variable to decide who enters next.
✅ Ensures mutual exclusion, progress, and bounded waiting.

Drawback:

Complex for more than two processes.

7 Peterson’s Solution

Definition:
A simpler and elegant solution to the two-process critical section
problem.

Key Points:
✅ Uses two flags (flag[0], flag[1]) and a turn variable.
✅ Ensures mutual exclusion, progress, and bounded waiting.

Algorithm Sketch:

flag[i] = true;
turn = j;
while (flag[j] && turn == j);
// critical section
flag[i] = false;

(i = current process, j = other process)

8 Semaphores

Definition:
A synchronization primitive introduced by Dijkstra to control access to
shared resources.

Types:

1) Binary Semaphore: 0 or 1 (like a lock).


2) Counting Semaphore: Allows multiple units of a resource.

Operations:

 wait (P): Decrements the semaphore; blocks the caller if the value becomes negative.
 signal (V): Increments the semaphore; wakes a blocked process, if any.

9 Test-and-Set Operation

Definition:
A hardware atomic instruction used for implementing mutual exclusion.

Working:
✅ Tests the lock variable and sets it atomically.
✅ If lock is 0 (unlocked), sets it to 1 (locked) and proceeds.
✅ If lock is already 1, keeps spinning (busy waiting).

Drawbacks:

 Can lead to busy waiting (wastes CPU cycles).
 Best suited to short waits on multiprocessor systems; on a single CPU, spinning wastes the entire time slice.

Summary Table

Concept              Description
Process Concept      A program in execution with its own resources and state.
Concurrency          Overlapping execution of processes, enabling multitasking.
Producer/Consumer    Classic IPC problem involving a buffer shared by producer and consumer.
Mutual Exclusion     Ensures exclusive access to shared resources.
Critical Section     Part of code accessing shared resources; must be protected.
Dekker’s Solution    First software solution for 2-process mutual exclusion.
Peterson’s Solution  Simpler solution using flags and turn variable.
Semaphores           OS synchronization primitive (binary or counting).
Test-and-Set         Atomic hardware instruction for mutual exclusion.

 Classical Problems in Concurrency

Operating systems use these problems as case studies to understand, design, and test synchronization solutions in concurrent systems.

1 Dining Philosopher Problem

Problem Statement:

 Proposed by Dijkstra in 1965.


 Imagine N philosophers sitting around a circular table.
 Each philosopher has a plate of spaghetti and needs two forks
(left and right) to eat.
 Philosophers alternate between thinking and eating.

Key Issues:

✅ Mutual Exclusion: A fork (shared resource) cannot be used by two philosophers at once.

✅ Deadlock: If every philosopher picks up their left fork and waits for
the right one, deadlock occurs.

✅ Starvation: Some philosophers might never get both forks.

Requirements:

 Prevent deadlock (no circular wait).


 Prevent starvation (every philosopher eventually eats).

Possible Solutions:

1) Resource Hierarchy Solution: Philosophers pick the lower-numbered fork first, breaking circular wait.
2) Allow at most (N-1) philosophers to sit at the table: At least one philosopher can always eat.
3) Semaphore Solution:

 Each fork represented by a binary semaphore.


 A global semaphore to limit the number of philosophers who
can sit down.

Example (Semaphore):

// Pseudocode
Semaphore forks[N];    // one semaphore per fork, all initialized to 1
Semaphore room = N-1;  // at most N-1 philosophers seated at once

// Philosopher i's algorithm
wait(room);
wait(forks[i]);          // pick up left fork
wait(forks[(i+1)%N]);    // pick up right fork
// eat
signal(forks[i]);
signal(forks[(i+1)%N]);
signal(room);

Diagram:

          [P0]
     fork4      fork0
  [P4]              [P1]
     fork3      fork1
       [P3]----[P2]
          fork2

2 Sleeping Barber Problem

Problem Statement:

Models a barber shop with:

1) 1 barber who sleeps when there are no customers.


2) n waiting chairs.
3) Customers come at random times.

Key Issues:

1) Mutual Exclusion: Access to waiting chairs must be synchronized.


2) Synchronization: Barber must sleep when no customers, and be
woken up by customers.
3) Deadlock-Free: Ensure no customer waits forever.

Scenario:

 If the barber is cutting hair:

The next customer sits in a waiting chair or leaves if all chairs are occupied.

 If the barber is asleep:

The arriving customer wakes him.

Solution (Using Semaphores):

✅ Semaphore customers: Counts waiting customers.


✅ Mutex: For mutual exclusion on chairs.

Example (Semaphore):

Semaphore customers = 0;   // counts waiting customers
Semaphore barber = 0;      // barber ready to cut hair
Semaphore mutex = 1;       // protects 'waiting'
int waiting = 0, chairs = n;

// Customer's process
wait(mutex);
if (waiting < chairs) {
    waiting++;
    signal(customers);     // wake barber
    signal(mutex);
    wait(barber);          // wait for barber to be ready
    // get haircut
} else {
    signal(mutex);
    // leave shop
}

// Barber's process
while (true) {
    wait(customers);       // sleep if no customers
    wait(mutex);
    waiting--;
    signal(barber);        // call customer for haircut
    signal(mutex);
    // cut hair
}

Comparison Table

Feature    Dining Philosopher              Sleeping Barber
Concept    Resource allocation (forks)     Producer-consumer (customers)
Challenge  Deadlock, starvation            Mutual exclusion, sleep/wake
Solution   Semaphores, resource hierarchy  Semaphores (customers, barber, mutex)
Example    n philosophers                  1 barber, n chairs

 INTERPROCESS COMMUNICATION (IPC)

Definition:
Interprocess Communication (IPC) refers to the mechanisms an
Operating System provides to allow processes to exchange data and
synchronize their actions.

1 IPC Models

A. Message Passing Model

Concept:
Processes communicate by sending and receiving messages (packets of
data).

 Each process has a send() and receive() primitive.


 Useful for distributed systems or non-shared-memory systems.

Advantages:

 No shared variables → simpler to implement in distributed OS.


 Easy to protect processes from each other.

Disadvantages:
❌ Slower than shared memory (due to copying overhead).
❌ Complex message management (buffering, delivery).

B. Shared Memory Model

Concept:
Processes share a region of memory.

 Processes can read/write this region directly.


 Needs synchronization (semaphores, mutexes) to avoid
conflicts.

Advantages:

 Faster than message passing (no copying overhead).
 Direct data exchange.

Disadvantages:
❌ Complex synchronization (critical section problem).
❌ Harder to debug race conditions.

2 IPC Schemes

A. Direct Communication

Concept:

 Processes must name each other explicitly (e.g. send(P, msg)).


 Communication is one-to-one.

Example:

send(P1, msg)
receive(P2, msg)

B. Indirect Communication

Concept:

 Processes send/receive through mailboxes (ports).


 Mailbox acts as a buffer or queue.
 Many-to-many communication possible.

Example:

send(mailbox1, msg)
receive(mailbox1, msg)

C. Synchronous vs. Asynchronous Communication

Type          Description
Synchronous   Sender waits until receiver is ready; receiver waits if no message.
Asynchronous  Sender continues without waiting; receiver collects messages later.

D. Buffering

Concept:

Messages can be buffered temporarily (queue).

Types of buffers:

a) Zero Capacity (No Buffer): Sender waits until receiver is ready.
b) Bounded Capacity: Limited buffer size (sender may block if full).
c) Unbounded Capacity: Infinite buffer (practically, a very large buffer).

Summary Table of IPC

Model            Description            Advantages                    Disadvantages
Message Passing  send/receive messages  Simpler, distributed systems  Slower
Shared Memory    shared data region     Faster                        Needs synchronization

Scheme                    Description
Direct Communication      Explicit process names
Indirect Communication    Mailboxes/ports
Synchronous/Asynchronous  Blocking/non-blocking
Buffering                 Message queues

 PROCESS GENERATION

Definition:
Process generation refers to how new processes are created in the
operating system.

1 Mechanisms of Process Generation

1. System Initialization

 During OS boot, some essential system processes (e.g. init, daemons) are created.
 These processes run in the background to provide OS services.

2. User Request

 Users create processes by running programs or commands.


 For example, running notepad.exe creates a new process.

3. Parent Process (Fork/Exec Model)

 In UNIX-like systems, processes can create child processes using the fork() system call.

fork(): Duplicates the calling process.

exec(): Replaces the process image with a new program.

 Parent-Child Relationship:

Child inherits some resources (file descriptors) from parent.

Parent may wait for child to complete.

2 Process Hierarchies

1. Hierarchical Model:

Parent and child form a tree structure.

Easy to manage parent-child relationships.

2. Non-Hierarchical Model:

Processes are independent (e.g. Windows OS).

Example: UNIX Process Creation

pid_t pid;
pid = fork();
if (pid == 0) {
    /* Child process */
    execl("/bin/ls", "ls", (char *)NULL);
} else {
    /* Parent process */
    wait(NULL);   /* wait for child to complete */
}

INTERPROCESS COMMUNICATION (IPC) WITH DIAGRAMS

1 IPC Models

A. Message Passing Model

Diagram:

+---------+       send(msg)        +---------+
| Process |----------------------->| Process |
|    A    |                        |    B    |
+---------+                        +---------+

Explanation:

 Process A sends a message to Process B using a send() operation.
 Process B receives the message using receive().

B. Shared Memory Model

Diagram:

+---------+       +----------------+       +---------+
| Process |<----->| Shared Memory  |<----->| Process |
|    A    |       |     Region     |       |    B    |
+---------+       +----------------+       +---------+

Explanation:

 Both processes access a shared region to exchange data.


 Synchronization (e.g. semaphores) is used to avoid conflicts.

2 IPC Schemes

Direct Communication

Diagram:

Process A ----send(msg)----> Process B
Process B ----receive(msg)-- from Process A

 Notes:

Explicitly names sender and receiver.

Indirect Communication (Mailboxes)

Diagram:

+------------+
| Mailbox |
+------------+
/ \
Process A Process B
| |
send(msg) receive(msg)

 Notes:

Messages go to a mailbox/port, which acts as a buffer.

Buffering

Types:

Buffer Type         Description
Zero Capacity       Send blocks until receive is ready.
Bounded Capacity    Buffer has finite slots.
Unbounded Capacity  Infinite slots; sender never blocked.

IPC Models Summary Table

Model Diagram Summary


Message Passing A → msg → B
Shared Memory A ↔ Shared Memory ↔ B

PROCESS GENERATION WITH DIAGRAMS

1 Fork-Exec Model (UNIX)

Flowchart:

Parent Process
|
| fork()

+----------------+
| Child Process |
| (duplicate of |
| parent) |
+----------------+
|
| exec(program)

+----------------+
| New Program |
| (e.g. /bin/ls) |
+----------------+

Explanation:

 fork() duplicates the parent.


 exec() replaces child’s image with a new program.

2 Process Tree (Hierarchical)

Diagram:

Parent Process (P0)
        |
   +----+----+
   |         |
   P1        P2
   |
   P3

Explanation:

 Parent process P0 spawns P1 and P2.


 P1 spawns P3, forming a tree structure.

Suggested Pseudocode for UNIX Fork/Exec:

pid_t pid;
pid = fork();
if (pid == 0) {
    /* Child process */
    execl("/bin/ls", "ls", (char *)NULL);
} else {
    /* Parent process */
    wait(NULL);   /* waits for child to complete */
}

Summary Table

Concept Diagram Summary


Message Passing A → msg → B
Shared Memory A ↔ Shared Memory ↔ B
Direct Communication A → msg → B (explicit names)
Indirect Communication A → mailbox → B
Fork-Exec Model Parent → fork() → Child → exec()
Process Hierarchy P0 → (P1, P2) → (P3)

UNIT = 3

 CPU SCHEDULING

CPU Scheduling is the process by which the operating system decides which process gets to use the CPU at a particular time. It’s a crucial part of multiprogramming.

1 Scheduling Concepts

Definition:

CPU Scheduling determines the order in which processes execute on the CPU, aiming for efficient and fair utilization of CPU time.

Key Concepts:

Burst Time (BT): Time required by a process on CPU.

Arrival Time (AT): Time when process enters the ready queue.

Completion Time (CT): Time when process completes execution.

Turnaround Time (TAT): CT - AT.

Waiting Time (WT): TAT - BT.

2 Performance Criteria

Criterion        Description
CPU Utilization  Keep CPU as busy as possible (e.g., 90–100%).
Throughput       Number of processes completed per unit time.
Turnaround Time  Time from submission to completion of a process.
Waiting Time     Time a process spends waiting in the ready queue.
Response Time    Time from submission to first response.
Fairness         Ensures all processes get CPU time fairly.

3 Process States

A process moves through different states during its lifecycle:

State            Description
New              Process is being created.
Ready            Process is waiting to be assigned to CPU.
Running          Process is currently executing on CPU.
Waiting/Blocked  Process is waiting for some event (I/O completion).
Terminated       Process has finished execution.

4 Process Transition Diagram

Diagram:

+---------+
|   New   |
+---------+
     |
     v
+---------+
|  Ready  |<-----------------+
+---------+                  |
     |                       | I/O completes
     v                       |
+---------+            +---------+
| Running |----------->| Waiting |
+---------+  I/O wait  +---------+
     |
     v
+------------+
| Terminated |
+------------+

Explanation:

New → Ready: Process admitted.

Ready → Running: CPU scheduler dispatches process.

Running → Waiting: Process requests I/O or waits.

Running → Terminated: Process completes execution.

Waiting → Ready: I/O completes.

5 Schedulers

Schedulers are special system software that select which process to run
next.

Types:

Scheduler                             Description
Long-Term Scheduler (Job Scheduler)   Selects processes from the job pool (on disk) and loads them into main memory; controls the degree of multiprogramming.
Medium-Term Scheduler                 Handles swapping (suspends/resumes processes) to improve memory management.
Short-Term Scheduler (CPU Scheduler)  Selects from ready processes and assigns the CPU; runs very frequently (milliseconds).

Summary Table

Section               Key Points
Scheduling Concepts   Decides order of CPU execution.
Performance Criteria  CPU utilization, throughput, turnaround, waiting,
                      response, fairness.
Process States        New, Ready, Running, Waiting, Terminated.
Process Diagram       Shows transitions between states.
Schedulers            Long, medium, short term.

 Process Control Block (PCB)

Definition:
A Process Control Block (PCB) is a data structure maintained by the OS
for each process. It contains essential information required to manage a
process.

Contents of PCB:

Component               Description
Process ID (PID)        Unique identifier for the process.
Process State           New, Ready, Running, Waiting, or Terminated.
Program Counter (PC)    Address of the next instruction to execute.
CPU Registers           Values of CPU registers (accumulator, index,
                        stack pointer).
Memory Management Info  Base and limit registers, page tables.
Accounting Info         CPU time used, time limits, job priority.
I/O Status Info         List of open files, I/O devices assigned.
Scheduling Info         Priority, scheduling state, queues, etc.

Diagram:

+-------------------------+
| Process ID (PID) |
+-------------------------+
| Process State |
+-------------------------+
| Program Counter |
+-------------------------+
| CPU Registers |
+-------------------------+
| Memory Mgmt Info |
+-------------------------+
| Accounting Info |
+-------------------------+
| I/O Status Info |
+-------------------------+
| Scheduling Info |
+-------------------------+
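As an illustrative model (not a real kernel structure), the PCB fields above map naturally onto a record type:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                              # Process ID
    state: str = "New"                    # New/Ready/Running/Waiting/Terminated
    program_counter: int = 0              # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status info
    priority: int = 0                     # scheduling info

# On a context switch, the OS saves CPU state into the old process's PCB
# and restores state from the new process's PCB.
pcb = PCB(pid=101)
pcb.state = "Ready"
pcb.registers["PC"] = 0x4000
print(pcb.pid, pcb.state)  # 101 Ready
```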

 Process Address Space

Definition:
It refers to the range of memory addresses a process can access during
its execution.

Sections of Address Space:

+--------------------------+
|      Text Segment        | ← Executable code
+--------------------------+
|      Data Segment        | ← Global and static variables
+--------------------------+
|      Heap Segment        | ← Dynamically allocated memory
+--------------------------+
|      Stack Segment       | ← Function calls, local variables
+--------------------------+

Explanation:
Each process has its own isolated address space, preventing accidental or
malicious access to the memory of other processes.

Process Identification Information

Purpose:
To uniquely identify and manage processes.

Components:

1) PID (Process ID): Unique number assigned by the OS.


2) PPID (Parent Process ID): Link to the parent process.
3) UID/GID: User and Group IDs (for security).

 Threads and Their Management

Definition:
A thread is the smallest unit of CPU execution within a process.

Types of Threads:

1. User-Level Threads (ULTs): Managed by user-level libraries.


2. Kernel-Level Threads (KLTs): Managed directly by the OS
kernel.

Thread Management Operations:

Operation               Description
Thread Creation         Create new threads within a process.
Thread Termination      End a thread’s execution.
Thread Synchronization  Ensure correct sequence of thread execution
                        (e.g., mutex, semaphores).
Thread Scheduling       Determine which thread runs next.
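These operations map directly onto a threading API; a minimal sketch using Python's threading module (creation, synchronization with a mutex, and waiting for termination):

```python
import threading

counter = 0
lock = threading.Lock()            # mutex for thread synchronization

def worker():
    global counter
    for _ in range(1000):
        with lock:                 # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]  # creation
for t in threads:
    t.start()
for t in threads:
    t.join()                       # wait for each thread to terminate
print(counter)  # 4000 — the lock prevents lost updates
```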

 Scheduling Algorithms

1. First-Come, First-Served (FCFS):

Non-preemptive; simplest algorithm.

Drawback: Convoy effect (short jobs stuck behind long ones).

2. Shortest Job First (SJF):

Selects the shortest burst time process.

Drawback: May cause starvation for long processes.

3. Shortest Remaining Time First (SRTF):

Preemptive version of SJF.

4. Priority Scheduling:

Highest priority process runs first.

Drawback: Starvation (can use aging to prevent).

5. Round Robin (RR):

Time quantum assigned to each process in a cyclic order.

Good for: Time-sharing systems.

6. Multilevel Queue Scheduling:

Divides processes into queues (interactive, batch, etc.) with
different priorities.
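A minimal non-preemptive FCFS simulation makes the convoy effect concrete (the burst times below are a classic hypothetical example):

```python
def fcfs_waiting_times(processes):
    """processes: list of (name, arrival, burst) in arrival order.
    Returns {name: waiting_time} under First-Come, First-Served."""
    clock = 0
    waiting = {}
    for name, arrival, burst in processes:
        clock = max(clock, arrival)       # CPU idles until the job arrives
        waiting[name] = clock - arrival   # time spent in the ready queue
        clock += burst                    # run to completion, no preemption
    return waiting

# Convoy effect: short jobs P2 and P3 are stuck behind the long job P1.
print(fcfs_waiting_times([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]))
# {'P1': 0, 'P2': 24, 'P3': 27}
```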

 Multiprocessor Scheduling

Definition:
Scheduling processes on systems with multiple CPUs or cores.

Key Considerations:

1. Load Sharing: Distributes work evenly across CPUs.


2. Symmetric Multiprocessing (SMP): Each processor is self-
scheduling using a common ready queue.
3. Asymmetric Multiprocessing (AMP): One processor is
master; others execute tasks assigned by master.

Challenges:

 Processor affinity: Processes might prefer certain CPUs due to
caching.
 Synchronization: Threads on different CPUs might need
coordination.

Summary Table

Topic                      Key Points
PCB                        Data structure holding process info.
Process Address Space      Text, data, heap, stack segments.
Process ID Info            PID, PPID, UID.
Threads                    ULTs, KLTs, management operations.
Scheduling Algorithms      FCFS, SJF, SRTF, Priority, RR, Multilevel.
Multiprocessor Scheduling  SMP, AMP, load sharing, affinity.

 Deadlock in Operating Systems

A deadlock occurs when a set of processes is blocked because each
process is holding a resource and waiting for another resource acquired
by some other process. Let’s dive into its details:

1 System Model

Definition:
A system consists of processes and resources. Processes request
resources (like printers, memory, CPU) and may hold some while
requesting more.

Components:

Processes: Active entities that request resources.

Resources: Divided into types (CPU, I/O devices, memory segments).

Resource Instances: Each resource type may have multiple instances.

Resource Allocation Graph (RAG):

 Vertices: Represent processes (circles) and resources (squares).
 Edges:

Request edge (P → R): Process requests resource.

Assignment edge (R → P): Resource assigned to process.

Example Diagram:

+---+ +---+
| P1|----->| R1|
+---+ +---+
|
v
+---+
| P2|
+---+

2 Deadlock Characterization

A deadlock situation arises if the following four necessary conditions
hold simultaneously (Coffman Conditions):

Condition         Description
Mutual Exclusion  At least one resource must be non-sharable.
Hold and Wait     A process holding resources is waiting for additional
                  resources held by others.
No Preemption     Resources cannot be forcibly taken away.
Circular Wait     A set of processes waits in a circular chain.

3 Deadlock Prevention

Idea:
Ensure at least one of the Coffman conditions cannot hold.

Condition         Prevention Strategy
Mutual Exclusion  Not always practical (e.g., a printer cannot be shared).
Hold and Wait     Require processes to request all resources at once
                  (may cause low resource utilization).
No Preemption     If a process holding resources requests more, forcibly
                  release the held resources.
Circular Wait     Impose a linear ordering of resource types (number them)
                  and require processes to request resources in
                  increasing order.

4 Deadlock Avoidance

Idea:
Use additional information to decide whether or not to grant a request.

Banker’s Algorithm:

 Developed by Dijkstra.

 Checks if granting a resource request leads to a safe state
(no deadlock).
 Processes declare maximum resource needs in advance.

Safe State Example:
If all processes can complete without deadlock (by granting resources in
some order), the state is safe.
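The safety test at the core of the Banker's Algorithm can be sketched as follows; the matrices are a hypothetical example with 5 processes and 3 resource types:

```python
def is_safe(available, allocation, need):
    """Return True if some completion order exists (a safe state)."""
    work = available[:]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            # Process i can finish if its remaining need fits in 'work'.
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # release
                finished[i] = True
                progress = True
    return all(finished)

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True: e.g. order P1,P3,P4,P2,P0
```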

5 Deadlock Detection

Idea:
Allow deadlocks to occur but detect and resolve them.

Methods:

1. Resource Allocation Graph (RAG):

For single instances of each resource: Look for cycles.

For multiple instances: Use wait-for graph (WFG).

2. Wait-For Graph (WFG):

Nodes: Processes.

Edge P1 → P2: P1 is waiting for a resource held by P2.

A cycle means deadlock.
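With a wait-for graph in hand, detection reduces to cycle finding; a depth-first-search sketch (the graphs below are hypothetical):

```python
def has_deadlock(wfg):
    """wfg: dict mapping each process to the processes it waits for."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:              # back edge → cycle → deadlock
            return True
        if node in done:
            return False
        visiting.add(node)
        cyc = any(dfs(nxt) for nxt in wfg.get(node, []))
        visiting.remove(node)
        done.add(node)
        return cyc

    return any(dfs(p) for p in wfg)

print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```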

6 Recovery from Deadlock

Approaches:

Approach             Description
Process Termination  Abort one or more processes to break the cycle.
Resource Preemption  Temporarily take resources from some processes and
                     give them to others.

Issues with Recovery:

 Which processes to kill? Based on priority, time consumed,
resources held, etc.
 Starvation possibility.

Summary Table

Section                    Key Points
System Model               Processes, resources, RAG.
Deadlock Characterization  Mutual exclusion, hold and wait, no preemption,
                           circular wait.
Prevention                 Eliminate one of the four conditions.
Avoidance                  Banker’s Algorithm, safe state.
Detection                  RAG (cycle), WFG (cycle).
Recovery                   Terminate processes, resource preemption.

UNIT = 4

 Memory Management

Memory management is a core function of the operating system, ensuring
efficient and safe allocation and deallocation of memory to processes.

1 Basic Bare Machine

Definition:
The simplest memory management model where the entire memory is
available for a single process at a time.

Characteristics:

 No OS protection.
 No user/application protection.
 Application can overwrite OS or other applications.
 No concept of multitasking.

Diagram:

+----------------------------+
| Operating System + User |
| Program share same space |
+----------------------------+

Advantages:
✅ Simple to implement.
✅ Direct hardware access.

Disadvantages:
❌ No protection between user and OS memory.
❌ Only one program can run at a time.
❌ Cannot support multitasking.

2 Resident Monitor

Definition:
A basic OS that resides in a fixed portion of memory, leaving the rest
for user processes.

Characteristics:

 OS loaded at boot time.


 User programs loaded and executed in the free memory.
 Simple protection (OS memory region is protected from user
programs).

Diagram:

+----------------------+
|  Resident Monitor    | ← OS occupies this region
+----------------------+
|  User Program Area   | ← User program loaded here
+----------------------+

Advantages:
✅ OS always available.
✅ Provides minimal protection for OS.

Disadvantages:
❌ No protection between user programs.
❌ No multiprogramming support.
❌ CPU is idle during I/O.

3 Multiprogramming with Fixed Partitions

Definition:
Memory is divided into fixed-sized partitions, each holding one
process. Supports multiprogramming.

Characteristics:

 OS resides in low memory.


 Memory is divided into N partitions of fixed size.
 Each partition holds one process.
 When a process finishes, the partition is freed for a new
process.

Diagram:

+--------------------+
| OS Memory | ← Resident OS
+--------------------+
| Partition 1 | ← Process 1
+--------------------+
| Partition 2 | ← Process 2
+--------------------+
| Partition 3 | ← Process 3
+--------------------+

Advantages:
✅ Simple to implement.
✅ Allows multiprogramming (CPU utilization
improved).

Disadvantages:
❌ Internal Fragmentation: Wasted space within
partitions.
❌ Fixed number of partitions limits process concurrency.
❌ Process size must fit a partition.

Summary Table

Model             Key Features                 Advantages        Disadvantages
Bare Machine      No OS protection, direct     Simple, direct    No multitasking,
                  hardware access                                no protection
Resident Monitor  OS occupies fixed memory,    OS protection,    No user-user protection,
                  user area below              always resident   no multiprogramming
Fixed Partitions  Memory divided into fixed    Supports          Internal fragmentation,
                  partitions                   multiprogramming  fixed concurrency

1 Multiprogramming with Variable Partitions

Definition:
Memory is divided into variable-sized partitions according to the size of
incoming processes. This approach eliminates internal fragmentation
but introduces external fragmentation.

Working:

 Initially, all memory (except OS) is available.


 As processes arrive, memory is allocated exactly as needed.
 When a process terminates, its partition is freed.
 External Fragmentation: Free memory holes scattered
throughout.

Diagram:

+----------------------+
| OS |
+----------------------+
| P1 |
+----------------------+
| P2 |
+----------------------+
| Free Hole |
+----------------------+
| P3 |
+----------------------+

Advantages:
✅ Better memory utilization (less wasted space).
✅ Processes can be allocated memory as needed.

Disadvantages:
❌ External fragmentation.
❌ Need for compaction (to combine free spaces).

2 Protection Schemes

Definition:
Mechanisms to ensure process isolation and prevent one process from
accessing another’s memory.

Methods:
Base and Limit Registers:

 Each process has a base address and limit defining its
memory range.
 Hardware checks every address reference.

Diagram:

Physical Memory:
+-----------------------+
| Base | Limit |
| Addr | Register |
+-----------------------+

Segmentation and Paging:

Each segment/page has protection bits.

Advantages:
✅ Prevents processes from corrupting each other.
✅ Provides safe multitasking.
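The base-and-limit check is a single range test per memory reference; a sketch with hypothetical register values:

```python
def check_and_relocate(logical_addr, base, limit):
    """Hardware-style check: trap if the address falls outside the
    process's range, otherwise relocate it by the base register."""
    if not (0 <= logical_addr < limit):
        raise MemoryError("protection trap: address out of bounds")
    return base + logical_addr

print(check_and_relocate(100, base=3000, limit=500))  # 3100
# check_and_relocate(600, base=3000, limit=500) would raise MemoryError
```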

3 Paging

Definition:
Divides memory into fixed-size blocks:

 Pages: Fixed-size blocks in logical memory.


 Frames: Fixed-size blocks in physical memory.

How It Works:

 Pages are mapped to frames using a page table.


 No external fragmentation; small internal fragmentation.

Diagram:

Logical Memory:
Page 0 → Frame 5
Page 1 → Frame 3
Page 2 → Frame 8

Advantages:
✅ Eliminates external fragmentation.
✅ Easy to swap pages in/out.

Disadvantages:
❌ Overhead of page tables.
❌ May cause internal fragmentation in the last page.
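Address translation under paging splits the logical address into a page number and an offset; a sketch using the page table from the diagram above (the page size is hypothetical):

```python
PAGE_SIZE = 1024                  # hypothetical 1 KB pages
page_table = {0: 5, 1: 3, 2: 8}   # page -> frame, as in the diagram

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]      # a real OS would raise a page fault if absent
    return frame * PAGE_SIZE + offset

print(translate(1050))  # page 1, offset 26 → frame 3 → 3*1024 + 26 = 3098
```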

4 Segmentation

Definition:
Divides memory into variable-sized segments, based on logical
divisions (e.g., code, data, stack).

How It Works:

 Each segment has a base and limit register.


 Logical address: (Segment Number, Offset).

Diagram:

Segments:
0: Code
1: Data
2: Stack

Advantages:
✅ Reflects logical program structure.
✅ Easier protection (different segments for different
uses).

Disadvantages:
❌ External fragmentation.
❌ Compaction may be needed.

5 Paged Segmentation

Definition:
Combines paging and segmentation:

Segmentation: Divides program logically.

Paging: Each segment is divided into pages.

Diagram:

Logical Address:
Segment Number + Page Number + Offset

Advantages:

 Combines benefits of both segmentation (logical grouping) and
paging (eliminates external fragmentation).
 Flexible and efficient.

6 Virtual Memory Concepts

Definition:
Allows processes to use more memory than physically available by
paging/swapping parts of processes in and out of disk storage.

Key Concepts:

 Demand Paging:

Pages are loaded into memory only when needed.

 Page Fault:

When a process references a page not in memory, the OS loads it
from disk.

 Swapping:

Entire processes can be swapped in and out.

Advantages:
✅ Supports large processes.
✅ Better CPU utilization.
✅ Simplifies programming.

Disadvantages:
❌ Thrashing (too much paging activity).
❌ Disk I/O overhead.

Summary Table

Topic                    Key Points
Variable Partitions      Dynamic allocation, external fragmentation.
Protection Schemes       Base-limit registers, segmentation, paging.
Paging                   Fixed-size pages and frames, page tables.
Segmentation             Logical program division, variable size segments.
Paged Segmentation       Segments split into pages.
Virtual Memory Concepts  Demand paging, page faults, swapping.

1 Demand Paging

Definition:
Demand paging is a virtual memory technique where pages are loaded
into memory only when they are needed, instead of preloading the entire
process.

How It Works:

 Process starts with no pages in physical memory.


 On the first reference to a page:

 If the page is not in memory, a page fault occurs.


 The OS loads the page from disk into memory.
 Updates the page table.

Diagram:

Logical Address → Page Table → Frame (if in memory) or Disk (if page fault)

Advantages:
✅ Saves memory by loading only needed pages.
✅ Allows larger programs than physical memory.

Disadvantages:
❌ High initial page fault rate.
❌ Disk I/O overhead.

2 Performance of Demand Paging

Key Factors:
✅ Page Fault Rate (PFR):

 Fraction of references that cause a page fault.


 Ideally low (close to 0).

✅ Effective Access Time (EAT):

EAT = (1 − PFR) × MemoryAccessTime + PFR × PageFaultServiceTime

Memory Access Time: Time to access RAM.

Page Fault Service Time: Disk I/O + OS overhead.

Example:

Memory Access Time = 100 ns

Page Fault Service Time = 8 ms

PFR = 0.001

Then:

EAT = (1 − 0.001) × 100 ns + 0.001 × 8,000,000 ns
    = 99.9 ns + 8000 ns ≈ 8.1 µs

Observation:
Even a small PFR can greatly slow down the system!
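The arithmetic can be checked directly (same numbers as the example, everything converted to nanoseconds):

```python
mem_access_ns = 100        # memory access time
fault_service_ns = 8e6     # 8 ms = 8,000,000 ns
pfr = 0.001                # page fault rate

eat = (1 - pfr) * mem_access_ns + pfr * fault_service_ns
print(eat)  # 8099.9 ns — about 81x slower than a plain 100 ns access
```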

3 Page Replacement Algorithms

Purpose:
When memory is full, OS must select a page to evict to make room for a
new page.

Common Algorithms:

Algorithm      Description                       Pros                  Cons
FIFO           Evicts oldest page (first-in).    Simple to implement.  Poor performance; might
                                                                       evict heavily used pages.
Optimal (OPT)  Replaces page that won’t be       Best possible         Impossible to implement
               used for the longest time.        performance.          perfectly; needs future
                                                                       knowledge.
LRU (Least     Evicts page that was least        Good approximation    Needs hardware or software
Recently Used) recently accessed.                to OPT.               support to track usage.
Clock (Second  Circular list with a reference    Efficient to          Approximates LRU but can
Chance)        bit; skips pages with             implement.            degrade under heavy load.
               reference bit = 1.

Diagram (Clock Algorithm):

Pages in a circle → pointer moves clockwise, gives second chances.
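FIFO and LRU can be compared directly on a short reference string (the string and frame count below are hypothetical):

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    mem, queue, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.pop(0))   # evict the oldest resident page
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)              # p is now most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)     # evict least recently used
            mem[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 10 9
```

With 3 frames, LRU takes one fault fewer than FIFO here because it keeps the recently reused pages resident.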

4 Thrashing

Definition:
Thrashing occurs when the system spends more time swapping pages in
and out than executing processes.

Causes:

 High degree of multiprogramming.


 Insufficient memory allocation per process.
 High page fault rate.

Consequences:
❌ CPU utilization drops.
❌ System throughput drops.

Solution:

 Reduce multiprogramming degree.


 Use working set model or page fault frequency algorithm to
adjust memory allocation.

5 Cache Memory Organization

Definition:
Cache is a small, fast memory between the CPU and main memory that
stores frequently used data and instructions.

Types:

1. Direct Mapped: Each memory block maps to one specific cache line.
2. Fully Associative: Any memory block can go anywhere in the cache.
3. Set Associative: A compromise between the two.

Diagram (Direct Mapped):

Main Memory Block → Cache Line (based on block address)

Importance:
✅ Reduces average memory access time.
✅ Exploits locality of reference (next topic).

6 Locality of Reference

Definition:
Programs tend to access a small portion of their address space
repeatedly over short periods of time.

Types:

1. Temporal Locality: Recently accessed memory locations are
likely to be accessed again soon.
2. Spatial Locality: Nearby memory addresses are likely to be
accessed soon.
Example:

Loops exhibit both temporal and spatial locality.

Importance:
✅ Justifies caching and paging.
✅ Helps optimize memory management.
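A simple loop shows both kinds of locality at once (illustrative only — Python hides the actual memory layout):

```python
data = list(range(8))
total = 0
for i in range(len(data)):  # 'total' is touched every pass → temporal locality
    total += data[i]        # data[0], data[1], ... are adjacent → spatial locality
print(total)  # 28
```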

Summary Table

Topic                         Key Points
Demand Paging                 Loads pages on demand, reduces memory use.
Performance of Demand Paging  Page Fault Rate, Effective Access Time (EAT).
Page Replacement Algorithms   FIFO, Optimal, LRU, Clock.
Thrashing                     Excessive paging, reduces performance.
Cache Memory                  Fast, small, reduces average access time.
Locality of Reference         Temporal and spatial locality.

 I/O MANAGEMENT AND DISK SCHEDULING

1 I/O Devices and I/O Subsystems

A. I/O Devices

I/O devices are hardware components that allow a computer to
communicate with the external environment.

Types of I/O Devices:

a) Input devices: Keyboard, mouse, scanner.


b) Output devices: Monitor, printer, speakers.
c) Storage devices: HDD, SSD, CD/DVD.

B. I/O Subsystems

The I/O subsystem is the OS component that manages all I/O operations
and devices.

Functions:
✅ Manages device drivers.
✅ Handles interrupts.
✅ Provides a buffer between CPU and device.
✅ Offers a uniform interface for devices.

Components:
Device Drivers

Software that translates OS commands to device-specific actions.

Interrupt Handlers

Manage asynchronous device requests.

I/O Scheduling

Manages the order of device requests.

2 I/O Buffering

Definition:
Temporary storage area in memory that holds data during I/O transfers.

Types:

1. Single Buffering: One buffer per device.
2. Double Buffering: Two buffers alternate use, overlapping
computation and I/O.
3. Circular Buffering: Multiple buffers in a circular queue.

Advantages:
✅ Reduces CPU idle time.
✅ Improves device utilization.

Diagram:

+---------------------+
| Application |
| ↓ |
| OS Buffer → Device |
+---------------------+

3 Disk Storage

Definition:
Magnetic disks are the most common secondary storage devices. Data is
stored on rotating platters divided into tracks and sectors.

Key Terms:
✅ Track: Circular path on the surface.
✅ Sector: Subdivision of a track.
✅ Cylinder: Same track on all platters.
✅ Seek Time: Time to move head to the correct track.
✅ Rotational Latency: Time waiting for the sector to
rotate under the head.
✅ Transfer Time: Time to transfer data.

4 Disk Scheduling

Purpose:
Optimize disk I/O performance by selecting the order of disk requests.

Algorithms:

Algorithm               Description                           Pros                Cons
FCFS (First-Come,       Services requests in arrival order.   Simple.             High seek time.
First-Served)
SSTF (Shortest Seek     Chooses the request closest to the    Reduces seek time.  May cause starvation.
Time First)             current head position.
SCAN (Elevator          Moves the head back and forth         Fairer than SSTF.   May take longer for
Algorithm)              servicing requests.                                       far requests.
C-SCAN (Circular SCAN)  Like SCAN but services requests in    More uniform        Slightly longer seek
                        one direction only, then jumps back.  wait time.          time on average.
LOOK/C-LOOK             Like SCAN/C-SCAN but stops at the     More efficient      Similar trade-offs.
                        last request in a direction before    than SCAN.
                        reversing/jumping.

Diagram (SCAN):

Disk Head: ↑ servicing requests → at end reverses ↓

5 RAID (Redundant Array of Independent Disks)

Definition:
Technique for improving disk performance and reliability using multiple
disks in parallel.

RAID Levels:

Level   Description                         Features
RAID 0  Striping (no redundancy)            High speed, no fault tolerance.
RAID 1  Mirroring                           Data copied on two disks (redundancy).
RAID 5  Block-level striping with parity    Good balance of speed and fault tolerance.
RAID 6  Like RAID 5 but with double parity  Can tolerate 2 simultaneous disk failures.

Advantages:
✅ Improved performance (striping).
✅ Data redundancy (mirroring/parity).
✅ Fault tolerance (some levels).

Disadvantages:
❌ Extra cost (additional disks).
❌ Complex configuration.
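RAID 5's parity is a bitwise XOR across the stripe, which is what lets a single failed disk be rebuilt; a toy sketch with two hypothetical data blocks:

```python
d1, d2 = 0b10110010, 0b01101100   # data blocks on two disks
parity = d1 ^ d2                   # parity block stored on a third disk

# Suppose the disk holding d2 fails: XOR the survivors to rebuild it.
recovered = d1 ^ parity
print(recovered == d2)  # True
```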

Summary Table

Topic            Key Points
I/O Devices      Input, Output, Storage.
I/O Subsystems   Device drivers, interrupt handlers, I/O scheduling.
I/O Buffering    Single, double, circular buffering.
Disk Storage     Tracks, sectors, cylinders, seek time, latency.
Disk Scheduling  FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK.
RAID             Levels 0, 1, 5, 6; performance and reliability.

FILE SYSTEM

1 File Concept

Definition:
A file is a collection of logically related data stored on secondary storage
(e.g., hard disk).

Attributes of a File:
✅ Name: Identifier for the file.
✅ Type: e.g., .txt, .doc, .jpg.
✅ Location: Pointer to the file on disk.
✅ Size: Current size of the file.
✅ Protection: Access permissions (read, write, execute).
✅ Time, date, user ID: For creation and modification
tracking.

File Operations:
✅ Create
✅ Write
✅ Read
✅ Reposition (seek)
✅ Delete
✅ Truncate

Example:
A .txt file storing a note.

2 File Organization and Access Mechanisms

File Organization:
How records are arranged within a file.

Types:

1. Sequential Access:

 Records stored one after another.


 Accessed in order (e.g., reading a tape).

2. Direct Access (Random Access):

 Records can be accessed directly using an address or key.


 Suitable for databases.

3. Indexed Access:

 Uses an index to locate records quickly.


 Combines fast access with efficient storage.

Example Table:

Organization  Use Case                    Pros                Cons
Sequential    Logs, backups               Simple, efficient   No random access
Direct        Databases                   Fast random access  May waste space
Indexed       Libraries, large databases  Quick lookup        Index maintenance overhead

3 File Directories

Purpose:
Organize files in a structured way to manage them efficiently.

Functions:
✅ Store file metadata (name, type, size, location, etc.).
✅ Help with file searching.
✅ Manage file hierarchies (folders and subfolders).

Directory Structures:
✅ Single-level Directory:

 All files in one directory.
 Simple but prone to name conflicts.

✅ Two-level Directory:

 Separate directory for each user.


 Reduces name conflicts, but no subfolders.

✅ Tree-structured Directory:

 Hierarchical folders and subfolders.


 Flexible and scalable.

✅ Acyclic Graph Directory:

 Allows shared subdirectories and files.


 Suitable for file sharing.

✅ General Graph Directory:

 Cycles allowed but needs cycle detection to avoid infinite loops.

4 File Sharing

Definition:
Allows multiple users/processes to access the same file concurrently.

Mechanisms:
✅ File locks to avoid conflicts.
✅ Access control lists to manage permissions.

Problems:
❌ Concurrency issues (e.g., two users editing the same file
simultaneously).
❌ Security risks if not managed properly.

5 File System Implementation Issues

Components:
✅ File Control Block (FCB): Stores file metadata like size, location,
permissions.
✅ Disk Layout:

 Boot Control Block: Info for booting OS.


 Volume Control Block: Info about the entire file system (e.g.,
size, free space).

✅ Free Space Management:

 Bitmaps: Each bit represents a block (1=free, 0=allocated).


 Linked lists: Chains of free blocks.
 Grouping: Stores addresses of free blocks in clusters.

Example:
Bitmap-based free space management in Linux ext2.
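A toy version of bitmap free-space management (1 = free, 0 = allocated, matching the convention above; real file systems pack the bits into bytes on disk):

```python
class BlockBitmap:
    def __init__(self, nblocks):
        self.bits = [1] * nblocks      # all blocks initially free

    def allocate(self):
        for i, bit in enumerate(self.bits):
            if bit:                    # first-fit scan for a free block
                self.bits[i] = 0
                return i
        raise RuntimeError("disk full")

    def free(self, block):
        self.bits[block] = 1

bm = BlockBitmap(8)
a = bm.allocate()      # block 0
b = bm.allocate()      # block 1
bm.free(a)
print(bm.allocate())   # 0 — the freed block is reused first
```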

6 File System Protection and Security

Purpose:
Protect files from unauthorized access, modification, or deletion.

Mechanisms:
✅ Access Control Lists (ACLs): Define permissions for
users/groups.
✅ Password Protection: Individual file passwords (less common
today).
✅ Encryption: Encrypting file contents for confidentiality.

Types of Access Permissions:


✅ Read
✅ Write
✅ Execute
✅ Delete

Example (UNIX permissions):


rwx for Owner, Group, Others (e.g., -rwxr-xr--).
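Decoding a numeric mode into this string is a per-3-bit-group operation; a simplified sketch that ignores the special bits (setuid, setgid, sticky):

```python
def mode_to_string(mode):
    """e.g. 0o754 -> 'rwxr-xr--' (owner, group, others)."""
    out = []
    for shift in (6, 3, 0):            # owner, group, others
        bits = (mode >> shift) & 0o7
        out.append("r" if bits & 4 else "-")
        out.append("w" if bits & 2 else "-")
        out.append("x" if bits & 1 else "-")
    return "".join(out)

print(mode_to_string(0o754))  # rwxr-xr--
```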

Summary Table

Topic                  Key Points
File Concept           Attributes, operations.
File Organization      Sequential, Direct, Indexed.
File Directories       Single, two-level, tree, graph.
File Sharing           Concurrency, locks, ACLs.
Implementation Issues  FCB, disk layout, free space management.
Protection & Security  ACLs, encryption, UNIX permissions.

 OPERATING SYSTEM — UNIT-WISE DETAILED NOTES

UNIT = 1

1 Introduction to Operating System and its Functions

Definition:
An Operating System (OS) is system software that acts as an
intermediary between users and computer hardware.
Functions:

1. Process Management
2. Memory Management
3. File System Management
4. Device Management
5. Security and Protection
6. Networking
7. Command Interpretation

2 Classification of Operating Systems

Type               Description
Batch OS           Jobs with similar needs are grouped; no user
                   interaction during execution.
Interactive OS     User interacts directly (e.g., command-line, GUI).
Time Sharing OS    CPU time is divided among users (e.g., UNIX).
Real-Time OS       Strict timing constraints; immediate response
                   (e.g., avionics).
Multiprocessor OS  More than one CPU; parallel processing.
Multiuser OS       Multiple users access simultaneously (e.g., Linux).
Multiprocess OS    Multiple processes run concurrently.
Multithreaded OS   Supports multiple threads within a process.

3 Operating System Structure

Layered Structure:
Hierarchical layers (User interface → Kernel → Hardware).

System Components:

✅ Process management
✅ Memory management
✅ File system
✅ I/O system
✅ Protection and security

OS Services:
Program execution, I/O, file management, error detection, etc.

4 Kernel Types

Type               Description
Reentrant Kernel   Supports multiple processes executing kernel code
                   simultaneously.
Monolithic Kernel  All OS services are in one large block of code
                   (e.g., UNIX).
Microkernel        Minimal kernel; most services (drivers, file
                   systems) run in user space (e.g., Minix, QNX).
UNIT = 2

1 Concurrent Processes

Process Concept: Program in execution.


Principle of Concurrency: Multiple processes can progress
simultaneously.

2 Classical Problems

✅ Producer/Consumer Problem — Bounded buffer, synchronization needed.
✅ Mutual Exclusion & Critical Section: Only one process in the
critical section at a time.

✅ Solutions:

a) Dekker’s Solution: Alternating flags.


b) Peterson’s Solution: Flags and turn variable.
c) Semaphores: Wait and signal operations.
d) Test-and-Set: Hardware-supported lock.

3 Other Classical Problems

✅ Dining Philosophers: Resource allocation without deadlock/starvation.
✅ Sleeping Barber: Synchronization with sleeping and waking.

4 IPC Models

✅ Shared Memory

✅ Message Passing

5 Process Generation

✅ fork(), exec() — Process creation in UNIX-like systems.

UNIT = 3

1 CPU Scheduling

Concepts: Maximize CPU utilization and throughput, minimize
waiting and response time.
Performance Criteria: CPU utilization, throughput, turnaround time,
waiting time, response time.
Process States: New, Ready, Running, Waiting, Terminated.
Process Control Block (PCB): Process ID, state, PC, registers,
memory info.
Scheduling Algorithms: FCFS, SJF, Priority, Round Robin,
Multilevel Queue.
Multiprocessor Scheduling: Load balancing among processors.

2 Deadlock

System Model: Resource allocation graph.


Characterization: Mutual exclusion, hold and wait, no preemption,
circular wait.
Prevention/Avoidance/Detection/Recovery: Deadlock avoidance
(Banker’s algorithm), detection with resource graph, and recovery
(resource preemption, process termination).

UNIT = 4

1 Memory Management

Basic Bare Machine: No OS support.


Resident Monitor: OS always in memory.

Multiprogramming: Fixed and variable partitions.
Protection Schemes: Bound registers, relocation.
Paging & Segmentation: Memory divided into frames/pages or
segments.
Paged Segmentation: Combines paging and segmentation.
Virtual Memory: Demand paging, page replacement algorithms
(FIFO, LRU, Optimal), thrashing.
Cache Memory: Fast memory, uses locality of reference.

UNIT = 5

1 I/O Management

I/O Devices & Subsystems: Disk, keyboard, etc.


I/O Buffering: Single, double, circular buffering.
Disk Scheduling: FCFS, SSTF, SCAN, LOOK, C-SCAN.
RAID: Levels 0–6, redundancy and performance.

2 File System

File Concept: Name, type, size, location, protection.


File Organization & Access: Sequential, direct, indexed.
File Directories: Single-level, two-level, tree-structured, acyclic graph.
File Sharing: Concurrency, locks.
Implementation Issues: File Control Block, free space management.
Protection and Security: ACLs, encryption, UNIX permissions.

