OS Notes by Piyush Slashbyte

The document provides comprehensive notes on operating systems, detailing their functions, types, and key concepts such as processes, threads, and inter-process communication. It discusses various operating system types, including batch, multi-tasking, and real-time systems, along with synchronization methods like semaphores and mutexes. Additionally, it covers critical topics like deadlock, spooling, and the Banker's Algorithm for resource management.


SLASHBYTE | SSC-BANK: COMPLETE NOTES
Infra Support (OS)
An Operating System can be defined as an interface between the user and the hardware. It is
responsible for the execution of all processes, resource allocation, CPU management, file
management, and many other tasks.
The purpose of an operating system is to provide an environment in which a user can execute
programs in a convenient and efficient manner.

What does an Operating system do?


1. Process Management

2. Process Synchronization

3. Memory Management

4. CPU Scheduling

5. File Management

6. Security

Types of Operating Systems (OS)



| Type of Operating System | Description | Example | Type Combination |
|---|---|---|---|
| Simple Batch Operating System | Processes a batch of jobs sequentially without user interaction during execution. | IBM 7090 | - |
| Multi-programming Batch OS | Running multiple programs on a single CPU, where the CPU switches between tasks to maximize CPU usage. | IBM OS/360 | Batch + Multi-programming |
| Multi-tasking Operating System | Allows multiple tasks to run concurrently by switching the CPU rapidly between them. | Microsoft Windows, macOS | Multi-user + Multi-tasking |
| Multi-processing Operating System | Uses multiple processors for executing tasks concurrently, improving speed and reliability. | UNIX, Linux | Multi-tasking + Multi-processing |
| Time Sharing Operating System | Provides time slots to multiple users to enable interaction with the system simultaneously. | Cloud servers | Multi-user + Time-sharing |
| Real-Time Operating System (RTOS) | Tasks are completed within a specified deadline, making them essential for applications where timing is critical. | VxWorks, FreeRTOS | Dedicated to real-time tasks |
| Distributed Operating System | Manages multiple connected computers to perform tasks as a single system. | Apache Hadoop, Amoeba OS | Distributed + Network + Multi-tasking |
| Network Operating System | Manages communication and resource sharing in a networked environment. | Windows Server, Novell NetWare | Networking capabilities |
| UNIX | A versatile OS supporting multiple users, multi-tasking, and time-sharing features. | UNIX, macOS, Solaris | Multi-user + Multi-tasking + Time-sharing |
| Linux | A UNIX-like OS offering flexibility, multi-user support, multi-tasking, time-sharing, and robust networking capabilities. | Ubuntu, Red Hat, Debian | Multi-user + Multi-tasking + Time-sharing + Networking |

Explanation for Linux:


Multi-user: Supports multiple users logging in and working on the same system
simultaneously.

Multi-tasking: Runs multiple processes or tasks concurrently.

Time-sharing: Ensures equitable distribution of CPU time across processes and users.

Networking: Provides robust tools and protocols for communication between devices.

Multi-processing: Fully utilizes multiple cores or processors for enhanced performance.

Key Concepts in Operating Systems


1. System Calls:
Definition: A system call is a mechanism that allows user-level programs to request
services from the operating system kernel. It serves as an interface between a running
program and the OS.

Examples: file operations (open, read, write), process control (fork, exec), memory
management (brk, mmap; the C library's malloc() is built on top of these), etc.

File operations: open(), read(), write(), close()

Process control: fork(), exec(), wait(), exit()

Memory management: brk(), mmap()

Communication: pipe(), socket()
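The file-operation calls above can be exercised from Python, whose `os` module wraps the underlying system calls. A minimal sketch (the file path is a temporary file created just for this demo):

```python
import os
import tempfile

# open(), write(), read(), close() system calls via Python's os module
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open() system call
os.write(fd, b"hello from a system call")                   # write()
os.close(fd)                                                 # close()

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                                      # read()
os.close(fd)
os.remove(path)

print(data)  # b'hello from a system call'
```

Each `os.open`/`os.read`/`os.write` call crosses from user mode into the kernel, which is exactly the interface role a system call plays.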

2. Processes:
Definition: A process is an instance of a program that is currently being executed. It
contains its own address space, code, data, and system resources like open files.



States: A process can be in various states like New, Ready, Running, Waiting, and
Terminated.

Process Control Block (PCB): Stores information about the process, such as process ID,
state, program counter, and memory management information.

Attributes of a process

The Attributes of the process are used by the Operating System to create the process control
block (PCB) for each of them. This is also called context of the process. Attributes which are
stored in the PCB are described below.

| S.No. | Attribute | Description |
|---|---|---|
| 1 | Process ID | A unique ID assigned to each process for identification in the system. |
| 2 | Program Counter | Stores the address of the last instruction executed before the process was suspended. Used to resume execution. |
| 3 | Process State | The process goes through various states: new, ready, running, and waiting, from creation to completion. |
| 4 | Priority | Every process has a priority; the process with the highest priority is given CPU time first. |
| 5 | General Purpose Registers | Holds data generated during the execution of the process. |
| 6 | List of Open Files | The OS maintains a list of all open files used by the process during execution. |
| 7 | List of Open Devices | The OS maintains a list of all devices used by the process during execution. |

3. Threads:
Definition: A thread is the smallest unit of execution within a process. Multiple threads can
exist within a single process and share the same resources (memory, file descriptors) but
execute independently.

Types:

User-level threads: Managed by user-level libraries.

Kernel-level threads: Managed directly by the operating system.

Benefits: Threads allow for better resource utilization and faster context switching than
processes.

4. Inter-Process Communication (IPC):


Definition: IPC is a mechanism that allows processes to communicate with each other and
synchronize their actions.



Approaches to Interprocess Communication

These are a few different approaches to Inter-Process Communication:

1. Pipes

2. Shared Memory

3. Message Queue

4. Direct Communication

5. Indirect communication

6. Message Passing

7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-

The pipe is a type of data channel that is unidirectional in nature. It means that the data in this
type of data channel can be moved in only a single direction at a time.

Shared Memory:-

It can be referred to as a type of memory that can be used or accessed by multiple processes
simultaneously. It is primarily used so that the processes can communicate with each other.

Message Queue:-

In general, several different processes are allowed to read from and write to the message
queue. The messages are stored in the queue until their recipients retrieve them.



Message Passing:-

It is a mechanism that allows processes to synchronize and communicate with each other.
Using message passing, processes can communicate without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations:

send(message)

receive(message)

Note: The size of the message can be fixed or variable.


Direct Communication:-

In this type of communication process, usually, a link is created or established between two
communicating processes. However, in every pair of communicating processes, only one link
can exist.

Indirect Communication

Indirect communication is established when processes share a common mailbox. Each pair of
communicating processes may share several communication links, and these shared links can
be unidirectional or bi-directional.

FIFO:-
A FIFO (named pipe) enables general communication between two unrelated processes. It is
often described as full-duplex, meaning one process can communicate with another process
and vice versa.

Role of Synchronization in Inter-Process Communication


The following methods are used to provide synchronization:

1. Mutual Exclusion

2. Semaphore



3. Barrier

4. Spinlock

Here’s a table with explanations of Mutual Exclusion, Semaphore, Barrier, and Spinlock:

| Concept | Definition | Purpose | Example |
|---|---|---|---|
| Mutual Exclusion | Ensures that only one process or thread can enter the critical section at a time. | To avoid race conditions by allowing only one thread to access shared resources at a time. | A bank account system where only one thread can update the balance at a time to prevent inconsistent data. |
| Semaphore | A synchronization primitive that controls access to shared resources using a counter. There are two types: binary and counting semaphores. | To manage access to a limited number of resources by multiple threads or processes. | A printer pool where only 3 printers are available, and a counting semaphore controls how many threads can use a printer at a time. |
| Barrier | A synchronization mechanism that prevents processes or threads from proceeding until all involved processes reach a certain point. | To synchronize the execution of multiple threads, ensuring all reach a common point before continuing. | In parallel processing, threads perform steps in stages, and they all must reach a "barrier" before moving to the next stage. |
| Spinlock | A type of lock where a process repeatedly checks if a resource is available, often in a loop, without performing any other operation (busy waiting). | To control access to a critical section by making a thread continuously check if the lock is available. | A thread repeatedly checks a lock variable to see if it's free, then acquires it when available, avoiding context switching but consuming CPU time during waiting. |

Key Points:
Mutual Exclusion ensures only one process can access critical sections at a time.

Semaphore manages access to resources using a counter, with binary or counting types.

Barrier synchronizes multiple threads to reach a common point before continuing.

Spinlock is a lock type where the process stays active, repeatedly checking for lock
availability.
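Mutual exclusion from the table above can be sketched with Python's `threading.Lock`: two threads increment a shared counter inside a lock-guarded critical section, so no updates are lost and the final value is deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    global counter
    for _ in range(times):
        with lock:        # only one thread may hold the lock at a time
            counter += 1  # critical section: read-modify-write on shared data

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- no lost updates
```

Without the lock, the two read-modify-write sequences could interleave (a race condition) and the final count would be unpredictable.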

5. Concurrency and Synchronization:


Concurrency is about managing multiple tasks at the same time, allowing them to
progress without necessarily executing simultaneously (e.g., running multiple programs or
threads).

Problems in Concurrency:

Race conditions: When two or more threads/processes access shared data


concurrently, leading to unpredictable results.



Critical section: A portion of code that accesses shared resources and needs to be
executed by only one thread at a time.

Synchronization is about coordinating tasks to ensure correct access to shared


resources, preventing conflicts and ensuring consistency (e.g., using locks to prevent
multiple threads from modifying the same data). Mechanisms (like mutexes and
semaphores) to control access to shared resources to prevent data inconsistency.

Monitor: It is a synchronization construct that ensures mutual exclusion and allows


processes to wait for specific conditions while managing access to shared resources in
concurrent programming.

Mutex is used for mutual exclusion where only one thread can access a critical section
at a time, and it enforces ownership.

Semaphore can be used to allow multiple threads to access a resource simultaneously


(in the case of counting semaphores) or for mutual exclusion (in the case of binary
semaphores).

Binary Semaphore: A semaphore with only two values (0 or 1), used for mutual
exclusion to allow only one thread to access a resource at a time.

Counting Semaphore: A semaphore with integer values, used to manage access to


a pool of resources, allowing multiple threads to access a limited number of
resources.

Key Differences Between Binary and Counting Semaphore:


| Feature | Binary Semaphore | Counting Semaphore |
|---|---|---|
| Value | 0 or 1 | Any non-negative integer |
| Purpose | To manage mutual exclusion (like a lock) | To manage multiple resources |
| Resource Management | Only one resource can be accessed at a time | Multiple resources can be accessed simultaneously |
| Usage | Single resource synchronization (e.g., mutual exclusion) | Managing a pool of resources (e.g., printers, buffers) |
| Example | Mutex (mutual exclusion) | Managing multiple identical resources like printers, slots, etc. |

Semaphore Operations:

| Operation | Description | Effect | Example |
|---|---|---|---|
| wait() | Also called P or Down. It requests or acquires a resource by decreasing the semaphore's value. If the value is 0, the process gets blocked. | Decreases the semaphore value. If it's 0, the process is blocked until the value is greater than 0. | A process calls wait() when a printer is needed. If all printers are in use (semaphore value = 0), the process waits until a printer becomes available. |
| signal() | Also called V or Up. It releases or signals the availability of a resource by increasing the semaphore's value, possibly waking up blocked processes. | Increases the semaphore value. If any processes are blocked, one of them may be woken up to proceed. | After a process finishes printing, it calls signal(), increasing the semaphore value and signaling that a printer is available for other processes. |
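The printer-pool example can be sketched with Python's `threading.Semaphore`, where `acquire()` plays the role of wait()/P and `release()` plays the role of signal()/V. Ten "print jobs" compete for 3 printers; the observed concurrency never exceeds the semaphore's initial count.

```python
import threading
import time

printers = threading.Semaphore(3)   # counting semaphore: 3 printers available
guard = threading.Lock()
current = 0
max_concurrent = 0

def print_job():
    global current, max_concurrent
    printers.acquire()               # wait()/P: blocks when the count is 0
    with guard:
        current += 1
        max_concurrent = max(max_concurrent, current)
    time.sleep(0.01)                 # simulate the printing work
    with guard:
        current -= 1
    printers.release()               # signal()/V: a printer is free again

threads = [threading.Thread(target=print_job) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_concurrent)  # at most 3, enforced by the semaphore
```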

6. Deadlock:
Definition: Deadlock occurs when a set of processes are blocked because each process is
holding a resource and waiting for another resource held by a different process.

Conditions (necessary for deadlock):

Mutual exclusion: Resources cannot be shared.

Hold and wait: A process holding one resource is waiting for others.

No preemption: Resources cannot be forcibly taken from processes.

Circular wait: A circular chain of processes exists where each process is waiting for a
resource held by the next process.

Handling Strategies:

| Strategy | Description | Pros | Cons |
|---|---|---|---|
| Deadlock Ignorance (Ostrich Algorithm) | OS ignores deadlock, assuming it doesn't happen. | Simple, focuses on performance. | Deadlocks may occur, requiring manual restart. |
| Deadlock Prevention | Prevents deadlock by violating one of the four necessary conditions (mutual exclusion, etc.). | Prevents deadlock from occurring. | May reduce system performance. |
| Deadlock Avoidance (Banker's Algorithm) | OS checks if the system is in a safe state before allocating resources. | Actively prevents deadlock. | Can slow down system due to constant checks. |
| Deadlock Detection & Recovery | OS detects deadlock after it occurs and takes action (e.g., process termination, resource preemption). | Detects and handles deadlock when it occurs. | Causes downtime or performance issues for recovery. |

Deadlock Prevention: techniques such as resource ordering or timeout mechanisms can be used
to avoid deadlock, while resource allocation graphs can be used to detect it.
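Resource ordering breaks the circular-wait condition: if every thread acquires locks in one agreed-upon global order, no cycle of waiting threads can form. A minimal sketch (ordering by `id()` is just one possible total order chosen for illustration):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(first, second, log, label):
    # Impose a single global acquisition order on the two locks,
    # regardless of the order the caller named them in.
    lo, hi = (first, second) if id(first) < id(second) else (second, first)
    with lo:
        with hi:
            log.append(label)   # critical section holding both locks

log = []
# The two threads name the locks in OPPOSITE orders -- without the
# ordering rule above, this pattern is the classic deadlock recipe.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, log, "t1"))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, log, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(log))  # ['t1', 't2'] -- both complete, no deadlock
```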

Spooling
Spooling is a process where data is temporarily stored in a queue or buffer to be processed
later, typically by slower devices like printers. Multiple tasks can send their data to the spool,
which holds it until the device is ready to process it. This helps manage device access, prevent
delays, and enable multitasking.

Spooling can be an effective approach to violating the mutual-exclusion condition (as a
deadlock prevention technique), but it suffers from two kinds of problems.

1. This cannot be applied to every resource.

2. After some point of time, there may arise a race condition between the processes to get
space in that spool.

Example:
In printing, print jobs are spooled, meaning they are stored in a queue and printed one after
another as the printer becomes available.



Benefits:
Efficient device management.

Prevents delays and allows multitasking.

Banker’s Algorithm:
Real-Life Example of the Banker's Algorithm:
Imagine a bank with a limited amount of money (resources), and several customers
(processes) who want to withdraw money (request resources). The bank must ensure that it
always has enough money to meet future withdrawal requests without running out of funds
(deadlock).

Scenario:
Bank's Total Resources (Money): ₹1000

Customers' Requests:

Customer 1: Wants ₹500

Customer 2: Wants ₹300

Customer 3: Wants ₹200

Banker's Algorithm Process:


1. Customer 1 requests ₹500.

The bank checks if this request is less than or equal to the total money available.

The request is valid, and the bank gives ₹500 to Customer 1.

2. Customer 2 requests ₹300.

The bank checks if ₹300 is less than or equal to the remaining ₹500.

The request is valid, and the bank gives ₹300 to Customer 2.

3. Customer 3 requests ₹200.

The bank checks if ₹200 is less than or equal to the remaining ₹200.

The request is valid, and the bank gives ₹200 to Customer 3.

At every step, the bank ensures that after allocating the money, it still has enough funds to
cover all future withdrawals. The process continues in a way that ensures no customer will be
left waiting forever or cause a shortage of funds.

This is how the Banker's Algorithm works: it checks if resources can be allocated safely
(without running out) before giving them to customers, preventing deadlock.
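The safety check at the heart of the Banker's Algorithm can be sketched for a single resource type (the bank's money). The figures are hypothetical, chosen to mirror the scenario above: before granting a request, verify that some order exists in which every customer's maximum remaining need can still be met.

```python
def is_safe(available, allocated, max_need):
    """Return True if the system is in a safe state for one resource type."""
    need = [m - a for m, a in zip(max_need, allocated)]
    finished = [False] * len(allocated)
    work = available
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocated)):
            if not finished[i] and need[i] <= work:
                work += allocated[i]   # customer i finishes and returns funds
                finished[i] = True
                progressed = True
    return all(finished)               # safe only if everyone can finish

# Bank holds Rs. 1000; maximum needs are 500, 300, 200; nothing lent yet.
print(is_safe(1000, [0, 0, 0], [500, 300, 200]))        # True: safe to proceed
# Over-committed state: only Rs. 50 free, but every customer may still
# need Rs. 100 more -- no one can finish, so the state is unsafe.
print(is_safe(50, [450, 300, 200], [550, 400, 300]))    # False: unsafe
```

Granting a request is simulated by reducing `available` and increasing the customer's `allocated` amount; the bank only commits the grant if `is_safe` still returns True afterwards.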

7. Memory Management:
Definition: Memory management is the process by which an OS manages physical and
virtual memory. It ensures that processes have enough memory to execute and protects
the memory space of one process from another.



Key Techniques:

Paging: Divides physical memory into fixed-size blocks and maps them to logical
memory blocks.

Segmentation: Divides memory into segments based on logical divisions like code,
data, etc.

Memory Allocation: Includes strategies like First Fit, Best Fit, and Worst Fit for
assigning memory to processes.

Garbage Collection: Automatic recycling of memory that is no longer used.
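The First Fit and Best Fit strategies can be sketched over a hypothetical list of free holes: First Fit takes the first hole large enough for the request, while Best Fit scans all holes and takes the smallest one that fits.

```python
def first_fit(holes, request):
    """Return the index of the first hole big enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole big enough, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # free hole sizes in KB (made-up)
print(first_fit(holes, 212))  # 1 -> the 500 KB hole (first that fits)
print(best_fit(holes, 212))   # 3 -> the 300 KB hole (smallest that fits)
```

Best Fit leaves the smallest leftover fragment (88 KB here versus 288 KB for First Fit), but the scan costs more and tends to create many tiny, hard-to-use holes over time.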

8. Virtual Memory:
Definition: Virtual memory allows the system to use more memory than what is physically
available by using disk space as an extension of RAM. It gives the illusion of a larger main
memory.

Components:

Page Table: Maps virtual addresses to physical addresses.

Swapping: Involves moving data between RAM and disk (swap space) to manage
memory effectively.

Advantages: Provides processes with the illusion of a large, contiguous memory space
and isolates processes from each other’s memory.

Process Management in OS
A program does nothing unless its instructions are executed by a CPU. A program in execution
is called a process. In order to accomplish its task, a process needs computer resources. The
operating system is responsible for the following process management activities:

1. Scheduling processes and threads on the CPUs.

2. Creating and deleting both user and system processes.

3. Suspending and resuming processes.

4. Providing mechanisms for process synchronization.

5. Providing mechanisms for process communication.

Process States
PCB: Process Control Block. Every process has a PCB, a data structure that acts as a big
lookup table for the process. The PCB is also called the Process Descriptor.



State Diagram

| State | Description | Key Points |
|---|---|---|
| New | A program waiting to be picked up by the OS and loaded into the main memory. | The initial state of a process before it is moved to the ready state. |
| Ready | Processes in the main memory waiting for CPU assignment. | Multiple processes can be in the ready state. |
| Running | The process currently being executed by the CPU. | Only one process can run per CPU at a time (for a single-core system). If there are n processors, n processes can run. |
| Block or Wait | A process waiting for a resource or input, moved from the running state. | The process waits for a specific resource to continue execution, allowing the CPU to be assigned to another process. |
| Completion/Termination | A process that has finished execution and is removed by the OS. | All associated resources, including the Process Control Block (PCB), are deleted. |
| Suspend Ready | A ready process moved to secondary memory due to insufficient resources in the main memory. | Happens when the main memory is full. Typically, lower-priority processes are moved to make room for higher-priority ones. |
| Suspend Wait | A blocked process moved to secondary memory to free up space in the main memory. | Happens when resources are scarce. The process remains in the wait state until resources and main memory are available. |

Process Scheduling in OS (Operating System)


1. Long term scheduler
The long term scheduler is also known as the job scheduler. It selects jobs from the job pool
in secondary memory and loads them into the ready queue; it runs infrequently, so the time
between its invocations is quite long.

2. Short term scheduler


The short term scheduler is also known as the CPU scheduler. It selects a process from the
ready queue and dispatches it to the CPU; the time that each process gets on the CPU is quite
small, so this scheduler runs very frequently.

3. Medium term scheduler


The medium term scheduler takes care of swapped-out processes. It moves processes from
the ready state to the suspend state (and back) to manage the degree of multiprogramming.

Process Queues



1. Job Queue
Initially, all processes are stored in the job queue, which is maintained in secondary memory.
The long term scheduler (job scheduler) picks some of these jobs and puts them in primary
memory.

2. Ready Queue
The ready queue is maintained in primary memory. The short term scheduler picks a job from
the ready queue and dispatches it to the CPU for execution.

3. Waiting Queue
When a process needs an I/O operation to complete its execution, the OS changes its state
from running to waiting. The context (PCB) associated with the process is stored in the
waiting queue and is used by the processor once the process finishes its I/O.

Various Times related to the Process

[Timeline: a process arrives (Arrival Time), waits in the ready queue (Waiting Time), runs on the CPU (Burst Time), and finishes (Completion Time).]

CT - AT = WT + BT

TAT = CT - AT

Waiting Time = TAT - BT

where:
TAT = Turnaround Time
BT = Burst Time
AT = Arrival Time
CT = Completion Time
WT = Waiting Time

1. Arrival Time
The time at which the process enters into the ready queue is called the arrival time.

2. Burst Time
The total amount of CPU time required to execute the whole process is called the Burst Time.
This does not include the waiting time. It is difficult to know the execution time of a process
before actually executing it, hence scheduling algorithms based purely on burst time cannot
be implemented exactly in practice.

3. Completion Time
The Time at which the process enters into the completion state or the time at which the
process completes its execution, is called completion time.

4. Turnaround time
The total amount of time spent by the process from its arrival to its completion, is called
Turnaround time.

5. Waiting Time
The Total amount of time for which the process waits for the CPU to be assigned is called
waiting time.

6. Response Time
The difference between the arrival time and the time at which the process first gets the CPU is
called Response Time.

7. Throughput
The amount of work that is completed per unit time. The short-term scheduler (STS) aims for
high throughput.

T = n / L

where n = number of processes and L = schedule length = Max(Completion Time) - Min(Arrival Time).
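The timing formulas can be checked on a small worked example. Two made-up processes run back-to-back (first-come, first-served); for each one we compute CT, then TAT = CT - AT and WT = TAT - BT, and finally throughput = n / L.

```python
processes = [        # (name, arrival time, burst time) -- made-up values
    ("P1", 0, 4),
    ("P2", 1, 3),
]

clock = 0
results = {}
for name, at, bt in processes:
    clock = max(clock, at) + bt      # completion time under FCFS
    tat = clock - at                 # turnaround time: TAT = CT - AT
    wt = tat - bt                    # waiting time:    WT = TAT - BT
    results[name] = (clock, tat, wt)

print(results)   # {'P1': (4, 4, 0), 'P2': (7, 6, 3)}

n = len(processes)
L = max(ct for ct, _, _ in results.values()) - min(at for _, at, _ in processes)
print(n / L)     # throughput = 2 / 7 processes per unit time
```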

CPU Scheduling
In uniprogramming systems, the CPU remains idle during I/O operations, wasting time. In
contrast, multiprogramming systems use CPU scheduling and context switching to keep the
CPU busy by switching between processes in the ready queue.
[Context switching occurs when the CPU saves the current process's state (PCB) and loads
another process's state, enabling multitasking without losing process progress.]

Scheduling Algorithms in OS (Operating System)


The Purpose of a Scheduling Algorithm
1. Maximum CPU utilization

2. Fair allocation of CPU

3. Maximum throughput

4. Minimum turnaround time

5. Minimum waiting time

6. Minimum response time

| Scheduling Algorithm | Description | Advantages | Disadvantages |
|---|---|---|---|
| First-Come, First-Served (FCFS) | Processes are scheduled in the order they arrive. | Simple to implement. | Can lead to convoy effect, where short processes are delayed by long ones. |
| Shortest Job First (SJF) | The process with the shortest burst time (execution time) is scheduled first. | Minimizes average waiting time. | Difficult to predict burst times; may lead to starvation of longer jobs. |
| Priority Scheduling | Each process is assigned a priority; the one with the highest priority is executed first. | Easy to implement and can be adjusted to give higher priority to important tasks. | Starvation can occur if lower priority processes never get executed. |
| Round Robin (RR) | Each process is assigned a fixed time slice (quantum); executed in a circular order. | Fair and simple; good for time-sharing systems. | Can lead to high turnaround time, especially with large time slices. |
| Multilevel Queue Scheduling | Processes are divided into different priority queues; each queue has its own scheduling algorithm. | Allows for a mix of scheduling algorithms (e.g., FCFS for foreground, RR for background). | Complex to manage and can lead to issues if processes are stuck in the wrong queue. |
| Multilevel Feedback Queue | Similar to multilevel queue, but processes can move between queues based on their behavior. | More dynamic and responsive to changing process characteristics. | Complex to implement and configure. |

FCFS:
The convoy effect refers to a situation in which processes or threads in a system are delayed
or slowed down because they must wait for other processes that are currently holding
resources. This creates a "chain" of delays, as slower or blocked processes prevent others
from making progress, leading to inefficiency.
Shortest Job First (SJF):

It is particularly efficient when job lengths are similar because it minimizes average waiting
time by executing the shortest jobs first. If job lengths vary significantly, it may lead to longer
jobs being delayed, causing starvation of those processes.


Starvation in operating systems refers to a situation where a process is perpetually delayed


from getting the CPU or other resources because higher-priority or more favorable processes
are always scheduled first.

Resource Allocation Graph


The resource allocation graph is the pictorial representation of the state of a system.

Memory Management
Memory Management refers to the process of efficiently managing a computer's memory
resources, ensuring that programs and processes have access to the memory they need while
optimizing the system's overall performance.



| Type of Memory | Description | Speed | Capacity | Cost | Example |
|---|---|---|---|---|---|
| Cache Memory | A small, high-speed memory located close to the CPU, stores frequently accessed data. | Very Fast | Small (KB to MB) | Expensive | CPU cache, L1/L2 cache |
| Main Memory (RAM) | Primary memory used by the CPU to store running processes and data. | Fast | Moderate (GB) | Less Expensive | DRAM, SRAM, DDR4 RAM |
| Secondary Storage | Permanent storage used for long-term data storage. | Slow | Large (TB) | Cheap | Hard Drives, SSDs, Optical Disks |

Key Differences:
Speed: Cache is the fastest, followed by main memory, with secondary storage being the
slowest. The main disadvantage of cache is its limited size.

Capacity: Secondary storage offers the largest capacity, while cache memory is the
smallest.

Cost: Cache memory is the most expensive per GB, followed by main memory, and
secondary storage is the cheapest.

Partitioning
It is one of the strategies used to implement memory management. It divides the system's
memory into blocks or partitions, making it easier to allocate and manage memory for
processes. The way memory is partitioned affects how well memory management works.

How Partitioning Supports Memory Management:


Fixed Partitioning:

Memory is pre-divided into fixed-sized blocks. Memory management ensures that


processes are allocated to these partitions. The challenge here is that fixed partitioning
can lead to internal fragmentation (unused memory within a partition), but it's easier to
manage.

Dynamic Partitioning:

Memory is divided into partitions of varying sizes as needed, based on the size of the
processes. This helps reduce internal fragmentation, but can cause external
fragmentation (unused memory scattered between allocations). Memory management
includes techniques like compaction to solve fragmentation problems.



Virtual Memory:

Partitioning is used in virtual memory systems where physical memory (RAM) is


divided into pages or segments, and the operating system manages the mapping of
virtual memory to physical memory

In paging, the smallest unit of virtual memory (the address space of a process) is
called a page.

The smallest unit of physical memory (actual RAM) is called a frame.

Fragmentation:
| Type of Fragmentation | Description | Where it Occurs | Example | Solution |
|---|---|---|---|---|
| Internal Fragmentation | Wasted space within a partition due to unused memory in fixed-size allocations. | Fixed partitioning, paging | A process allocated 4KB, but only uses 3KB, leaving 1KB wasted. | Compaction (for dynamic partitioning); memory pooling; reducing partition sizes |
| External Fragmentation | Wasted space between allocations, making it hard to find contiguous free memory blocks. | Dynamic partitioning, variable memory allocation | Small free memory blocks scattered (1KB, 2KB), unable to allocate a 4KB process. | Compaction (move memory to make contiguous space); paging/segmentation; memory pooling |

Paging:
Definition: A memory management scheme that divides memory into fixed-size units
called pages. This eliminates the need for contiguous memory allocation.

Purpose: Helps manage virtual memory efficiently by breaking memory into smaller
chunks, allowing non-contiguous allocation.

Page Replacement Algorithms:


When memory is full and a new page needs to be loaded, the system must decide which page
to replace. Here are some common page replacement algorithms:

| Page Replacement Algorithm | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| First-In, First-Out (FIFO) | Replaces the oldest page in memory (the one that has been in memory the longest). | Simple to implement. | Can lead to poor performance (not always optimal); may cause Belady's anomaly, where increasing the number of frames increases page faults. |
| Optimal (OPT) | Replaces the page that will not be used for the longest period of time in the future. | Provides the minimum number of page faults (theoretical optimum). | Not feasible in practice because it requires future knowledge of page references. |
| Least Recently Used (LRU) | Replaces the page that has not been used for the longest time. | Efficient and closely approximates the optimal algorithm. | Requires keeping track of access history, which can be complex and may add overhead. |
| Least Frequently Used (LFU) | Replaces the page that has been accessed the least number of times. | Good for systems where frequently used pages should remain in memory. | Can evict newly loaded pages (low access counts) while retaining pages that were heavily used only in the past. |
| Most Recently Used (MRU) | Replaces the most recently accessed page. | Simple to implement; useful in some specific contexts. | Not ideal for general use, as it tends to replace useful pages that were just recently used. |
| Clock (Second-Chance) | Uses a circular queue and gives pages a second chance before replacing them. Each page has a reference bit (1 or 0); if a page is referenced (bit = 1), it gets a second chance, otherwise it is replaced. | More efficient than FIFO; does not require complex data structures. | Still does not always perform as well as LRU in all cases. |

Infra Support (OS) 21
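The FIFO and LRU policies above can be simulated in a few lines of Python. The reference string below is the classic example that triggers Belady's anomaly under FIFO; the code is an illustrative sketch, not tied to any particular OS.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under First-In, First-Out replacement."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())  # evict the oldest page
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under Least Recently Used replacement."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used page
            memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]       # classic Belady reference string
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 -> Belady's anomaly
print(lru_faults(refs, 3), lru_faults(refs, 4))    # 10 8 -> LRU improves with more frames
```

Note how FIFO produces more faults with four frames than with three on this string, while LRU behaves as expected.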

Thrashing is a situation in which the operating system spends a significant amount of time
swapping data between main memory (RAM) and secondary storage (e.g., hard disk or swap
space), instead of executing the actual processes. This happens when the system’s physical
memory is so over-committed that it cannot keep up with the demands of running processes,
leading to excessive page faults and swapping.

Partitioning and Segmentation:

| Aspect | Partitioning | Segmentation |
| --- | --- | --- |
| Definition | Dividing memory into fixed or dynamic blocks. | Dividing memory into variable-sized segments based on logical divisions. |
| Memory Allocation | Allocates contiguous blocks of memory. | Allocates variable-sized blocks based on the program's needs. |
| Fragmentation | Can lead to internal fragmentation (if fixed partitions are used) or external fragmentation (if dynamic partitions are used). | Mainly causes external fragmentation, as segments vary in size. |
| Flexibility | Less flexible, as memory is pre-divided into fixed or dynamic blocks. | More flexible, as it divides memory based on the program's logical structure. |
| Use Case | Used in systems with simpler memory management needs (e.g., fixed-size memory for each process). | Used when processes require logical divisions like code, data, stack, etc. |
| Example | Fixed Partitioning in older OS, or Dynamic Partitioning in some OS for process allocation. | Memory management in segmented systems like certain early microprocessors. |

Paging vs Memory Pooling

| Aspect | Paging | Memory Pooling |
| --- | --- | --- |
| Definition | Divides memory into fixed-size pages. | Pre-allocates memory in fixed-size blocks. |
| Purpose | Manages virtual memory and reduces external fragmentation. | Manages memory for objects of similar size, reducing internal fragmentation. |
| Fragmentation | Reduces external fragmentation but may cause internal fragmentation. | Reduces internal fragmentation but may cause external fragmentation. |
| Use Case | Virtual memory systems (e.g., operating systems). | Fast allocation for objects of the same size (e.g., games, real-time systems). |
| Efficiency | Efficient for handling large memory with complex applications. | Efficient for managing uniform-sized memory blocks. |

Paging is primarily used in virtual memory management, while memory pooling is used to
efficiently allocate memory for uniform objects.

Basics on Virtual Machines, Storage Solutions, Networking Components, and Infrastructure-related Concepts:
Table 1: Virtual Machines (VMs)

| Concept | Details |
| --- | --- |
| Definition | VMs simulate physical computers, allowing multiple OS to run on one machine. They offer isolation, flexibility, and better resource utilization. |
| Benefits | - Isolation: VMs are independent; one crash doesn't affect others. - Resource Utilization: multiple VMs on one host optimize hardware. - Flexibility: VMs can be easily created, cloned, or deleted. - Disaster Recovery: snapshots can be used to revert to a previous state. |
| Hypervisor Types (software that creates and manages virtual machines (VMs) on a physical host machine) | - Type 1 (Bare Metal): runs directly on hardware (e.g., VMware ESXi, Microsoft Hyper-V). - Type 2 (Hosted): runs atop an existing OS (e.g., Oracle VirtualBox, VMware Workstation). |

Table 2: Storage Solutions

| Type | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| HDD (Hard Disk Drive) | Mechanical storage device with spinning disks. | Cost-effective; large storage capacity. | Slower read/write speeds; prone to physical damage. |
| SSD (Solid State Drive) | Flash memory with no moving parts. | Faster read/write speeds; more durable; lower power consumption. | Expensive; lower storage capacity compared to HDD. |
| NAS (Network Attached Storage) | Storage device connected to a network. | Centralized data management; scalable storage; easy file sharing across devices. | Slower data access compared to SAN; may require additional configuration. |
| SAN (Storage Area Network) | High-speed network for block-level data storage. | Fast data access; suitable for large enterprises. | Expensive; complex to set up and manage. |
| Cloud Storage | Online data storage accessible via the internet. | Scalable; backup solutions; collaboration tools. | Requires internet connection; potential security risks. |

Table 3: Networking Components

| Component | Description | Function | Example Devices |
| --- | --- | --- | --- |
| Router | A device that routes data packets between networks. | Directs traffic, connects networks, provides internet access. | Cisco Router, Linksys Router |
| Switch | Device that connects devices within the same network. | Operates at the data link layer; forwards data to specific devices using MAC addresses. | Netgear Switch, Cisco Catalyst |
| Firewall | Security device that monitors and controls network traffic. | Protects the network from unauthorized access. | Cisco ASA Firewall, pfSense Firewall |
| Access Point | Device allowing wireless devices to connect to the network. | Extends network coverage, providing Wi-Fi. | TP-Link Access Point, Ubiquiti AP |
| Modem | Converts digital signals from your ISP to analog signals for transmission over telephone lines. | Provides internet access by connecting to the ISP's network. | Motorola Surfboard, Netgear Modem |
| Load Balancer | Distributes incoming network traffic across multiple servers. | Ensures even distribution of traffic to maintain performance and uptime. | AWS ELB, F5 Big-IP |

Comparison between Router, Switch, Bridge, and Hub:

| Device | Function | Layer | Data Handling | Example |
| --- | --- | --- | --- | --- |
| Router | Routes data between different networks | Layer 3 (Network) | Uses IP addresses | Connects a local network to the internet |
| Switch | Connects devices within a network | Layer 2 (Data Link) | Uses MAC addresses | Connecting devices like computers and printers in a LAN |
| Bridge | Connects two networks, reduces collisions | Layer 2 (Data Link) | Filters traffic based on MAC addresses | Connecting two LANs to reduce network traffic |
| Hub | Broadcasts data to all devices in a network | Layer 1 (Physical) | Sends data to all ports | Older networks; connects multiple devices in a small network |

Key Differences:
Router: Routes between networks using IP addresses.

Switch: Connects devices within a network using MAC addresses.

Bridge: Reduces traffic between networks.

Hub: Broadcasts data to all devices; inefficient.

Table 4: Infrastructure-Related Concepts

| Concept | Description | Impact on Performance |
| --- | --- | --- |
| Processors (CPU) | The CPU is the brain of the computer, executing instructions. | Performance is affected by core count, clock speed, and architecture. |
| Core Count | Modern processors may have multiple cores for better multitasking. | More cores allow more simultaneous operations, enhancing performance. |
| Clock Cycle | The basic unit of time in a computer's processor, during which the CPU performs operations like executing instructions. Measured in hertz (Hz). | Higher clock speeds (GHz) mean faster processing. |
| Cache Memory | A small, high-speed memory that stores frequently accessed data. | Reduces latency by keeping the most frequently used data close to the CPU. |
| Cache Levels | - L1: fastest, smallest, located within the CPU. - L2: larger, slower, located near the CPU. - L3: largest, shared by multiple cores. | Reduces CPU data access time and improves system performance. |
| Threading | Allows a single CPU core to handle multiple tasks simultaneously. | Improves multitasking and application performance. |
| Power Supply Unit (PSU) | Supplies electrical power to all components of the computer. | Efficiency of the PSU affects power consumption and system stability. |

Table 5: HDD vs. SSD Comparison

| Feature | HDD | SSD |
| --- | --- | --- |
| Storage Type | Magnetic storage with moving parts. | Flash memory with no moving parts. |
| Speed | Slower read/write speeds, especially for random access. | Faster read/write speeds, particularly for sequential and random access. |
| Durability | Prone to physical damage due to moving parts. | More durable, as there are no moving parts. |
| Cost | More cost-effective per GB. | More expensive per GB. |
| Capacity | Larger storage capacity available at lower prices. | Lower storage capacity compared to HDDs. |
| Power Consumption | Higher power consumption due to moving parts. | Lower power consumption. |

Disk Scheduling Algorithms in OS (Operating System)

Purpose of Disk Scheduling
The main purpose of a disk scheduling algorithm is to select a disk request from the queue of I/O requests and decide when this request will be processed.

Goal of Disk Scheduling Algorithm



Fairness

High throughput

Minimal traveling head time

Disk Scheduling Algorithms


The various disk scheduling algorithms are listed below. Each algorithm has its own advantages and disadvantages, and the limitations of each algorithm led to the evolution of the next.

FCFS scheduling algorithm

SSTF (shortest seek time first) algorithm

SCAN scheduling

C-SCAN scheduling

LOOK Scheduling

C-LOOK scheduling

FCFS Scheduling Algorithm


It is the simplest disk scheduling algorithm. It services the I/O requests in the order in which they arrive. There is no starvation in this algorithm; every request is serviced.

SSTF Scheduling Algorithm


The shortest seek time first (SSTF) algorithm selects the disk I/O request that requires the least disk arm movement from its current position, regardless of direction. It reduces the total seek time compared to FCFS.

Scan Algorithm
It is also called the Elevator Algorithm. In this algorithm, the disk arm moves in a particular direction till the end, satisfying all the requests coming in its path, and then it turns back and moves in the reverse direction, satisfying the requests coming in its path.

It works the way an elevator works: the elevator moves in one direction completely till the last floor of that direction and then turns back.

| Algorithm | Description | Advantages | Disadvantages | Use Case |
| --- | --- | --- | --- | --- |
| FCFS (First-Come-First-Serve) | Processes I/O requests in the order they arrive. | Simple to implement; fair for all processes. | Can lead to long wait times (convoy effect). | Suitable for low-load systems where simplicity is crucial. |
| SSTF (Shortest Seek Time First) | Selects the request closest to the current head position, minimizing seek time. | Reduces overall seek time compared to FCFS. | Can cause starvation for requests far from the current head. | Good for systems with medium workloads. |
| SCAN (Elevator Algorithm) | Moves the disk head in one direction, servicing requests until the end, then reverses. | Reduces seek time for requests in one direction. | Can waste time traveling to the boundary unnecessarily. | Suitable for systems with evenly distributed I/O requests. |
| LOOK | Similar to SCAN, but stops at the farthest request instead of going to the disk boundary. | Avoids unnecessary movement to the boundary. | Slightly more complex to implement than SCAN. | Efficient for systems with clustered I/O requests. |
| C-SCAN (Circular SCAN) | Like SCAN, but the head only moves in one direction and jumps back to the start when reaching the end. | Provides uniform wait time for all requests. | Jumping back to the start increases seek time. | Best for systems with a high volume of I/O requests. |
| C-LOOK | Like LOOK, but the head only moves in one direction and jumps back to the start after servicing the last request. | Optimizes movement by not going to boundaries unnecessarily. | Slightly more complex than C-SCAN. | Efficient for high-demand systems with clustered requests. |
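To compare the algorithms numerically, here is a Python sketch that totals head movement for FCFS, SSTF, and a simplified SCAN that sweeps toward higher cylinders first. The request queue, the start position of 53, and the 200-cylinder disk are illustrative assumptions, not values from these notes.

```python
def fcfs(requests, head):
    """Total head movement when servicing requests in arrival order."""
    total = 0
    for r in requests:
        total += abs(head - r)
        head = r
    return total

def sstf(requests, head):
    """Total head movement when always picking the closest pending request."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

def scan(requests, head, disk_size=200):
    """Simplified SCAN: sweep up to the last cylinder, then reverse (elevator)."""
    higher = [r for r in requests if r >= head]
    lower = [r for r in requests if r < head]
    total = 0
    if higher:
        total += (disk_size - 1) - head  # sweep to the far end, servicing on the way
        head = disk_size - 1
    if lower:
        total += head - min(lower)       # reverse down to the lowest request
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]  # a textbook-style request queue
print(fcfs(queue, 53), sstf(queue, 53), scan(queue, 53))  # 640 236 331
```

The gap between FCFS (640 cylinders) and SSTF (236) shows why ordering requests by head position matters; SCAN pays extra for the sweep to the boundary but avoids SSTF's starvation risk.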

Backup and Recovery, Security Compliance, and Computing Environments
1. Backup and Recovery Practices
| Aspect | Description | Example | Key Point |
| --- | --- | --- | --- |
| Full Backup | A complete copy of all data and system files. | Backing up an entire database or system daily. | Simplifies recovery but is time- and storage-intensive. |
| Incremental Backup | Backs up only the data changed since the last backup. | After a full backup on Monday, backing up only Tuesday's changes. | Fast and storage-efficient, but recovery is complex as it requires multiple backups. |
| Differential Backup | Backs up all data changed since the last full backup. | After a full backup on Monday, a differential backup on Wednesday includes all changes from Monday to Wednesday. | Faster recovery than incremental but consumes more space as data accumulates. |
| 3-2-1 Rule | Keep three copies of data: two local (on different devices) and one off-site. | Store backups on a local hard drive, a NAS, and cloud storage. | Ensures data redundancy and disaster protection. |
| RTO and RPO | RTO (Recovery Time Objective): time to restore operations. RPO (Recovery Point Objective): maximum acceptable data loss. | RTO: 2 hours to restore a server. RPO: 15 minutes of transaction loss. | Helps define business continuity requirements and backup frequency. |
| Disaster Recovery Plan | Strategy to recover IT infrastructure and operations after a disaster. | Documented plan for restoring services after a ransomware attack. | Includes roles, responsibilities, and recovery steps. |
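The key operational difference between incremental and differential backups is how many backup sets a restore must read. This small Python sketch (with made-up backup names, not from any specific tool) illustrates it for a restore on Thursday after a Monday full backup:

```python
full_backup = "Mon-full"
daily_changes = ["Tue", "Wed", "Thu"]

# Incremental: each backup holds only changes since the PREVIOUS backup,
# so a restore must replay every backup taken since the last full one.
incremental_restore = [full_backup] + [f"{day}-incr" for day in daily_changes]

# Differential: each backup holds ALL changes since the last FULL backup,
# so a restore needs only the full backup plus the latest differential.
differential_restore = [full_backup, f"{daily_changes[-1]}-diff"]

print(incremental_restore)   # ['Mon-full', 'Tue-incr', 'Wed-incr', 'Thu-incr']
print(differential_restore)  # ['Mon-full', 'Thu-diff']
```

The longer incremental chain is why it saves space per backup but makes recovery slower and more fragile: losing any link breaks the restore.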

2. Best Practices for Security and Compliance Controls

| Aspect | Description | Example | Key Point |
| --- | --- | --- | --- |
| Role-Based Access Control | Limits access based on user roles, granting minimum privileges. | A finance team member can only access financial data, not HR data. | Reduces insider threats and unauthorized access. |
| Multi-Factor Authentication | Adds additional verification layers (e.g., OTP, biometric). | Login requires a password and an OTP sent to a registered mobile. | Enhances security beyond passwords. |
| Data Encryption | Encrypt data at rest (on disk) and in transit (during transmission). | HTTPS ensures encrypted communication between a browser and a web server. | Protects sensitive information from interception. |
| Incident Response Plan | Documented process to identify, respond to, and recover from security incidents. | Responding to a malware infection with isolation and data recovery steps. | Minimizes impact and downtime during security incidents. |
| Compliance Frameworks | Adherence to industry regulations like GDPR, HIPAA, and PCI-DSS. | Storing healthcare data according to HIPAA standards. | Ensures legal and ethical handling of data. |
| Employee Training | Educate employees on security protocols and practices. | Conducting phishing simulation training for employees. | Reduces human error and improves security posture. |

3. Computing Environments
| Aspect | Windows Environment | Unix/Linux Environment | Key Point |
| --- | --- | --- | --- |
| Architecture | GUI-based OS with system services for managing hardware/software. | Multi-user, multitasking OS; flexible and robust. | Windows is user-friendly; Unix/Linux offers high performance and scalability. |
| File System | NTFS: supports large files, journaling, and security permissions. | ext4 (Linux), UFS (Unix): hierarchical directory structure, permissions. | Unix/Linux file systems are highly customizable for performance and security. |
| Command Line Interface | PowerShell and Command Prompt for scripting and administrative tasks. | Shells like Bash and Zsh allow automation and advanced scripting. | The Unix/Linux CLI is more powerful for advanced users and admins. |
| User Management | Active Directory manages users and permissions across networks. | User/group permission model with file-level access control. | Unix/Linux allows detailed access control; Windows is easier to manage in enterprise environments. |
| Licensing | Proprietary and requires paid licensing. | Often open-source and free (e.g., Ubuntu, CentOS). | Unix/Linux is cost-effective; Windows is preferred for enterprise solutions. |
| Customization | Limited customization; primarily GUI-driven. | Extensive customization via CLI and open-source tools. | Unix/Linux is ideal for developers and administrators seeking tailored solutions. |
| Example Use Cases | Windows Server for enterprise-level application hosting. | Linux used for web servers (e.g., Apache, Nginx) and cloud platforms. | Choose Windows for GUI apps and ease of use; Unix/Linux for scalable and secure server environments. |

Command Line Interface (CLI) commands for Windows and Unix/Linux environments:
| Operation | Windows Command | Unix/Linux Command | Description |
| --- | --- | --- | --- |
| List Files | dir | ls | List files and directories |
| Change Directory | cd | cd | Change directory |
| Create Directory | mkdir | mkdir | Create a new directory |
| Remove Directory | rmdir | rmdir | Remove an empty directory |
| Delete File | del | rm | Delete a file |
| Copy Files | copy | cp | Copy files |
| Move/Rename File | move | mv | Move or rename files |
| Clear Screen | cls | clear | Clear the command line screen |
| List Processes | tasklist | ps aux | List running processes |
| Network Configuration | ipconfig | ifconfig | Show network configuration |
| Check Network | ping | ping | Check network connectivity |
| Change File Permissions | cacls / icacls | chmod | Change file permissions |
| Change File Ownership | icacls | chown | Change file ownership |
| Shutdown | shutdown | reboot | Shut down or reboot the system |
| Display Message | echo | echo | Display a message in the terminal |
| Search Text in File | find | grep | Search for text within a file |
| Check Disk for Errors | chkdsk | fsck | Check the disk for errors |
| Format Disk | format | N/A | Format a disk (Windows only) |
| Manage Users | net user | useradd | Add or manage user accounts |
| Manage Groups | net localgroup | groupadd | Add or manage user groups |
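For cross-platform scripts, several of the file operations in the table have Python standard-library equivalents. The sketch below (with made-up file names; run it in a scratch directory) mirrors a few of them:

```python
import os
import shutil

os.makedirs("demo", exist_ok=True)                 # mkdir
with open("demo/notes.txt", "w") as f:             # echo "hello world" > notes.txt
    f.write("hello world\n")
shutil.copy("demo/notes.txt", "demo/copy.txt")     # copy / cp
shutil.move("demo/copy.txt", "demo/renamed.txt")   # move / mv
matches = [line for line in open("demo/notes.txt")
           if "hello" in line]                     # find / grep
os.remove("demo/renamed.txt")                      # del / rm
print(sorted(os.listdir("demo")))                  # dir / ls -> ['notes.txt']
```

Using the standard library instead of shelling out keeps the script working on both Windows and Unix/Linux without branching on the command names.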

