
OS IMP QUES SEM

2M
UNIT 1
1.LIST THE ADVANTAGES AND DISADVANTAGES OF WRITING OPERATING SYSTEM IN HIGH LEVEL
LANGUAGE SUCH AS C

2.DEFINE PROCESS AND PROCESS CONTROL BLOCK


Process
A process is a program in execution. When you run a program (like a game or a browser), the operating system
treats it as a process, which includes all the necessary data, instructions, and resources to keep it running.

Process Control Block (PCB)


The Process Control Block is a data structure that holds essential information about a process. Think of it as a
"file" where the operating system stores everything it needs to manage the process, like the process ID, its
current state, registers, memory locations, and other details. The PCB helps the OS keep track of each process’s
status and manage its execution.

3.DIFFERENTIATE OS AND KERNELS. GIVE THE GOALS OF OPERATING SYSTEM

OS:
➢ Operating System is a system software that acts as an intermediary between a user and computer hardware to enable convenient usage of the system and efficient utilization of resources.
➢ The commonly required resources are input/output devices, memory, file storage space, CPU etc. Also, an operating system is a program designed to run other programs on a computer.

KERNEL:
➢ The kernel is the core part of an operating system that provides essential services for all other parts of the operating system and applications.
➢ It manages resources such as memory and CPU time, handles communication between hardware and software components, and ensures security and isolation between different processes and users.

GOALS:
1. To execute user programs.
2. Make solving user problems easier.
3. Make the computer system convenient to use.
4. Use the computer hardware in an efficient manner.
4.DIFFERENCE BETWEEN USER-LEVEL AND KERNEL-LEVEL THREADS

5. DIFFERENCE BETWEEN PROCESS AND THREAD

UNIT 2
1.DEFINE CPU SCHEDULING AND WHAT ARE THE 3 DIFFERENT TYPES OF SCHEDULING QUEUES
CPU scheduling is the process of deciding which task (or process) the CPU should work on at any given time.
Since the CPU can only work on one process at a time, the operating system manages a queue of tasks waiting for
CPU time. Scheduling ensures that each process gets a fair turn, improving efficiency and making sure the system
runs smoothly by switching between tasks quickly. This is especially important in systems where multiple
programs or users need to run at the same time.

Job Queue
Holds all the processes waiting to enter the system. New processes go here first.

Ready Queue
Contains processes loaded in memory and ready for CPU execution, waiting for their turn.

Device (I/O) Queue


Contains processes waiting for I/O tasks (like disk or printer). After I/O, they go back to the ready queue.

2. STATE CONVOY EFFECT AND SOLUTIONS TO OVERCOME IT


The convoy effect occurs in scheduling when a group of processes or threads with long execution times monopolizes the CPU, causing shorter processes or threads to wait for a long time. This effect can lead to poor system performance and increased waiting times for short jobs.

Solutions:

➢ Pre-emptive Scheduling: Allows interrupting long processes in favor of shorter ones.


➢ Priority-Based Scheduling: Gives preference to shorter or higher-priority processes.
➢ Aging: Increases priority of long-waiting processes to prevent starvation.
➢ Round-Robin Scheduling: Allocates CPU time in equal time slices to all processes
3.DRAW THE PROCESS STATE / PROCESS LIFE CYCLE DIAGRAM AND THE DIAGRAM SHOWING CPU SWITCH FROM PROCESS TO PROCESS

4.WHAT IS COOPERATIVE PROCESS


➢ Processes executing concurrently in the operating system may be either independent processes or
cooperating processes.
➢ A process is independent if it cannot affect or be affected by the other processes executing in the
system.
➢ Any process that does not share data with any other process is independent.
➢ There are several reasons for providing an environment that allows process cooperation:
1. Information sharing
2. Convenience
3. Computation speedup
UNIT 3
1.NAME TWO HARDWARE INSTRUCTIONS AND THEIR DEFINITIONS WHICH CAN BE USED FOR
IMPLEMENTING MUTUAL EXCLUSION
Test-and-Set (TAS)
This instruction checks the value of a memory location (like a lock variable) and sets it to a specified value (e.g.,
1) in a single, uninterruptible step. If the previous value was 0, it means the lock was free, and now it’s acquired;
if it was already 1, another process holds the lock. This helps prevent multiple processes from entering the
critical section simultaneously.

Compare-and-Swap (CAS)
This instruction compares the value at a memory location with an expected value, and if they match, it updates
the memory location to a new value. If not, the operation fails. This allows a process to check if a lock is free
(expected value) and, if so, acquire it by setting it to a new value, ensuring atomic access to shared resources.

2.DIFFERENTIATE DEADLOCK AND STARVATION AND LIST THE METHODS TO RECOVER FROM DEADLOCK

DEADLOCK:
➢ A situation where two or more processes are waiting for each other to release resources, resulting in a circular dependency and preventing any process from proceeding.
➢ Conditions: mutual exclusion, hold and wait, no pre-emption, circular wait.

STARVATION:
➢ A situation where a process is unable to make progress indefinitely, despite the system being active and other processes making progress.
➢ Common causes: strict priority-based scheduling, absence of aging, unfair resource allocation.

Recovery methods:

Process Termination

The OS either terminates all processes involved in the deadlock to free up resources quickly, or terminates one process at a time from the deadlock set until the deadlock cycle is broken.

Resource Pre-emption

The OS temporarily reclaims resources from some deadlocked processes and reallocates them to others.

The OS must decide which processes to pre-empt, which resources to reclaim, and ensure the system state can
be restored if needed, requiring careful handling.
3.DEFINE RACE CONDITION AND DEFINE CRITICAL SECTION PROBLEM
RACE CONDITION

A race condition occurs when multiple threads or processes are attempting to access a shared resource
simultaneously, and the outcome of the operation depends on the order in which the accesses occur. This can
lead to unpredictable and often incorrect results.

Consider two threads, A and B, trying to increment a shared variable count.

➢ Thread A: Reads count (value: 0)


➢ Thread B: Reads count (value: 0)
➢ Thread A: Increments count (value: 1)
➢ Thread B: Increments count (value: 1)
➢ Final value of count: 1 (instead of the expected 2)
CS PROBLEM

The critical-section problem is to design a protocol that processes can use to cooperate. Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes its remainder section. Consider a system of n processes {P0, P1, …, Pn-1}, where each process has a critical-section segment of code.

➢ A process may be changing common variables, updating a table, writing a file, etc.
➢ When one process is in its critical section, no other process may be executing in its critical section.

4.DEFINE RESOURCE ALLOCATION GRAPH


➢ A set of vertices V and a set of edges E.
➢ V is partitioned into two types:
➢ P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
➢ R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
➢ request edge – directed edge Pi → Rj
➢ assignment edge – directed edge Rj → Pi
UNIT 4
1.WHAT IS THE PURPOSE OF PAGING THE PAGE TABLE
Paging is a memory management technique used in operating systems to divide both physical and logical
memory into fixed-size blocks called frames and pages, respectively. This allows processes to be stored and
accessed in non-contiguous chunks, eliminating the need for contiguous allocation. Examples are:

➢ Virtual Memory
➢ Process Isolation
➢ Dynamic Memory Allocation
➢ Memory-Mapped Files
Paging the page table helps manage memory more efficiently, especially when the page table itself is large.
Instead of keeping the entire page table in main memory, the OS divides it into smaller pages and only loads
parts of it as needed.

2.WHAT ARE THE DIFFERENT TYPES OF FILE DIRECTORY STRUCTURE


Single-Level Directory

➢ All files are stored in one directory, making it simple to manage but difficult to organize as the number of
files increases.
Two-Level Directory

➢ Each user has a separate directory. Users can store their files in their directories, improving organization
but still somewhat limited in structure.
Tree-Structured Directory

➢ Files are organized hierarchically in a tree-like structure. This allows for subdirectories, making it easier
to manage large numbers of files and providing a more organized way to access files.
Acyclic Graph Directory

➢ Allows directories and files to have multiple parent directories, enabling shared access. This can be
efficient for storing common files in multiple locations without duplication.
General Graph Directory

➢ Similar to an acyclic graph but allows cycles (loops). This structure provides maximum flexibility but can
complicate file management and navigation.

3. DIFF BW LOGICAL ADDRESS SPACE AND PHYSICAL ADDRESS SPACE


4.DEFINE BEST FIT, FIRST FIT, WORST FIT
Best-Fit:

➢ Concept: Searches for the memory block whose size is closest to the requested size, minimizing wasted
space.
➢ Pros: Efficiently utilizes memory by minimizing fragmentation.
➢ Cons: Can be computationally expensive, especially for large memory spaces.
First-Fit:

➢ Concept: Searches for the first memory block whose size is greater than or equal to the requested size.
➢ Pros: Simple and efficient to implement, often suitable for smaller memory spaces.
➢ Cons: Can lead to fragmentation, especially if many small blocks are allocated.
Worst-Fit:

➢ Concept: Searches for the largest available memory block, regardless of its size compared to the
requested size.
➢ Pros: Can potentially reduce fragmentation in some cases, especially if large blocks are frequently
allocated.
➢ Cons: Often leads to more fragmentation than best-fit or first-fit, especially for smaller memory spaces.

5.DEFINE FILE AND GIVE 4 ATTRIBUTES AND OPERATIONS


A file in an operating system is a named collection of data organized in a specific format. It serves as a fundamental unit of storage for information, providing a structured way to store, retrieve, and manage data.

➢ Name – only information kept in human-readable form

➢ Identifier – unique tag (number) identifies file within file system

➢ Type – needed for systems that support different types

➢ Location – pointer to file location on device

A file is an abstract data type. Common operations include:

➢ Create

➢ Write – at write pointer location

➢ Read – at read pointer location

➢ Delete
UNIT 5
1.WHY ROTATIONAL LATENCY IS NOT CONSIDERED IN DISK SCHEDULING
Rotational latency is the time it takes for the desired sector of a disk to rotate under the read/write head after
the head has reached the correct track. It's the delay caused by the spinning motion of the disk that determines
when the data can be accessed.

Rotational latency is often not considered in disk scheduling for these simple reasons:

1. Less Impact: The time it takes for the disk to spin to the right position (rotational latency) is usually less
important than the time it takes for the read/write head to move to the correct track (seek time).

2. Predictable Timing: Once the head is over the right track, the time it takes for the correct data to come
around is fairly constant and can be averaged out.

3. High-Level Focus: Disk scheduling usually looks at broader access patterns rather than focusing on tiny
details like the spin of the disk, which makes it easier to manage.

4. Caching Benefits: Modern systems use caching to reduce the number of times they need to access the
disk, further reducing the effect of rotational latency.

2.LIST THE VARIOUS TYPES OF SHELLS IN LINUX


A shell is a program that lets you interact with the operating system by typing commands. It acts as a command-line interface, allowing you to run programs, manage files, and automate tasks through scripts. Different types of shells have different features and ways of working.

➢ Bourne Shell (sh)


➢ Bourne Again Shell (bash)
➢ C Shell (csh)
➢ Korn Shell (ksh)
➢ Z Shell (zsh)
➢ Dash (Debian Almquist Shell)
➢ Fish (Friendly Interactive Shell)
3.DIFFERENTIATE KERNEL AND SCRIPTING
12M
UNIT 1
1.DISCUSS ABOUT THE STRUCTURE, SYSTEM COMPONENTS AND SERVICES OF AN OPERATING
SYSTEM.
DEFINITION

Operating System is a system software that acts as an intermediary between a user and Computer Hardware to
enable convenient usage of the system and efficient utilization of resources.

The commonly required resources are input/output devices, memory, file storage space, CPU etc. Also, an
operating system is a program designed to run other programs on a computer.

OS is considered as the backbone of a computer, managing both software and hardware resources. They are
responsible for everything from the control and allocation of memory to recognizing input from external devices
and transmitting output to computer displays.

GOALS
1. To execute user programs
2. Make solving user problems easier.
3. Make the computer system convenient to use.
4. Use the computer hardware in an efficient manner.

CLASSIFICATION

➢ Multi-user OS
➢ Multiprocessing OS
➢ Multitasking OS
➢ Multithreading OS
➢ Real time OS
OPERATING SYSTEM STRUCTURES

An OS provides the environment within which programs are executed. Internally, Operating Systems vary greatly
in their makeup, being organized along many different lines. The design of a new OS is a major task. The goals of
the system must be well defined before the design begins. The type of system desired is the basis for choices
among various algorithms and strategies. An OS may be viewed from several vantage points:

➢ By examining the services that it provides.


➢ By looking at the interface that it makes available to users and programmers.
➢ By disassembling the system into its components and their interconnections.

1.SIMPLE STRUCTURE

Many commercial systems do not have well-defined structures. Frequently, such operating systems started as small, simple, and limited systems and then grew beyond their original scope. MS-DOS is an example of such a system: it was originally designed and implemented by a few people who had no idea that it would become so popular, and it was written to provide the most functionality in the least space. It is divided into modules.

2.LAYERED STRUCTURE

In an Operating System (OS), the layered structure organizes the system into distinct levels, each responsible for specific tasks. Each layer communicates only with the layer directly beneath it, which promotes modularity: changes to one layer don't affect the others.

Layers in an OS:

➢ Layer 0: Hardware – The physical components (CPU, RAM, disk).


➢ Layer 1: Hardware Abstraction Layer (HAL) – Provides an interface between hardware and higher layers,
making the OS hardware-independent.
➢ Layer 2: Kernel – Manages resources (CPU, memory) and system calls.
➢ Layer 3: System Programs – Handles tasks like file management, process control, and networking.
➢ Layer 4: User Interface – The interface for user interaction (CLI or GUI).
3.MICROKERNEL

A microkernel is a type of operating system design that keeps the core (kernel) small and only handles essential
tasks, like process management and memory handling. Everything else, such as device drivers and file systems,
runs outside the kernel in user space. Eg: Tru64 UNIX

4.MODULAR
In a modular operating system (OS) design, the system is broken down into independent, interchangeable
components or modules. Each module is responsible for a specific task and interacts with other modules through
well-defined interfaces. You can replace or extend individual modules without changing the whole system. This
makes it easier to update the system or add new features. Eg: Solaris OS
SYSTEM COMPONENTS

System components in an operating system (OS) refer to the various parts that work together to manage
hardware and software resources, enabling users and applications to interact with the system. These
components ensure that the system is efficient, stable, and secure.

Process Management:

• Manages processes (programs in execution).

• Handles process creation, deletion, synchronization, communication, and deadlock management.

Main Memory Management:

• Manages the computer's RAM (memory).

• Allocates and deallocates memory, and decides which processes to load into memory.

File Management:

• Manages files and directories.

• Handles creation, deletion, and access of files, as well as mapping files to storage devices and backups.

I/O System Management:

• Manages input/output devices (e.g., keyboard, disk).

• Provides an interface for device drivers and handles buffering and caching.

Secondary-Storage Management:

• Manages disks and other non-volatile storage.

• Handles free space, allocation, and disk scheduling.

Networking:

• Manages communication between devices in a distributed system.

• Handles data transfer, network protocols, and ensures security and routing.

Protection System:

• Protects system resources (memory, files, processes) from unauthorized access.

• Enforces security controls and access permissions.

Command Interpreter System:

• Allows user interaction with the OS by interpreting commands.

• Translates commands into system actions like process management, I/O, memory, file handling, and
networking.
OS SERVICES

An operating system (OS) provides a set of services that help manage resources and make the execution of
programs easier for both the user and the programmer. These services may vary between different OSes, but
here are the most common ones:

Program Execution:

➢ The OS loads a program into memory and runs it.


➢ It ensures the program ends properly, either by completing its tasks or by handling errors if the program
terminates abnormally.

I/O Operations:

➢ The OS handles input and output (I/O) operations for the running program.
➢ Direct hardware access is usually restricted, so the OS provides mechanisms to perform I/O operations
like reading/writing data to files or devices.

File System Manipulation:

➢ The OS allows programs to read, write, create, delete, and manage files and directories.
➢ It also handles file permissions and ensures that files are accessed efficiently.

Communication:

➢ Processes need to exchange information. Communication can be:


➢ Within the same computer (via shared memory or message passing).
➢ Across different computers over a network (using communication protocols).

Error Detection:

➢ The OS constantly monitors for errors in:


➢ Hardware (e.g., memory errors, I/O device failures).
➢ Software (e.g., program errors like arithmetic overflow or accessing illegal memory).
➢ The OS detects these errors and takes appropriate actions to maintain system stability and reliability.

Resource Allocation:

➢ The OS allocates system resources (e.g., CPU time, memory, I/O devices) among multiple users or
processes.
➢ It ensures that each process gets the resources it needs without conflicts, especially in multi-user or
multi-tasking environments.

Accounting:

➢ The OS tracks resource usage by different users or processes.


➢ This data can be used for billing, usage statistics, or system optimization.

Protection & Security:

➢ Protection ensures that processes cannot interfere with each other’s memory or resources.
➢ Security protects the system from unauthorized access, often through mechanisms like user
authentication, encryption, and firewalls.
➢ The OS manages both protection (e.g., access control) and security to prevent data breaches.
2.OUTLINE ABOUT DEVICE MANAGEMENT IN OS 6M
Device Management is the part of the operating system responsible for managing hardware devices. It ensures
that devices are properly controlled, monitored, and utilized by the system and applications.

Functions:

Device Driver Management:


➢ Loads and maintains device drivers that allow the OS to communicate with hardware.

Device Allocation:
➢ Assigns devices to processes when needed and manages multiple requests for the same device.

I/O Scheduling:
➢ Organizes and prioritizes input/output requests to ensure efficient device access and minimize waiting
time.

Interrupt Handling:
➢ Responds to interrupts from devices, signaling the OS that a device needs attention or has completed a
task.

Error Handling:
➢ Detects and manages errors related to devices, ensuring that issues are addressed without affecting the
overall system.

Device Status Monitoring:


➢ Monitors the status of devices to determine availability and performance, helping to optimize resource
usage.

Approaches:

Direct I/O Approaches


➢ Uses special ports for devices. The CPU has unique commands to send and receive data to/from these
ports.
➢ Devices notify the CPU when they need attention. The CPU responds to these notifications to
communicate with the device.

Memory-Mapped I/O Approaches


➢ Devices share the same address space as memory, allowing the CPU to use regular memory commands
to talk to them.
➢ Memory-mapped I/O can let devices and applications share parts of memory for easier communication
and data transfer.

Direct Memory Access (DMA) Approaches


➢ A special piece of hardware that moves data between devices and memory without needing the CPU all
the time.
➢ The DMA controller quickly transfers a bunch of data at once before letting the CPU take control again.
3.EXPLAIN ABOUT SYSTEM CALLS AND THREADS 10M
SYSTEM CALLS
System calls provide the interface between a process and the operating system. These calls are generally
available as assembly-language instructions. Some systems also allow to make system calls from a high-level
language, such as C, C++.

As an example of how system calls are used, consider writing a simple program to read data from one file and to
copy them to another file. The first input that the program will need is the names of the two files:

➢ The input file


➢ The output file
These names can be specified in many ways, depending on the OS design. Once the two file names are obtained,
the program must open the input file and create the output file. Each of these operations requires another
system call and may encounter possible error conditions.

When the program tries to open the file, it may find that no file of that name exists or that the file is protected
against access. In these cases, the program should print a message on the console and then terminate
abnormally.

➢ If the input file exists, then we must create a new output file.
➢ We may find an output file with the same name.
➢ This situation may cause the program to abort (a system call), or we may delete the existing file (another
system call).

Now that both the files are setup, we enter a loop that reads from the input file (a system call) and writes to the
output file (another system call). Each read and write must return status information regarding various possible
error conditions.

On input, the program may find that the end of file has been reached, or that a hardware failure occurred in the
read (such as a parity error).

On output, Various errors may occur, depending on the output device (such as no more disk space, physical end
of tape, printer out of paper).

Finally, after the entire file is copied, the program may close both files (another system call), writes a message to
the console (more system calls), and finally terminates normally (the final system call).
THREADS

A thread is a basic unit of CPU utilization. A thread is sometimes called a lightweight process, whereas a process is a heavyweight process. A thread comprises:
➢ A thread ID
➢ A program counter
➢ A register set
➢ A stack
Traditionally a process contained only a single thread of control as it ran; many modern operating systems have extended the process concept to allow a process to have multiple threads of execution and thus to perform more than one task at a time.

BENEFITS
➢ Responsiveness
➢ Resource sharing
➢ Economy
➢ Utilization of multiprocessor architectures

USER THREADS:
➢ User-level threads are created, scheduled, and managed by a thread library within user space, without kernel involvement. This makes their management generally faster and more efficient.
➢ The operating system kernel does not recognize or manage user-level threads directly; all thread operations are handled entirely in user space by the thread library.
➢ Examples: POSIX Pthreads, Mach C-threads, Solaris 2 UI-threads

KERNEL THREADS:
➢ Kernel threads are created, scheduled, and managed directly by the operating system kernel. This involves more overhead compared to user threads.
➢ In a multiprocessor environment, the kernel can distribute threads across different processors, enabling true parallel execution and better utilization of multiple cores.
➢ Examples: Windows NT, Windows 2000, Tru64 UNIX (formerly Digital UNIX)
UNIT 2
1.ILLUSTRATE IN DETAIL ABOUT PROCESS, PCB, OPERATIONS ON PROCESS AND COOPERATING
PROCESS
PROCESS

➢ A process is mainly a program in execution where the execution of a process must progress in a
sequential order or based on some priority or algorithms.
➢ In other words, it is an entity that represents the fundamental working that has been assigned to a
system.
➢ When a program gets loaded into the memory, it is said to be a process. A process in memory can be divided into 4 sections.

➢ STACK - The process Stack contains the temporary data such as method/function parameters, return-
address and local variables.
➢ HEAP - This is dynamically allocated memory to a process during its run time.
➢ TEXT - This includes the compiled program code; the current activity is represented by the value of the Program Counter and the contents of the processor's registers.
➢ DATA - This section contains the global and static variables.

PROCESS CONTROL BLOCK (PCB)


➢ A Process Control Block is a data structure maintained by the Operating System for every process.
➢ The PCB is identified by an integer process ID (PID).
➢ A PCB keeps all the information needed to keep track of a process as listed below,
PROCESS PRIVILEGES

This is required to allow/disallow access to system resources.

PROCESS ID

Unique identification for each of the process in the operating system.

POINTER

A pointer to parent process.

PROGRAM COUNTER

Program Counter is a pointer to the address of the next instruction to be executed for this process.

CPU REGISTERS

Various CPU registers whose contents must be saved and restored so the process can resume correctly in the running state.

CPU SCHEDULING INFORMATION

Process priority and other scheduling information which is required to schedule the process.

ACCOUNTING INFORMATION

This includes the amount of CPU used for process execution, time limits, execution ID etc.

IO STATUS INFORMATION

This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System and may contain different information in
different operating systems. (shown in above diagram)
OPERATIONS ON PROCESS
Creation:

➢ When a new process is needed, the operating system creates it.


➢ This involves allocating memory, initializing process attributes (like its ID and state), and adding it to the
process table, which keeps track of all processes.
Execution:

➢ The process is given CPU time to perform its tasks.


➢ The operating system uses scheduling algorithms to decide which process runs next based on priority
and state (ready, waiting, etc.).
Waiting:

➢ A process may need to wait for a resource (like I/O operations) before it can continue.
➢ The process state changes to "waiting," and it is temporarily removed from the CPU until the resource is
available.
Termination:

➢ Once a process completes its task, it is terminated.


➢ The operating system cleans up by releasing allocated resources, removing it from the process table, and
updating the system status.
Suspension and Resumption:

➢ A running process can be temporarily suspended (paused) and later resumed.


➢ The operating system saves the process state and moves it to a suspended state, allowing other
processes to run. When resumed, it returns to the state it was in before suspension.

COOPERATING PROCESS
Cooperating processes are processes that work together to complete a task. They often share data and resources.

Inter-Process Communication (IPC):

➢ Processes need to communicate with each other to coordinate actions.


➢ IPC mechanisms like message passing or shared memory allow processes to send and receive messages
or access shared data.
Synchronization:

➢ When multiple processes access shared resources, synchronization ensures that they do not interfere
with each other.
➢ Techniques like semaphores, mutexes, and locks help manage access to shared resources, preventing
data corruption or inconsistency.
Deadlock Prevention:

➢ In a system where processes wait indefinitely for resources, a deadlock can occur.
➢ Strategies like resource ordering, hold and wait conditions, or using a timeout can prevent deadlocks,
ensuring that cooperating processes continue to make progress.
Process Sharing:

➢ Processes can share data or resources to complete a task.


➢ Shared memory allows processes to access common data, which requires careful synchronization to
avoid conflicts.
2.CONSIDER THE FOLLOWING FIVE PROCESSES, WITH THE LENGTH OF THE CPU BURST TIME GIVEN IN
MILLISECONDS.
Process Burst time
P1 10
P2 29
P3 3
P4 7
P5 12
Consider the FCFS, Non-Preemptive Shortest Job First (SJF), Round Robin (RR) (quantum=10ms) scheduling
algorithms. Illustrate the scheduling using Gantt chart and find the Average waiting time.

FCFS
➢ ARRIVAL TIME: Time taken for the arrival of each process in the CPU Scheduling Queue.
➢ COMPLETION TIME: Time taken for the execution to complete, starting from arrival time.
➢ TURN AROUND TIME: Time taken to complete after arrival.
➢ WAITING TIME: Total time the process has to wait before it begins execution.

First-Come, First-Served (FCFS) is a non-pre-emptive scheduling algorithm where the process that arrives first in
the ready queue is the one that gets executed first. In other words, the CPU processes requests in the order they
arrive, much like a queue at a bank or a ticket counter.
NON-PRE-EMPTIVE SJF

➢ This is also known as shortest job next, or SJN.
➢ In this form it is a non-pre-emptive scheduling algorithm (a pre-emptive variant, Shortest Remaining Time First, also exists).
➢ It is the best approach to minimize average waiting time.
➢ Easy to implement in batch systems where the required CPU time is known in advance.
➢ Impossible to implement in interactive systems where the required CPU time is not known.
➢ The processor should know in advance how much time the process will take.

ROUND-ROBIN

➢ Round Robin is the pre-emptive process scheduling algorithm.


➢ Each process is given a fixed time slice to execute, called a quantum (here, 10 ms).
➢ Once a process has executed for its quantum, it is pre-empted and the next process
in the ready queue executes for its quantum.
➢ Context switching is used to save states of pre-empted processes.
UNIT 3

1.EXPLAIN THE SOLUTION FOR CRITICAL SECTION PROBLEM WITH SUITABLE ALG’S
The Critical Section Problem refers to a situation in concurrent programming where multiple processes (or
threads) need to access a shared resource (such as a variable, file, or hardware device) at the same time. The
problem is to design a protocol that lets the processes cooperate without interfering with each other, and any
solution must satisfy three requirements:

Mutual Exclusion -If process Pi is executing in its critical section, then no other processes can be executing in their
critical sections

Progress -If no process is executing in its critical section and there exist some processes that wish to enter their
critical section, then the selection of the processes that will enter the critical section next cannot be postponed
indefinitely

Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their
critical sections after a process has made a request to enter its critical section and before that request is granted

➢ Assume that each process executes at a nonzero speed


➢ No assumption concerning relative speed of the n processes
1.PETERSONS SW SOLUTION
Peterson's solution is a classic software algorithm for mutual exclusion, which ensures that only one process can
access a shared resource at a time. It is a hardware-independent solution that uses only shared variables and busy
waiting.

SHARED VARIABLES

➢ flag[i]: Indicates if process i is interested in entering the critical section.


➢ turn: A variable to determine which process can enter the critical section next.
2.HARDWARE SOLUTION
1.TEST_AND_SET INSTRUCTION

➢ Executed atomically
➢ Returns the original value of passed parameter
➢ Set the new value of passed parameter to “TRUE”.
boolean test_and_set(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

do {
    while (test_and_set(&lock))
        ;   /* do nothing */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);

2.COMPARE_AND_SWAP INSTRUCTION

➢ Executed atomically
➢ Returns the original value of passed parameter “value”
➢ Set the variable “value” the value of the passed parameter “new_value” but only if “value”
==“expected”. That is, the swap takes place only under this condition.
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}

do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ;   /* do nothing */
    /* critical section */
    lock = 0;
    /* remainder section */
} while (true);

3.SEMAPHORE IMPLEMENTATION
Semaphores are synchronization primitives that can be used to manage access to shared resources. They can help
solve the critical section problem by ensuring mutual exclusion, allowing only one thread to enter the critical
section while others wait.
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
2.EXPLAIN DEADLOCK AVOIDANCE AND SOLVE A BANKER’S ALGORITHM PROBLEM
DEADLOCK AVOIDANCE
Deadlock avoidance is a strategy used in operating systems to prevent deadlocks from occurring. A deadlock is a
situation where two or more processes are unable to proceed because each is waiting for the other to release a
resource. Deadlock avoidance ensures that the system never enters a deadlock state.

The Banker's Algorithm is a resource allocation algorithm used in operating systems to ensure that a system is in
a safe state, preventing deadlocks. It's named after a banking analogy where a bank must allocate loans to
customers while ensuring that the bank remains solvent.

Developed by Edsger Dijkstra, this algorithm checks resource requests against the maximum needs of processes.
It determines whether granting the request will leave the system in a safe state. It uses the concepts of maximum
demand, current allocation, and available resources to make decisions.

DATA STRUCTURES FOR THE BANKER’S ALGORITHM


Let n = number of processes, and m = number of resources types.

Available: Vector of length m.

If available [j] = k, there are k instances of resource type Rj available

Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj

Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k instances of Rj

Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task

Need [i,j] = Max[i,j] – Allocation [i,j]

EXPLANATION
➢ Resource Allocation: The algorithm manages the allocation of resources to processes in a system.
➢ Safe State: A safe state is a condition where there exists a sequence of processes that can complete their
execution without causing a deadlock.
➢ Resource Matrix: The algorithm uses a matrix to represent the available resources, allocated resources,
and maximum resource needs of each process.
➢ Need Calculation: The Need matrix is calculated by subtracting the allocated resources from the
maximum resource needs.
➢ Safety Check: The algorithm iteratively checks for a process whose Need is less than or equal to the
available resources. If found, the process is marked as finished and its resources are released.
➢ Safe Sequence: If all processes can be marked as finished, a safe sequence exists, indicating that no
deadlock will occur
UNIT 4
1.EXPLAIN HOW PAGING SUPPORTS VIRTUAL MEMORY. WITH NEAT DIAGRAM EXPLAIN HOW IS
LOGICAL ADDRESS TRANSLATED INTO PHYSICAL ADDRESS?
PAGING

Paging is a memory management technique used by operating systems to manage how data is stored and
retrieved in memory. It divides each process's logical memory into fixed-size blocks called pages and physical
memory into blocks of the same size called frames, and maps pages onto frames.

VIRTUAL MEMORY

Virtual memory is a memory management technique that allows a computer to use more memory than is
physically available by using disk space as an extension of RAM.

WORKING OF PAGING WITH VIRTUAL MEMORY

Logical Address Space:

Each process has its own logical address space, which is divided into fixed-size units called pages. For example, if
a process needs 4 GB of memory, it can be divided into pages (e.g., 4 KB each).

Physical Memory Frames:

Physical memory (RAM) is divided into frames of the same size as the pages. This means that the operating
system can load pages into any available frame in RAM.

Page Table:

The operating system maintains a page table for each process. This table keeps track of which pages are currently
loaded in physical memory and where they are located.

If a page is not in physical memory (a situation known as a page fault), the operating system retrieves it from disk
storage (where it’s stored temporarily).

Loading Pages on Demand:

With virtual memory, the system only loads the pages that are currently needed into RAM. This means that not
all pages of a program need to be in memory at once. For instance, if a program uses 8 pages but only 4 are
currently needed, only those 4 pages are loaded into RAM.

Efficient Memory Usage:

By allowing processes to use more memory than what is physically available and loading pages as needed, paging
makes better use of the available RAM and helps run larger applications smoothly.

Swapping:

When physical memory is full, the operating system can swap out less-used pages to disk (known as paging out)
to make room for new pages that need to be loaded into RAM. When the swapped-out pages are needed again,
they can be loaded back into memory (known as paging in).
TRANSLATING LOGICAL ADDRESS TO PHYSICAL ADDRESS

In an operating system that uses paging, the translation of a logical address to a physical address involves several
steps.

CPU Requests Data:

➢ The CPU needs to read data and uses a logical address to request it. Think of this address as a reference
number that tells the system which data is needed.
Finding the Page Table:

➢ The logical address consists of two parts:


➢ Page Number: This tells the system which page of data is being requested.
➢ Offset: This specifies the exact location of the data within that page.
Getting the Frame Number:

➢ The page table provides the frame number, which is like a shelf number in a library where the data can
be found. This frame number points to the location in physical memory.
Calculating the Physical Address:

➢ To find the exact physical address in RAM, the system combines the frame number with the offset from
the logical address.
➢ The formula is:
➢ Physical Address = (Frame Number × Frame Size) + Offset
Reading the Data:

Now, the CPU goes to the calculated physical address in RAM and reads the data it requested.

IMPORTANCE IN AN OPERATING SYSTEM

Efficient Memory Management:


Paging helps the operating system manage memory by dividing it into fixed-size pages

Process Isolation:
Each process has its own page table, ensuring that one process cannot access or change the memory of another
process.

Virtual Memory:
The operating system can give the illusion of having more memory than is physically available by using
techniques like paging and swapping data in and out of memory.
2.EXPLAIN PAGE REPLACEMENT ALGORITHM WITH AN EXAMPLE
A page replacement algorithm is a method used by the operating system to decide which pages to remove from
physical memory (RAM) when new pages need to be loaded, especially when the memory is full. This helps
manage the limited memory resources effectively.

WHY IT’S NEEDED:

When a program is running, it may need more memory than what is physically available. When the RAM is full,
and a new page needs to be loaded, the operating system must decide which existing page to remove (or "swap
out") to make space.

Common Page Replacement Algorithms:

1.First-In, First-Out (FIFO):

➢ This algorithm removes the oldest page in memory first.


➢ It treats memory pages like a queue: the first page loaded is the first one to be removed.
➢ Simple Example: Like a line at a store, the first customer in line is the first to be served.

2.Least Recently Used (LRU):

➢ This algorithm keeps track of the pages that have been used recently.
➢ When a page needs to be replaced, it removes the page that has not been used for the longest time.
➢ Simple Example: If you have a list of books you read, you’d remove the one you haven’t opened in a long
time.

3.Most Frequently Used (MFU):

➢ This algorithm also keeps track of how often each page is accessed. When a page needs to be replaced, it
removes the page that has been used the most frequently.
➢ Simple Example: If you have a collection of books, you would remove the book that you’ve read the most
times, assuming that it might be less useful now since you’ve already accessed it so often.

4.Optimal Page Replacement:

➢ This algorithm replaces the page that will not be used for the longest time in the future.
➢ It’s considered the best strategy but is hard to implement since it requires knowledge of future requests.
➢ Simple Example: If you know which books you won’t read again for the longest time, you’d get rid of
those first.
3.EXPLAIN THE FILE SYSTEM STRUCTURE IN OS
A file in an operating system is a named collection of data organized in a specific format. It serves as the
fundamental unit of storage, providing a structured way to store, retrieve, and manage information.
The attributes of a file are:

➢ Name – only information kept in human-readable form


➢ Identifier – unique tag (number) identifies file within file system
➢ Type – needed for systems that support different types
➢ Location – pointer to file location on device
➢ Size – current file size
➢ Protection – controls who can do reading, writing, executing
➢ Time, date, and user identification – data for protection, security, and usage monitoring
➢ Information about files is kept in the directory structure, which is maintained on the disk
➢ Many variations, including extended file attributes such as file checksum

OPERATIONS
A file is an abstract data type used to store data in a structured way.

Create:

➢ This operation creates a new file in the file system.


➢ It allocates space on the disk for the file and prepares it for data storage.
Write:

➢ This operation writes data to the file at the current write pointer location.
➢ The data is added to the file where the write pointer is currently positioned.
Read:

➢ This operation reads data from the file at the current read pointer location.
➢ The data is fetched from the file where the read pointer is positioned.
Seek:

➢ This operation changes the position of the read or write pointer within the file.
➢ You can move the pointer to a specific location in the file to read or write data from that position.
Delete:

➢ This operation removes a file from the file system.


➢ The file is marked as deleted, and the space it occupied is freed up for new data.
Truncate:

➢ This operation shortens a file to a specified length.


➢ The file is cut down to the specified size, and any data beyond that point is discarded.
Open (Fi):

➢ This operation opens a file for use.


➢ The system searches the directory structure on the disk for the file (Fi) and loads its information into
memory, making it accessible for reading or writing.
Close (Fi):

➢ This operation closes a file that was previously opened.


➢ The system saves any changes made to the file back to the directory structure on the disk and frees up
resources associated with the file.
ACCESSING METHODS

Accessing methods determine how data can be read from or written to files. There are two main types of
accessing methods: Sequential Access and Direct Access.

1. SEQUENTIAL ACCESS
In sequential access, data is read or written in a specific order, one record after another. You can think of it like
reading a book page by page.

Key Operations:

➢ Read Next:
This operation reads the next piece of data (or record) in the file. You start from the beginning and go to
the end, reading each record in order.

➢ Write Next:
This operation writes data to the next available spot in the file, following the order of the existing data.
You can only add new records at the end.

➢ Reset:
This operation takes you back to the beginning of the file so you can start reading from the start again.

➢ No Read After Last Write (Rewrite):


Once you write data at the end of the file, you can’t read it back unless you reset to the beginning. This
means you can’t read any new data you just added without resetting first.

2. DIRECT ACCESS
Direct access allows you to read or write data at any position in the file without going through it in order. This is
like accessing a specific page in a book without starting from the first page.

Key Operations:

➢ Read n:
This operation reads the record located at a specific position (block number) in the file. You specify which
record you want to access directly.

➢ Write n:
This operation writes data to a specific record in the file, again specified by its position (block number).

➢ Read Next:
Similar to sequential access, this reads the next record after the one currently accessed.

➢ Write Next:
This writes data to the next available record after the current one.

➢ Rewrite n:
This allows you to overwrite (change) the data in a specific record directly.

➢ n = Relative Block Number:


In direct access, "n" refers to the position of the record in the file, often counted from the beginning. For
example, if you want to access the third record, n would be 2 (since counting starts from 0).
FILE-SYSTEM STRUCTURE

➢ A file system is a way of organizing and managing files on a computer.

1.File-System Structure

➢ Here's a simple explanation of its structure and key components:
2.File Structure

➢ A file is viewed as a logical unit for storing data, which can be read and written by users or applications.
Files hold related information, such as text documents, images, or database records. Each file is treated
as a single entity, even though it may contain many pieces of information.

3.File System Resides on Secondary Storage (Disks)

➢ The file system provides a way for users and programs to interact with storage devices. It maps logical
file operations (like opening a file) to physical operations on the disk (like finding the data on the actual
storage medium).
➢ The file system allows for easy storage, retrieval, and organization of data on disks. It ensures that users
can quickly find and access the files they need without needing to know where on the disk the files are
physically stored.

4.Disk Characteristics

➢ Disks allow data to be rewritten directly in place (modifying existing data without moving it) and enable
random access, meaning any piece of data can be accessed directly without having to read through
everything sequentially.
➢ Data is transferred to and from the disk in blocks, typically 512 bytes at a time. This is more efficient than
transferring data one byte at a time.

5.File Control Block (FCB)

➢ The FCB is a data structure that contains important information about a file, such as its name, size,
location on the disk, permissions, and timestamps. It helps the operating system manage files effectively.

6.Device Driver

➢ A device driver is software that controls the physical disk drive. It acts as a bridge between the operating
system and the hardware, translating commands from the OS into actions that the disk can perform.

7.File System Organized into Layers

➢ The file system is organized into layers, where each layer has specific functions. This modular approach
makes it easier to manage files, handle errors, and improve performance. Typically, the layers include:
1. User Interface Layer: Where users interact with files.

2. Logical Layer: Manages file organization and access.

3. Physical Layer: Deals with actual data storage on the disk.


Top Layer: Application Programs

This is where you interact with your computer. It includes things like your web browser, word
processor, games, etc. These programs make requests to the layer below.

Second Layer: Logical File System

This layer understands how files and directories are organized on your computer. It translates the
requests from application programs into a format that the layer below can understand.

Third Layer: File-Organization Module

This layer takes the requests from the logical file system and converts them into specific instructions
for accessing the data on the disk drive.

Fourth Layer: Basic File System

This layer interacts directly with the disk drive, reading and writing data as instructed by the file-
organization module.

Fifth Layer: I/O Control

This layer handles the actual communication with the hardware devices, like the disk drive,
keyboard, and monitor. It sends commands to these devices and receives data from them.

Bottom Layer: Devices

This is where the physical devices are located. The disk drive, keyboard, monitor, etc., are all part of
this layer. They carry out the instructions from the I/O control layer.
UNIT 5
1.COMPARE THE FUNCTIONALITIES OF VARIOUS DISK SCHEDULING ALGORITHMS WITH
EXAMPLE
Disk scheduling algorithms determine the order in which disk I/O requests are processed. Different
algorithms have various strategies for optimizing performance, such as minimizing wait time,
maximizing throughput, or reducing seek time.

SELECTING A DISK-SCHEDULING ALGORITHM

➢ Workload characteristics (the pattern and density of requests)
➢ Flexibility (ability to adapt to different workloads)
➢ Performance goals (throughput and seek time)
➢ Starvation prevention (fairness to all requests)
➢ Implementation complexity
EXAMPLE

We illustrate the scheduling algorithms with a request queue on a disk of 200 cylinders (0-199):

98, 183, 37, 122, 14, 124, 65, 67

with the head pointer initially at cylinder 53.

FCFS
➢ Services requests in the order they arrive in the queue.

SSTF
➢ Shortest Seek Time First: always services the pending request closest to the current head position.

SCAN
➢ The head sweeps from one end of the disk to the other, servicing requests along the way, then reverses direction (the "elevator" algorithm).

C-SCAN
➢ Like SCAN, but when the head reaches one end it immediately returns to the other end without servicing requests on the return trip, giving more uniform wait times.

C-LOOK
➢ Like C-SCAN, but the head only travels as far as the final request in each direction before returning.
2.DEVELOP LINUX FILE SYSTEM IN DETAIL.
➢ The Linux File Hierarchy Structure or the Filesystem Hierarchy Standard (FHS) defines the
directory structure and directory contents in Unix-like operating systems.
➢ It is maintained by the Linux Foundation.
➢ In the FHS, all files and directories appear under the root directory /, even if they are stored
on different physical or virtual devices.
➢ Some of these directories only exist on a particular system if certain subsystems, such as the
X Window System, are installed.
➢ Most of these directories exist in all UNIX operating systems and are generally used in much
the same way; however, the descriptions here are those used specifically for the FHS, and
are not considered authoritative for platforms other than Linux.
➢ Linux supports many different file systems, but common choices for the system disk include
the ext family (such as ext2 and ext3), XFS, JFS and ReiserFS.
➢ The ext3 or third extended file system is a journaled file system and is the default file system
for many popular Linux distributions.
➢ It is an upgrade of its predecessor ext2 file system and among other things it has added the
journaling feature.
➢ A journaling file system is a file system that logs changes to a journal (usually a circular log in
a dedicated area) before committing them to the main file system. Such file systems are less
likely to become corrupted in the event of power failure or system crash.

Directory Structure

➢ Unix uses a hierarchical file system structure, much like an upside-down tree, with root (/) at
the base of the file system and all other directories spreading from there.
➢ A Unix filesystem is a collection of files and directories that has the following properties –
• It has a root directory (/) that contains other files and directories.
• Each file or directory is uniquely identified by its name, the directory in which it
resides, and a unique identifier, typically called an inode.
➢ By convention, the root directory has an inode number of 2 and the lost+found
directory has an inode number of 3. Inode numbers 0 and 1 are not used. File inode
numbers can be seen by specifying the -i option to the ls command.
➢ It is self-contained. There are no dependencies between one filesystem and another.
