Os Imp Ques Sem
2M
UNIT 1
1.LIST THE ADVANTAGES AND DISADVANTAGES OF WRITING OPERATING SYSTEM IN HIGH LEVEL
LANGUAGE SUCH AS C
OS
Operating System is a system software that acts as an intermediary between a user and computer hardware to enable convenient usage of the system and efficient utilization of resources.
KERNEL
The kernel is the core part of an operating system that provides essential services for all other parts of the operating system and applications.
UNIT 2
1.DEFINE CPU SCHEDULING AND WHAT ARE THE 3 DIFFERENT TYPES OF SCHEDULING QUEUES
CPU scheduling is the process of deciding which task (or process) the CPU should work on at any given time.
Since the CPU can only work on one process at a time, the operating system manages a queue of tasks waiting for
CPU time. Scheduling ensures that each process gets a fair turn, improving efficiency and making sure the system
runs smoothly by switching between tasks quickly. This is especially important in systems where multiple
programs or users need to run at the same time.
Job Queue
Holds all the processes waiting to enter the system. New processes go here first.
Ready Queue
Contains processes loaded in memory and ready for CPU execution, waiting for their turn.
Device Queue
Contains processes waiting for a particular I/O device; each device has its own queue.
Solutions:
Compare-and-Swap (CAS)
This instruction compares the value at a memory location with an expected value, and if they match, it updates
the memory location to a new value. If not, the operation fails. This allows a process to check if a lock is free
(expected value) and, if so, acquire it by setting it to a new value, ensuring atomic access to shared resources.
2.DIFFERENTIATE DEADLOCK AND STARVATION AND ENLIST THE METHODS TO RECOVER FROM
DEADLOCK
DEADLOCK
A situation where two or more processes are waiting for each other to release resources, resulting in a circular dependency and preventing any process from proceeding.
CONDITIONS
➢ Mutual exclusion
➢ Hold and wait
➢ No pre-emption
➢ Circular wait
STARVATION
A situation where a process is unable to make progress indefinitely, despite the system being active and other processes making progress.
CONDITIONS
➢ Priority-based scheduling
➢ Aging
➢ Resource allocation
Process Termination
The OS terminates one or more processes involved in the deadlock, either all at once or one at a time until the cycle is broken, to free up resources.
Resource Pre-emption
The OS temporarily reclaims resources from some deadlocked processes and reallocates them to others.
The OS must decide which processes to pre-empt, which resources to reclaim, and ensure the system state can
be restored if needed, requiring careful handling.
3.DEFINE RACE CONDITION AND DEFINE CRITICAL SECTION PROBLEM
RACE CONDITION
A race condition occurs when multiple threads or processes are attempting to access a shared resource
simultaneously, and the outcome of the operation depends on the order in which the accesses occur. This can
lead to unpredictable and often incorrect results.
The critical-section problem is to design a protocol that processes can use to cooperate safely. Each process must request permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes its remainder section. Consider a system of n processes {P0, P1, …, Pn-1}, where each process has a critical-section segment of code.
➢ Process may be changing common variables, updating table, writing file, etc
➢ When one process in critical section, no other may be in its critical section
➢ Virtual Memory
➢ Process Isolation
➢ Dynamic Memory Allocation
➢ Memory-Mapped Files
Paging the page table helps manage memory more efficiently, especially when the page table itself is large.
Instead of keeping the entire page table in main memory, the OS divides it into smaller pages and only loads
parts of it as needed.
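Splitting the page table this way comes down to slicing the logical address into an outer index, an inner index, and an offset. The sizes below (a 32-bit logical address, 4 KB pages, 10-bit outer and inner indexes) are illustrative assumptions, not tied to any particular OS:

```c
#include <stdint.h>

/* Two-level paging address split (assumed layout):
 * | 10-bit outer index | 10-bit inner index | 12-bit offset | */
#define OFFSET_BITS 12
#define INNER_BITS  10

uint32_t outer_index(uint32_t addr) { return addr >> (OFFSET_BITS + INNER_BITS); }
uint32_t inner_index(uint32_t addr) { return (addr >> OFFSET_BITS) & 0x3FF; }
uint32_t page_offset(uint32_t addr) { return addr & 0xFFF; }
```

The outer index selects a page of the page table itself, so only that page of the table needs to be resident; the inner index then selects the frame number, and the offset locates the byte within the frame.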
Single-Level Directory
➢ All files are stored in one directory, making it simple to manage but difficult to organize as the number of
files increases.
Two-Level Directory
➢ Each user has a separate directory. Users can store their files in their directories, improving organization
but still somewhat limited in structure.
Tree-Structured Directory
➢ Files are organized hierarchically in a tree-like structure. This allows for subdirectories, making it easier
to manage large numbers of files and providing a more organized way to access files.
Acyclic Graph Directory
➢ Allows directories and files to have multiple parent directories, enabling shared access. This can be
efficient for storing common files in multiple locations without duplication.
General Graph Directory
➢ Similar to an acyclic graph but allows cycles (loops). This structure provides maximum flexibility but can
complicate file management and navigation.
Best-Fit:
➢ Concept: Searches for the memory block whose size is closest to the requested size, minimizing wasted
space.
➢ Pros: Efficiently utilizes memory by minimizing fragmentation.
➢ Cons: Can be computationally expensive, especially for large memory spaces.
First-Fit:
➢ Concept: Searches for the first memory block whose size is greater than or equal to the requested size.
➢ Pros: Simple and efficient to implement, often suitable for smaller memory spaces.
➢ Cons: Can lead to fragmentation, especially if many small blocks are allocated.
Worst-Fit:
➢ Concept: Searches for the largest available memory block, regardless of its size compared to the
requested size.
➢ Pros: Can potentially reduce fragmentation in some cases, especially if large blocks are frequently
allocated.
➢ Cons: Often leads to more fragmentation than best-fit or first-fit, especially for smaller memory spaces.
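The three placement strategies can be sketched as searches over a list of free block sizes. Representing free memory as a plain array of sizes is a simplification for illustration; real allocators typically track free blocks in linked lists or trees:

```c
#include <stddef.h>

/* First-fit: index of the first block large enough, or -1 if none fits. */
int first_fit(const int free_blocks[], size_t n, int request)
{
    for (size_t i = 0; i < n; i++)
        if (free_blocks[i] >= request)
            return (int)i;
    return -1;
}

/* Best-fit: index of the smallest block that still fits. */
int best_fit(const int free_blocks[], size_t n, int request)
{
    int best = -1;
    for (size_t i = 0; i < n; i++)
        if (free_blocks[i] >= request &&
            (best == -1 || free_blocks[i] < free_blocks[best]))
            best = (int)i;
    return best;
}

/* Worst-fit: index of the largest block that fits. */
int worst_fit(const int free_blocks[], size_t n, int request)
{
    int worst = -1;
    for (size_t i = 0; i < n; i++)
        if (free_blocks[i] >= request &&
            (worst == -1 || free_blocks[i] > free_blocks[worst]))
            worst = (int)i;
    return worst;
}
```

For a 212 KB request against free blocks {100, 500, 200, 300, 600}, first-fit picks 500 (the first that fits), best-fit picks 300 (the closest fit), and worst-fit picks 600 (the largest).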
➢ Create
➢ Delete
UNIT 5
1.WHY ROTATIONAL LATENCY IS NOT CONSIDERED IN DISK SCHEDULING
Rotational latency is the time it takes for the desired sector of a disk to rotate under the read/write head after
the head has reached the correct track. It's the delay caused by the spinning motion of the disk that determines
when the data can be accessed.
Rotational latency is often not considered in disk scheduling for these simple reasons:
1. Less Impact: The time it takes for the disk to spin to the right position (rotational latency) is usually less
important than the time it takes for the read/write head to move to the correct track (seek time).
2. Predictable Timing: Once the head is over the right track, the time it takes for the correct data to come
around is fairly constant and can be averaged out.
3. High-Level Focus: Disk scheduling usually looks at broader access patterns rather than focusing on tiny
details like the spin of the disk, which makes it easier to manage.
4. Caching Benefits: Modern systems use caching to reduce the number of times they need to access the
disk, further reducing the effect of rotational latency.
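Point 2 can be made concrete: once the head is on the right track, the expected rotational delay is on average half a revolution, so it depends only on the spindle speed. A minimal sketch:

```c
/* Average rotational latency: half a revolution.
 * latency_ms = (60,000 ms per minute / RPM) / 2 */
double avg_rotational_latency_ms(double rpm)
{
    return (60000.0 / rpm) / 2.0;
}
```

At 7200 RPM this gives about 4.17 ms, a constant the scheduler can average out rather than optimize per request.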
Operating System is a system software that acts as an intermediary between a user and Computer Hardware to
enable convenient usage of the system and efficient utilization of resources.
The commonly required resources are input/output devices, memory, file storage space, CPU etc. Also, an
operating system is a program designed to run other programs on a computer.
OS is considered as the backbone of a computer, managing both software and hardware resources. They are
responsible for everything from the control and allocation of memory to recognizing input from external devices
and transmitting output to computer displays.
GOALS
1. To execute user programs
2. Make solving user problems easier.
3. Make the computer system convenient to use.
4. Use the computer hardware in an efficient manner.
CLASSIFICATION
➢ Multi-user OS
➢ Multiprocessing OS
➢ Multitasking OS
➢ Multithreading OS
➢ Real time OS
OPERATING SYSTEM STRUCTURES
An OS provides the environment within which programs are executed. Internally, Operating Systems vary greatly
in their makeup, being organized along many different lines. The design of a new OS is a major task. The goals of
the system must be well defined before the design begins. The type of system desired is the basis for choices
among various algorithms and strategies. An OS may be viewed from several vantage points:
1.SIMPLE STRUCTURE
Many commercial systems do not have well-defined structures. Frequently, such operating systems started as
small, simple, and limited systems and then grew beyond their original scope. MS-DOS is an example of such a
system. It was originally designed and implemented by a few people who had no idea that it would become so
popular. It was written to provide the most functionality in the least space, and it is divided into modules.
2.LAYERED STRUCTURE
In an Operating System (OS), the layered structure organizes the system into distinct levels, each responsible for
specific tasks. Each layer communicates only with the layer directly beneath it, promoting modularity: changes
to one layer do not affect the others.
Layers in an OS:
3.MICROKERNEL
A microkernel is a type of operating system design that keeps the core (kernel) small and only handles essential
tasks, like process management and memory handling. Everything else, such as device drivers and file systems,
runs outside the kernel in user space. Eg: Tru64 UNIX
4.MODULAR
In a modular operating system (OS) design, the system is broken down into independent, interchangeable
components or modules. Each module is responsible for a specific task and interacts with other modules through
well-defined interfaces. You can replace or extend individual modules without changing the whole system. This
makes it easier to update the system or add new features. Eg: Solaris OS
SYSTEM COMPONENTS
System components in an operating system (OS) refer to the various parts that work together to manage
hardware and software resources, enabling users and applications to interact with the system. These
components ensure that the system is efficient, stable, and secure.
Process Management:
Main-Memory Management:
• Allocates and deallocates memory, and decides which processes to load into memory.
File Management:
• Handles creation, deletion, and access of files, as well as mapping files to storage devices and backups.
I/O-System Management:
• Provides an interface for device drivers and handles buffering and caching.
Secondary-Storage Management:
Networking:
• Handles data transfer, network protocols, and ensures security and routing.
Protection System:
Command-Interpreter System:
• Translates commands into system actions like process management, I/O, memory, file handling, and
networking.
OS SERVICES
An operating system (OS) provides a set of services that help manage resources and make the execution of
programs easier for both the user and the programmer. These services may vary between different OSes, but
here are the most common ones:
Program Execution:
I/O Operations:
➢ The OS handles input and output (I/O) operations for the running program.
➢ Direct hardware access is usually restricted, so the OS provides mechanisms to perform I/O operations
like reading/writing data to files or devices.
File-System Manipulation:
➢ The OS allows programs to read, write, create, delete, and manage files and directories.
➢ It also handles file permissions and ensures that files are accessed efficiently.
Communication:
Error Detection:
Resource Allocation:
➢ The OS allocates system resources (e.g., CPU time, memory, I/O devices) among multiple users or
processes.
➢ It ensures that each process gets the resources it needs without conflicts, especially in multi-user or
multi-tasking environments.
Accounting:
Protection and Security:
➢ Protection ensures that processes cannot interfere with each other’s memory or resources.
➢ Security protects the system from unauthorized access, often through mechanisms like user
authentication, encryption, and firewalls.
➢ The OS manages both protection (e.g., access control) and security to prevent data breaches.
2.OUTLINE ABOUT DEVICE MANAGEMENT IN OS 6M
Device Management is the part of the operating system responsible for managing hardware devices. It ensures
that devices are properly controlled, monitored, and utilized by the system and applications.
Functions:
Device Allocation:
➢ Assigns devices to processes when needed and manages multiple requests for the same device.
I/O Scheduling:
➢ Organizes and prioritizes input/output requests to ensure efficient device access and minimize waiting
time.
Interrupt Handling:
➢ Responds to interrupts from devices, signaling the OS that a device needs attention or has completed a
task.
Error Handling:
➢ Detects and manages errors related to devices, ensuring that issues are addressed without affecting the
overall system.
Approaches:
As an example of how system calls are used, consider writing a simple program to read data from one file and to
copy them to another file. The first input that the program will need is the names of the two files:
When the program tries to open the file, it may find that no file of that name exists or that the file is protected
against access. In these cases, the program should print a message on the console and then terminate
abnormally.
➢ If the input file exists, then we must create a new output file.
➢ We may find an output file with the same name.
➢ This situation may cause the program to abort (a system call), or we may delete the existing file (another
system call).
Now that both files are set up, we enter a loop that reads from the input file (a system call) and writes to the
output file (another system call). Each read and write must return status information regarding various possible
error conditions.
On input, the program may find that the end of file has been reached, or that a hardware failure occurred in the
read (such as a parity error).
On output, Various errors may occur, depending on the output device (such as no more disk space, physical end
of tape, printer out of paper).
Finally, after the entire file is copied, the program closes both files (another system call), writes a message to
the console (more system calls), and terminates normally (the final system call).
THREADS
A thread is a basic unit of CPU utilization. A thread is sometimes called a lightweight process, whereas a
traditional process is a heavyweight process. A thread comprises:
➢ A thread ID
➢ A program counter
➢ A register set
➢ A stack.
Traditionally, a process contained only a single thread of control as it ran. Many modern operating
systems have extended the process concept to allow a process to have multiple threads of execution and
thus to perform more than one task at a time.
BENEFITS
➢ Responsiveness
➢ Resource sharing
➢ Economy
➢ Utilization of multiprocessor architectures
USER THREADS
User-level threads are created, scheduled, and managed by a thread library within user space, without kernel involvement. This makes their management generally faster and more efficient.
KERNEL THREADS
Kernel threads are created, scheduled, and managed directly by the operating system kernel. This involves more overhead compared to user threads.
➢ A process is mainly a program in execution where the execution of a process must progress in a
sequential order or based on some priority or algorithms.
➢ In other words, it is an entity that represents the fundamental working that has been assigned to a
system.
➢ When a program gets loaded into the memory, it is said to be a process. A process in memory can be
categorized into 4 sections.
➢ STACK - The process Stack contains the temporary data such as method/function parameters, return-
address and local variables.
➢ HEAP - This is dynamically allocated memory to a process during its run time.
➢ TEXT - This section contains the compiled program code, together with the current activity represented
by the value of the Program Counter and the contents of the processor's registers.
➢ DATA - This section contains the global and static variables.
PROCESS ID
POINTER
PROGRAM COUNTER
Program Counter is a pointer to the address of the next instruction to be executed for this process.
CPU REGISTERS
Various CPU registers in which the process context must be saved when the process leaves the running state.
SCHEDULING INFORMATION
Process priority and other scheduling information which is required to schedule the process.
ACCOUNTING INFORMATION
This includes the amount of CPU used for process execution, time limits, execution ID etc.
IO STATUS INFORMATION
The architecture of a PCB is completely dependent on Operating System and may contain different information in
different operating systems. (shown in above diagram)
OPERATIONS ON PROCESS
Creation:
➢ A new process is created and admitted to the system, and its process control block is set up.
Waiting:
➢ A process may need to wait for a resource (like I/O operations) before it can continue.
➢ The process state changes to "waiting," and it is temporarily removed from the CPU until the resource is
available.
Termination:
COOPERATING PROCESS
Cooperating processes are processes that work together to complete a task. They often share data and resources.
Synchronization:
➢ When multiple processes access shared resources, synchronization ensures that they do not interfere
with each other.
➢ Techniques like semaphores, mutexes, and locks help manage access to shared resources, preventing
data corruption or inconsistency.
Deadlock Prevention:
➢ In a system where processes wait indefinitely for resources, a deadlock can occur.
➢ Strategies like resource ordering, hold and wait conditions, or using a timeout can prevent deadlocks,
ensuring that cooperating processes continue to make progress.
Process Sharing:
FCFS
➢ ARRIVAL TIME: Time taken for the arrival of each process in the CPU Scheduling Queue.
➢ COMPLETION TIME: Time taken for the execution to complete, starting from arrival time.
➢ TURN AROUND TIME: Total time from arrival to completion (Completion Time − Arrival Time).
➢ WAITING TIME: Total time the process waits in the ready queue before it begins execution (Turnaround Time − Burst Time).
First-Come, First-Served (FCFS) is a non-pre-emptive scheduling algorithm where the process that arrives first in
the ready queue is the one that gets executed first. In other words, the CPU processes requests in the order they
arrive, much like a queue at a bank or a ticket counter.
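Using the timing definitions above, FCFS bookkeeping can be sketched as follows. This is a minimal sketch that assumes the process array is already sorted by arrival time:

```c
/* One process's timing data for FCFS scheduling. */
typedef struct {
    int arrival, burst;
    int completion, turnaround, waiting;
} Proc;

/* Fill in completion, turnaround and waiting times, FCFS order. */
void fcfs(Proc p[], int n)
{
    int clock = 0;
    for (int i = 0; i < n; i++) {
        if (clock < p[i].arrival)     /* CPU idles until the process arrives */
            clock = p[i].arrival;
        clock += p[i].burst;          /* runs to completion (non-pre-emptive) */
        p[i].completion = clock;
        p[i].turnaround = p[i].completion - p[i].arrival;
        p[i].waiting    = p[i].turnaround - p[i].burst;
    }
}
```

For processes arriving at times 0, 1, 2 with bursts 4, 3, 1, this yields waiting times 0, 3 and 5: the short third job waits behind both earlier arrivals, which is FCFS's well-known weakness (the convoy effect).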
NON-PRE-EMPTIVE SJF
ROUND-ROBIN
1.EXPLAIN THE SOLUTION FOR CRITICAL SECTION PROBLEM WITH SUITABLE ALGORITHMS
The Critical Section Problem refers to a situation in concurrent programming where multiple processes (or
threads) need to access a shared resource (like a variable, file, or hardware) at the same time. The problem arises
when processes must work together without interfering with each other.
Mutual Exclusion -If process Pi is executing in its critical section, then no other processes can be executing in their
critical sections
Progress -If no process is executing in its critical section and there exist some processes that wish to enter their
critical section, then the selection of the processes that will enter the critical section next cannot be postponed
indefinitely
Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their
critical sections after a process has made a request to enter its critical section and before that request is granted
SHARED VARIABLES
1.TEST_AND_SET INSTRUCTION
➢ Executed atomically
➢ Returns the original value of passed parameter
➢ Set the new value of passed parameter to “TRUE”.
boolean test_and_set(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

do {
    while (test_and_set(&lock))
        ; /* do nothing */
    /* critical section */
    lock = FALSE;
    /* remainder section */
} while (TRUE);
2.COMPARE_AND_SWAP INSTRUCTION
➢ Executed atomically
➢ Returns the original value of passed parameter “value”
➢ Set the variable “value” the value of the passed parameter “new_value” but only if “value”
==“expected”. That is, the swap takes place only under this condition.
int compare_and_swap(int *value, int expected, int new_value)
{
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}

do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ; /* do nothing */
    /* critical section */
    lock = 0;
    /* remainder section */
} while (true);
3.SEMAPHORE IMPLEMENTATION
Semaphores are synchronization primitives that can be used to manage access to shared resources. They can help
solve the critical section problem by ensuring mutual exclusion, allowing only one thread to enter the critical
section while others wait.
wait(semaphore *S)
{
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S)
{
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
2.EXPLAIN DEADLOCK AVOIDANCE AND SOLVE A BANKER’S ALGORITHM PROBLEM
DEADLOCK AVOIDANCE
Deadlock avoidance is a strategy used in operating systems to prevent deadlocks from occurring. A deadlock is a
situation where two or more processes are unable to proceed because each is waiting for the other to release a
resource. Deadlock avoidance ensures that the system never enters a deadlock state.
The Banker's Algorithm is a resource allocation algorithm used in operating systems to ensure that a system is in
a safe state, preventing deadlocks. It's named after a banking analogy where a bank must allocate loans to
customers while ensuring that the bank remains solvent.
Developed by Edsger Dijkstra, this algorithm checks resource requests against the maximum needs of processes.
It determines whether granting the request will leave the system in a safe state. It uses the concepts of maximum
demand, current allocation, and available resources to make decisions.
Available: Vector of length m. If Available[j] = k, then k instances of resource type Rj are available
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task (Need[i,j] = Max[i,j] − Allocation[i,j])
EXPLANATION
➢ Resource Allocation: The algorithm manages the allocation of resources to processes in a system.
➢ Safe State: A safe state is a condition where there exists a sequence of processes that can complete their
execution without causing a deadlock.
➢ Resource Matrix: The algorithm uses a matrix to represent the available resources, allocated resources,
and maximum resource needs of each process.
➢ Need Calculation: The Need matrix is calculated by subtracting the allocated resources from the
maximum resource needs.
➢ Safety Check: The algorithm iteratively checks for a process whose Need is less than or equal to the
available resources. If found, the process is marked as finished and its resources are released.
➢ Safe Sequence: If all processes can be marked as finished, a safe sequence exists, indicating that no
deadlock will occur
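The safety check described above can be sketched in C. The process count N, resource count M, and the matrices in the test below are illustrative assumptions (they follow the common textbook example with 5 processes and 3 resource types):

```c
#include <stdbool.h>

#define N 5  /* number of processes */
#define M 3  /* number of resource types */

/* Banker's algorithm safety check: true if a safe sequence exists. */
bool is_safe(int available[M], int allocation[N][M], int need[N][M])
{
    int work[M];
    bool finish[N] = { false };
    for (int j = 0; j < M; j++)
        work[j] = available[j];      /* Work = Available */

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;        /* can Pi's remaining need be met? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {              /* Pi can finish; release its resources */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed)
            return false;            /* no process can proceed: unsafe */
    }
    return true;                     /* all processes finished: safe */
}
```

Granting a request is then a matter of tentatively updating Available, Allocation and Need, running this check, and rolling back if the resulting state is unsafe.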
UNIT 4
1.EXPLAIN HOW PAGING SUPPORTS VIRTUAL MEMORY. WITH NEAT DIAGRAM EXPLAIN HOW IS
LOGICAL ADDRESS TRANSLATED INTO PHYSICAL ADDRESS?
PAGING
Paging is a memory management technique used by operating systems to manage how data is stored and
retrieved in memory. It divides a process's logical memory into fixed-size blocks called pages and physical
memory into blocks of the same size called frames, and maps pages to frames.
VIRTUAL MEMORY
Virtual memory is a memory management technique that allows a computer to use more memory than is
physically available by using disk space as an extension of RAM.
Each process has its own logical address space, which is divided into fixed-size units called pages. For example, if
a process needs 4 GB of memory, it can be divided into pages (e.g., 4 KB each).
Physical memory (RAM) is divided into frames of the same size as the pages. This means that the operating
system can load pages into any available frame in RAM.
Page Table:
The operating system maintains a page table for each process. This table keeps track of which pages are currently
loaded in physical memory and where they are located.
If a page is not in physical memory (a situation known as a page fault), the operating system retrieves it from disk
storage (where it’s stored temporarily).
With virtual memory, the system only loads the pages that are currently needed into RAM. This means that not
all pages of a program need to be in memory at once. For instance, if a program uses 8 pages but only 4 are
currently needed, only those 4 pages are loaded into RAM.
By allowing processes to use more memory than what is physically available and loading pages as needed, paging
makes better use of the available RAM and helps run larger applications smoothly.
Swapping:
When physical memory is full, the operating system can swap out less-used pages to disk (known as paging out)
to make room for new pages that need to be loaded into RAM. When the swapped-out pages are needed again,
they can be loaded back into memory (known as paging in).
TRANSLATING LOGICAL ADDRESS TO PHYSICAL ADDRESS
In an operating system that uses paging, the translation of a logical address to a physical address involves several
steps.
➢ The CPU needs to read data and uses a logical address to request it. Think of this address as a reference
number that tells the system which data is needed.
Finding the Page Table:
➢ The logical address is split into a page number and an offset. The page number is used as an index into
the process's page table.
➢ The page table provides the frame number, which is like a shelf number in a library where the data can
be found. This frame number points to the location in physical memory.
Calculating the Physical Address:
➢ To find the exact physical address in RAM, the system combines the frame number with the offset from
the logical address.
➢ The formula is:
➢ Physical Address = (Frame Number × Frame Size) + Offset
Reading the Data:
Now, the CPU goes to the calculated physical address in RAM and reads the data it requested.
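The steps above can be sketched for a single-level page table. The 4 KB page size and the flat array mapping page numbers to frame numbers are assumptions for illustration:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed 4 KB pages */

/* Translate a logical address: page_table[page] holds the frame number.
 * Physical Address = (Frame Number * Frame Size) + Offset */
uint32_t translate(uint32_t logical, const uint32_t page_table[])
{
    uint32_t page   = logical / PAGE_SIZE;
    uint32_t offset = logical % PAGE_SIZE;
    return page_table[page] * PAGE_SIZE + offset;
}
```

For example, with a page table mapping page 1 to frame 2, logical address 4196 (page 1, offset 100) translates to physical address 2 × 4096 + 100 = 8292.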
Process Isolation:
Each process has its own page table, ensuring that one process cannot access or change the memory of another
process.
Virtual Memory:
The operating system can give the illusion of having more memory than is physically available by using
techniques like paging and swapping data in and out of memory.
2.EXPLAIN PAGE REPLACEMENT ALGORITHM WITH AN EXAMPLE
A page replacement algorithm is a method used by the operating system to decide which pages to remove from
physical memory (RAM) when new pages need to be loaded, especially when the memory is full. This helps
manage the limited memory resources effectively.
When a program is running, it may need more memory than what is physically available. When the RAM is full,
and a new page needs to be loaded, the operating system must decide which existing page to remove (or "swap
out") to make space.
LRU (Least Recently Used):
➢ This algorithm keeps track of the pages that have been used recently.
➢ When a page needs to be replaced, it removes the page that has not been used for the longest time.
➢ Simple Example: If you have a list of books you read, you’d remove the one you haven’t opened in a long
time.
MFU (Most Frequently Used):
➢ This algorithm also keeps track of how often each page is accessed. When a page needs to be replaced, it
removes the page that has been used the most frequently.
➢ Simple Example: If you have a collection of books, you would remove the book that you’ve read the most
times, assuming that it might be less useful now since you’ve already accessed it so often.
Optimal:
➢ This algorithm replaces the page that will not be used for the longest time in the future.
➢ It’s considered the best strategy but is hard to implement since it requires knowledge of future requests.
➢ Simple Example: If you know which books you won’t read again for the longest time, you’d get rid of
those first.
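The LRU policy above can be simulated to count page faults for a reference string. A minimal sketch, assuming at most 16 frames; each occupied frame remembers its page and the time of its last use:

```c
/* Count page faults for LRU replacement with `frames` slots. */
int lru_faults(const int refs[], int n, int frames)
{
    int page[16], last_used[16];       /* assumes frames <= 16 */
    int used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < used; i++)
            if (page[i] == refs[t]) { hit = i; break; }
        if (hit >= 0) {
            last_used[hit] = t;        /* hit: refresh recency, no fault */
            continue;
        }
        faults++;
        if (used < frames) {           /* a free frame is still available */
            page[used] = refs[t];
            last_used[used] = t;
            used++;
        } else {                       /* evict the least recently used page */
            int victim = 0;
            for (int i = 1; i < used; i++)
                if (last_used[i] < last_used[victim]) victim = i;
            page[victim] = refs[t];
            last_used[victim] = t;
        }
    }
    return faults;
}
```

On the reference string 1 2 3 4 1 2 5 1 2 3 4 5, LRU incurs 10 faults with 3 frames and 8 with 4 frames: more frames means fewer faults here, unlike FIFO, which exhibits Belady's anomaly on this same string.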
3.EXPLAIN THE FILE SYSTEM STRUCTURE IN OS
File in an operating system is a named collection of data organized in a specific format. It serves as a
fundamental unit of storage for information, providing a structured way to store, retrieve, and manage data.
Attributes of file systems are:
OPERATIONS
A file is an abstract data type used to store data in a structured way.
Create:
➢ This operation creates a new file and allocates space for it in the file system.
Write:
➢ This operation writes data to the file at the current write pointer location.
➢ The data is added to the file where the write pointer is currently positioned.
Read:
➢ This operation reads data from the file at the current read pointer location.
➢ The data is fetched from the file where the read pointer is positioned.
Seek:
➢ This operation changes the position of the read or write pointer within the file.
➢ You can move the pointer to a specific location in the file to read or write data from that position.
Delete:
Accessing methods determine how data can be read from or written to files. There are two main types of
accessing methods: Sequential Access and Direct Access.
1. SEQUENTIAL ACCESS
In sequential access, data is read or written in a specific order, one record after another. You can think of it like
reading a book page by page.
Key Operations:
➢ Read Next:
This operation reads the next piece of data (or record) in the file. You start from the beginning and go to
the end, reading each record in order.
➢ Write Next:
This operation writes data to the next available spot in the file, following the order of the existing data.
You can only add new records at the end.
➢ Reset:
This operation takes you back to the beginning of the file so you can start reading from the start again.
2. DIRECT ACCESS
Direct access allows you to read or write data at any position in the file without going through it in order. This is
like accessing a specific page in a book without starting from the first page.
Key Operations:
➢ Read n:
This operation reads the record located at a specific position (block number) in the file. You specify which
record you want to access directly.
➢ Write n:
This operation writes data to a specific record in the file, again specified by its position (block number).
➢ Read Next:
Similar to sequential access, this reads the next record after the one currently accessed.
➢ Write Next:
This writes data to the next available record after the current one.
➢ Rewrite n:
This allows you to overwrite (change) the data in a specific record directly.
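The "Read n" operation maps naturally onto the C standard library's fseek and fread. The fixed record size and the file layout below are assumptions for illustration:

```c
#include <stdio.h>

#define REC_SIZE 32   /* assumed fixed record size in bytes */

/* Direct access: read record n from a file of fixed-size records.
 * Returns 0 on success, -1 on failure. */
int read_record(FILE *fp, long n, char buf[REC_SIZE])
{
    if (fseek(fp, n * REC_SIZE, SEEK_SET) != 0)   /* jump straight to record n */
        return -1;
    return fread(buf, 1, REC_SIZE, fp) == REC_SIZE ? 0 : -1;
}
```

Because the seek position is computed as record number times record size, any record can be reached in constant time, with no need to scan earlier records as sequential access would.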
1.File-System Structure
➢ A file system is a way of organizing and managing files on a computer. Here’s a simple explanation of its
structure and key components:
2.File Structure
➢ A file is viewed as a logical unit for storing data, which can be read and written by users or applications.
Files hold related information, such as text documents, images, or database records. Each file is treated
as a single entity, even though it may contain many pieces of information.
➢ The file system provides a way for users and programs to interact with storage devices. It maps logical
file operations (like opening a file) to physical operations on the disk (like finding the data on the actual
storage medium).
➢ The file system allows for easy storage, retrieval, and organization of data on disks. It ensures that users
can quickly find and access the files they need without needing to know where on the disk the files are
physically stored.
4.Disk Characteristics
➢ Disks allow data to be rewritten directly in place (modifying existing data without moving it) and enable
random access, meaning any piece of data can be accessed directly without having to read through
everything sequentially.
➢ Data is transferred to and from the disk in blocks, typically 512 bytes at a time. This is more efficient than
transferring data one byte at a time.
5.File Control Block (FCB)
➢ The FCB is a data structure that contains important information about a file, such as its name, size,
location on the disk, permissions, and timestamps. It helps the operating system manage files effectively.
6.Device Driver
➢ A device driver is software that controls the physical disk drive. It acts as a bridge between the operating
system and the hardware, translating commands from the OS into actions that the disk can perform.
7.Layered File System
➢ The file system is organized into layers, where each layer has specific functions. This modular approach
makes it easier to manage files, handle errors, and improve performance. Typically, the layers include:
1. Application Programs: This is where you interact with your computer. It includes things like your web
browser, word processor, games, etc. These programs make requests to the layer below.
2. Logical File System: This layer understands how files and directories are organized on your computer. It
translates the requests from application programs into a format that the layer below can understand.
3. File-Organization Module: This layer takes the requests from the logical file system and converts them
into specific instructions for accessing the data on the disk drive.
4. Basic File System: This layer interacts directly with the disk drive, reading and writing data as instructed
by the file-organization module.
5. I/O Control: This layer handles the actual communication with the hardware devices, like the disk drive,
keyboard, and monitor. It sends commands to these devices and receives data from them.
6. Devices: This is where the physical devices are located. The disk drive, keyboard, monitor, etc., are all
part of this layer. They carry out the instructions from the I/O control layer.
UNIT 5
1.COMPARE THE FUNCTIONALITIES OF VARIOUS DISK SCHEDULING ALGORITHMS WITH
EXAMPLE
Disk scheduling algorithms determine the order in which disk I/O requests are processed. Different
algorithms have various strategies for optimizing performance, such as minimizing wait time,
maximizing throughput, or reducing seek time.
FCFS: Services requests in the order in which they arrive; simple and fair, but can cause long head movements.
SSTF: Services the pending request closest to the current head position (shortest seek time first); can starve distant requests.
SCAN: The head sweeps back and forth across the disk, servicing requests along the way (the elevator algorithm).
C-SCAN: The head services requests in one direction only, then returns to the beginning and sweeps again, giving more uniform wait times.
C-LOOK: Like C-SCAN, but the head only travels as far as the last pending request in each direction before reversing.
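The algorithms can be compared by total head movement (cylinders traversed). A minimal sketch of FCFS and SSTF, exercised below on a commonly cited example queue (head at cylinder 53, requests 98, 183, 37, 122, 14, 124, 65, 67):

```c
#include <stdlib.h>
#include <stdbool.h>

/* Total head movement when servicing requests in arrival order. */
int fcfs_movement(const int req[], int n, int head)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);
        head = req[i];
    }
    return total;
}

/* Total head movement when always servicing the nearest request. */
int sstf_movement(const int req[], int n, int head)
{
    bool done[64] = { false };   /* assumes n <= 64 */
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;           /* pick the closest unserviced request */
        for (int i = 0; i < n; i++)
            if (!done[i] && (best == -1 ||
                abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        done[best] = true;
    }
    return total;
}
```

On the example queue, FCFS moves the head 640 cylinders while SSTF needs only 236, which illustrates why arrival order alone is rarely the best service order.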
2.DEVELOP LINUX FILE SYSTEM IN DETAIL.
➢ The Linux File Hierarchy Structure or the Filesystem Hierarchy Standard (FHS) defines the
directory structure and directory contents in Unix-like operating systems.
➢ It is maintained by the Linux Foundation.
➢ In the FHS, all files and directories appear under the root directory /, even if they are stored
on different physical or virtual devices.
➢ Some of these directories only exist on a particular system if certain subsystems, such as the
X Window System, are installed.
➢ Most of these directories exist in all UNIX operating systems and are generally used in much
the same way; however, the descriptions here are those used specifically for the FHS, and
are not considered authoritative for platforms other than Linux.
➢ Linux supports many different file systems, but common choices for the system disk include
the ext family (such as ext2 and ext3), XFS, JFS and ReiserFS.
➢ The ext3 or third extended file system is a journaled file system and is the default file system
for many popular Linux distributions.
➢ It is an upgrade of its predecessor ext2 file system and among other things it has added the
journaling feature.
➢ A journaling file system is a file system that logs changes to a journal (usually a circular log in
a dedicated area) before committing them to the main file system. Such file systems are less
likely to become corrupted in the event of power failure or system crash.
Directory Structure
➢ Unix uses a hierarchical file system structure, much like an upside-down tree, with root (/) at
the base of the file system and all other directories spreading from there.
➢ A Unix filesystem is a collection of files and directories that has the following properties –
• It has a root directory (/) that contains other files and directories.
• Each file or directory is uniquely identified by its name, the directory in which it
resides, and a unique identifier, typically called an inode.
➢ By convention, the root directory has an inode number of 2 and the lost+found
directory has an inode number of 3. Inode numbers 0 and 1 are not used. File inode
numbers can be seen by specifying the -i option to the ls command.
➢ It is self-contained. There are no dependencies between one filesystem and another.