
III SEMESTER B.C.A DEGREE EXAMINATION, MARCH/APRIL 2023
BANGALORE UNIVERSITY

PART-A
I. Answer any Four questions. Each question carries 2 marks.

1. What is Operating System? Mention any two functions of OS.


An operating system is a program on which application programs are executed and which acts as a communication bridge (interface) between the user and the computer hardware.
Functions of Operating System

• Process Management
• File Management

2. Mention the components of OS.


• Process Management
• File Management
• Memory Management
• Secondary Storage Management
• I/O Device Management
• Network Management
• Security Management
• Command Interpreter System

3. What is Deadlock? Give an example.


Deadlock is a situation where two or more processes are each waiting for an event that only another waiting process can cause, so the event never happens; the processes are then said to be in a deadlock state.
EXAMPLE:
A computer has three USB drives and three processes. Each of the three processes holds one of the USB drives. When each process then requests another drive, the three processes are in a deadlock situation, because each process is waiting for a USB drive that is currently in use to be released.

4. Define Page Fault and Fragmentation.


Page Fault: A page fault occurs when a program attempts to access data or code that is in its address space but is not currently located in the system RAM.

Fragmentation: Fragmentation refers to an unwanted problem that occurs in the OS when processes are repeatedly loaded into and unloaded from memory, leaving the free memory space broken into small, scattered pieces.

5. Mention the various types of Files.


Text file: lines or pages, which are sequences of characters.
Source file: consists of subroutines and functions, each made up of declarations followed by executable statements.
Object file: contains sequences of bytes organized into blocks understood by the linker.
Executable file: consists of a series of code sections that the loader brings into memory for execution.

6. What is Disk Formatting?


An unformatted blank magnetic disk is just a platter made of magnetic recording material. Before data can be stored on the disk, it must be divided into sectors that the disk controller can read and write. This process is known as low-level formatting or physical formatting.
PART-B
II. Answer any Four questions. Each question carries 5 marks.

7.Explain different states of a process with neat diagram.


A process is a program in execution. When an executable file of a program is loaded into memory, it becomes a process. Each process is in one of the following states:

• New: the process is created in this state


• Running: the process is performing its activities or executing in this state
• Waiting: the process is waiting for the occurrence of some event in this
state.
• Ready: the process is prepared to execute in the processor when given an
opportunity.
• Terminated: the process has terminated execution.

8. Explain Critical Section Problem.


The critical section is a code segment that only a single process can access at a particular point in time. It contains the shared data and resources that can be accessed by other processes.
Rules for Critical Section Problem
There are three rules that need to be enforced in the critical section. They are:
1. Mutual Exclusion: Only one process can execute in the critical section at a time. Mutual exclusion is often enforced with a mutex, a special type of binary semaphore used to control access to shared resources; some implementations add a priority-inheritance mechanism that helps avoid extended priority-inversion problems.
2. Progress: This applies when the critical section is empty and some process wants to enter it. Only the processes that are not in their remainder section take part in deciding which process should go in, and the decision must be made within a finite time.
3. Bounded Waiting: There is a bound on the number of times other processes are allowed to enter the critical section after a process has requested entry and before that request is granted, so no process waits forever.
Two general approaches are used to handle critical sections in operating systems:

Preemptive Kernels: A preemptive kernel is a form of kernel that permits a process to be replaced (preempted) while it is running in kernel mode. Examples of preemptive kernels are IRIX, Linux, and Solaris.
Non-Preemptive Kernels: A non-preemptive kernel does not allow a process running in kernel mode to be pre-empted; a kernel-mode process runs until it exits kernel mode, blocks, or voluntarily yields control of the CPU. A non-preemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time.
Common Solutions to the Critical Section Problem (a sketch of Peterson's solution is given below):
• Peterson's Solution
• Mutex Locks
• Synchronization Hardware
• Semaphore Solution
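This is a minimal C sketch of Peterson's solution for two processes; the names are illustrative, and on real hardware a correct implementation would also need memory barriers or atomic operations.

#include <stdbool.h>

/* Classic two-process Peterson's solution (i is 0 or 1). Textbook sketch only. */
volatile bool flag[2] = { false, false };
volatile int  turn    = 0;

void enter_critical_section(int i) {
    int other = 1 - i;
    flag[i] = true;                      /* I want to enter */
    turn = other;                        /* give the other process priority */
    while (flag[other] && turn == other)
        ;                                /* busy-wait until it is my turn */
}

void exit_critical_section(int i) {
    flag[i] = false;                     /* leave the critical section */
}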

9. What is Semaphore? Explain different types of Semaphores.


A semaphore is an integer variable with non-negative values, which can be accessed only through two standard atomic operations: wait() and signal().
Types of Semaphores
1. Counting Semaphores:
These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate access to resources, where the semaphore count is the number of available resources. When a resource is added the count is automatically incremented, and when a resource is removed the count is decremented. If the initial count = 0, the counting semaphore is created in the unavailable state; if the count is > 0, the semaphore is created in the available state, and the number of tokens it has is equal to its count.

2. Binary Semaphores:
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation works only when the semaphore is 1, and the signal operation succeeds when the semaphore is 0. It is sometimes easier to implement binary semaphores than counting semaphores.

• Binary semaphores are mainly used for two purposes: to ensure mutual exclusion and to enforce the order in which processes must execute (a short code sketch using POSIX semaphores follows).
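The sketch below is a minimal illustration of wait() and signal() using POSIX counting semaphores (sem_wait and sem_post); the pool size and thread count are hypothetical.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define NUM_RESOURCES 3   /* hypothetical number of identical resources */
#define NUM_THREADS   5

sem_t pool;               /* counting semaphore: number of free resources */

void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&pool);      /* wait(): decrement; block while the count is 0 */
    printf("thread %ld acquired a resource\n", id);
    /* ... use the resource ... */
    sem_post(&pool);      /* signal(): increment; wake a waiting thread */
    return NULL;
}

int main(void) {
    pthread_t t[NUM_THREADS];
    sem_init(&pool, 0, NUM_RESOURCES);   /* initial count = available resources */
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}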

10. Explain first fit, best fit and worst fit allocation of memory.
First-Fit Memory Allocation Technique:
This is a very basic strategy in which we start from the beginning of memory and allot the first hole that is big enough for the requirements of the process. The first-fit strategy can also be implemented so that the search for a suitable hole starts from the place where the previous search left off.
Advantages of First-Fit Memory Allocation
• It is fast in processing. As the processor allocates the nearest available memory partition to the job, it is very fast in execution.
Disadvantages of First-Fit Memory Allocation
• It wastes a lot of memory.
• The processor does not check whether the partition allocated to the job is much larger than the size of the job; it just allocates the memory.
Best-Fit Memory Allocation Technique
This is a greedy strategy that aims to reduce the memory wasted through internal fragmentation in the case of static partitioning: we allot to the process the smallest hole that fits its requirements. Hence, we first sort the holes according to their sizes and pick the best fit for the process without wasting memory.
Advantages of Best-Fit Memory Allocation Technique
• Memory efficient. The operating system allocates the job the minimum possible space in memory, making memory management very efficient. It is the best method for saving memory from getting wasted.
Disadvantages of Best-Fit Memory Allocation Technique
• It is a slow process. Checking the whole memory for each job makes the operating system very slow; it takes a lot of time to complete the work.
Worst-Fit Memory Allocation Technique
This strategy is the opposite of the best-fit strategy. We sort the holes according to their sizes and choose the largest hole to be allotted to the incoming process. The idea behind this allocation is that, because the process is allotted a large hole, a lot of space is left behind as fragmentation, and this leftover space will be large enough to accommodate a few other processes.
Advantages of Worst-Fit Allocation
• Since this strategy chooses the largest hole/partition, the leftover fragment is quite large, so other small processes can also be placed in that leftover partition.
Disadvantages of Worst-Fit Allocation
• It is a slow process because it traverses all the partitions in memory and then selects the largest partition among them, which is time-consuming.
A short code sketch comparing the three placement strategies is given below.
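This is a minimal sketch under the assumption that free holes are tracked in a simple array of sizes; the function and variable names are illustrative, and bookkeeping such as splitting or merging holes is omitted.

/* A hypothetical free-hole table: hole_size[i] is the size of hole i.
   Each function returns the index of the chosen hole, or -1 if none fits. */

int pick_first_fit(const int hole_size[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (hole_size[i] >= request)
            return i;                            /* first hole big enough */
    return -1;
}

int pick_best_fit(const int hole_size[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (hole_size[i] >= request &&
            (best == -1 || hole_size[i] < hole_size[best]))
            best = i;                            /* smallest hole that fits */
    return best;
}

int pick_worst_fit(const int hole_size[], int n, int request) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (hole_size[i] >= request &&
            (worst == -1 || hole_size[i] > hole_size[worst]))
            worst = i;                           /* largest hole that fits */
    return worst;
}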

11.Explain various File Accessing Methods.


The various file access methods are:

• Sequential Access Method


Among all the access methods, it is considered the simplest method.
Processing is carried out with two operations, namely read and write. The read operation reads the next portion of the file and, after a successful read of a record, automatically advances the pointer that tracks the I/O location to the next record. The write operation appends at the end of the file and shifts the pointer to the end of the newly added record.
Advantages of Sequential Access Method

• This method of file access is easy to implement.


• It provides fast access to the next record using lexicographic order.
Disadvantages of Sequential Access Method

• This type of file access method is slow if the file record to be accessed
next is not present next to the current record.
• Inserting a new record may require moving a large proportion of the file.
Direct Access Method
This access method is also called relative access; records can be read irrespective of their sequence. The file is treated the way a disk is accessed: as a collection of numbered blocks or records, each carrying a sequence number. For example, block 40 can be accessed first, followed by block 10 and then block 30, and so on. This eliminates the need for sequential read or write operations.
A major example of this type of file access is the database where the
corresponding block of the query is accessed directly for instant response. This
type of access saves a lot of time when a large amount of data is present. In such
cases, hash functions and index methods are used to search for a particular block.
Advantages of Direct Access Method

• The files can be immediately accessed decreasing the average access time.
• In the direct access method, in order to access a block, there is no need of
traversing all the blocks present before it.
Indexed Access Method
This method is typically an advancement over the direct access method through the addition of an index. A particular record is accessed by browsing through the index, and the file is then accessed directly using the pointers or addresses present in the index.
To understand the concept, consider a book store whose database contains a 12-digit ISBN and a four-digit product price for each book. If the disk can carry 2048 bytes (2 KB) per block, then 128 records of 16 bytes (12 for the ISBN and 4 for the price) can be stored in a single block. Thus a file carrying 128,000 records is reduced to 1000 blocks to be tracked in the index, each entry carrying about 10 digits. To find the price of a book, a binary search can be performed over the index to identify the block carrying that book.
A drawback of this method is that it is ineffective for a larger database with very large files, because the index itself becomes too large.
Indexed Sequential Access Method
To overcome the drawback associated with indexed access, this method is used
where an index of an Index is created. Primary index points to the secondary
index and the secondary index points to the actual data items. An example of
such a method is ISAM (Indexed-Sequential Access Method) of IBM which
carries two types of indexes. They are a master index and secondary index.
The master index carries pointers to the secondary index, whereas the secondary index carries blocks that point directly to the disk blocks. Two binary searches are performed to access a data item: the first on the master index and the second on the secondary index. Locating a data item therefore takes two index reads followed by one direct read of the data block.
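As a rough illustration of sequential versus direct access, the sketch below uses standard C file I/O; the file name and record size are hypothetical.

#include <stdio.h>

#define REC_SIZE 16   /* hypothetical fixed record size (12-digit ISBN + 4-digit price) */

int main(void) {
    char rec[REC_SIZE];
    FILE *fp = fopen("books.dat", "rb");   /* hypothetical data file */
    if (!fp) return 1;

    /* Sequential access: read records one after another; the file
       position advances automatically after each read. */
    while (fread(rec, REC_SIZE, 1, fp) == 1) {
        /* process the record */
    }

    /* Direct (relative) access: jump straight to record 40, then
       record 10, without reading the records in between. */
    fseek(fp, 40L * REC_SIZE, SEEK_SET);
    fread(rec, REC_SIZE, 1, fp);
    fseek(fp, 10L * REC_SIZE, SEEK_SET);
    fread(rec, REC_SIZE, 1, fp);

    fclose(fp);
    return 0;
}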

12. Explain the Disk Structure.


The traditional head-sector-cylinder (HSC) numbers are mapped to linear block addresses by numbering the first sector on the first head on the outermost track as sector 0. Numbering proceeds with the rest of the sectors on that same track, then the rest of the tracks on the same cylinder, and then through the rest of the cylinders toward the center of the disk. In modern practice these linear block addresses are used in place of the HSC numbers for a variety of reasons:

➤ The linear length of tracks near the outer edge of the disk is much longer than
for those tracks located near the center, and therefore it is possible to squeeze
many more sectors onto outer tracks than onto inner ones.
➤ All disks have some bad sectors, and therefore disks maintain a few spare sectors that can be used in place of the bad ones. The mapping of spare sectors to bad sectors is managed internally by the disk controller.

➤ Modern hard drives can have thousands of cylinders, and hundreds of sectors
per track on their outermost tracks. These numbers exceed the range of HSC
numbers for many (older) operating systems, and therefore disks can be
configured for any convenient combination of HSC values that falls within the
total number of sectors physically on the drive.
• There is a limit to how closely individual bits can be packed on physical media, but that limit keeps increasing as technological advances are made.
• Modern disks pack many more sectors into outer cylinders than inner
ones, using one of two approaches:

➤ With Constant Linear Velocity, CLV, the density of bits is uniform from
cylinder to cylinder. Because there are more sectors in outer cylinders, the disk
spins slower when reading those cylinders, causing the rate of bits passing under
the read-write head to remain constant. This is the approach used by modern CDs
and DVDs.

➤ With Constant Angular Velocity (CAV), the disk rotates at a constant angular speed, with the bit density decreasing on outer cylinders. (These disks would have a constant number of sectors per track on all cylinders.)
Each modern disk contains concentric tracks, and each track is divided into multiple sectors. The disk is usually arranged as a one-dimensional array of blocks, where a block is the smallest storage unit; blocks are also called sectors. For each surface of the disk there is a read/write head. The set of tracks at the same position on all the surfaces is known as a cylinder.
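A small sketch of the mapping from HSC/CHS coordinates to a linear block address is shown below; the geometry constants are hypothetical, since real drives hide their actual layout behind logical block addresses.

/* Hypothetical geometry; sectors are numbered from 1 in the classic
   CHS convention, while linear block addresses start at 0. */
#define HEADS             16
#define SECTORS_PER_TRACK 63

long chs_to_lba(long cylinder, long head, long sector) {
    /* Walk through all sectors of earlier cylinders and heads, then
       the sectors before this one on the current track. */
    return (cylinder * HEADS + head) * SECTORS_PER_TRACK + (sector - 1);
}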
PART-C

III. Answer any Four questions. Each question carries 5 marks.

13. (a) Explain real time and time sharing operating system.
(b) Write a note on PCB.
(a) Real-Time Operating System
A real-time operating system processes data within a very short amount of time: the data is processed immediately, without any delay, and the output is provided at once.

• The response is produced quickly.
• It is used in real-time applications with strict timing requirements, such as missile systems and robots.
Examples: flight ticket booking systems, scientific experiments, industrial control systems, and air traffic control.
Types Of Real-Time Operating System:

• Hard Real-Time Systems: Hard real-time systems impose strict time restrictions and ensure that critical tasks complete on time.
• Soft Real-Time Systems: Soft real-time systems are less restrictive as
compared to the hard real-time systems. The soft real-time task gets
preference over other tasks and maintains the priority until it completes.
Advantages of Real-Time Operating System
• Maximum use of devices and the system, thus giving more output from all the resources.
• The time taken for shifting between tasks is very small.
• It focuses on running applications and gives less importance to applications in the queue.
• Program sizes are small.
• Error free.
• Memory allocation is well managed.


Disadvantages of Real-Time Operating System
• Real-time operating systems are very costly to develop.
• Real-time operating systems are very complex and can consume critical CPU cycles.
• They switch between tasks very rarely, as they concentrate on a small set of applications.
Time Sharing Operating System
Time-sharing is a logical extension of multiprogramming. A time-shared operating system allows many users to share the computer simultaneously. The system switches rapidly from one user to the next, and hence each user is given the impression that the entire computer system is dedicated to his use, though it is being shared among many users.

Time-sharing OS provides the following features for users:

• Each user gets dedicated time for all operations.
• Multiple online users can use the same computer at the same time.
• End-users feel that they monopolize the computer system.
• Better interaction between users and computers.
• User requests receive responses within a short time.
• A user does not have to wait for the previous task to finish in order to get the processor.
• It can process a large number of tasks quickly.
Advantages of Time-Sharing Operating System
• It provides a quick response.

• Reduces CPU idle time

• All the tasks are given a specific time.

• Enables resource sharing.

• Improves response time.


• Easy to use and user friendly.
Disadvantages of Time-Sharing Operating System
• It consumes many resources

• Requires high specification of hardware

• It has a problem of reliability

• An issue with the security and integrity of user programs and data

• Probability of data communication problem.

(b) Process Control Block


Each process is represented in the operating system by a process control block (PCB), also called a task control block. The operating system must keep all the information pertaining to a process for efficient management of that process; this data structure is the process control block, or PCB. The typical contents of a PCB are listed below, followed by a simplified C-style sketch.

PCB contains many pieces of information associated with a specific process,
including these:

• Process ID: Every process that is created in the system should be given a
unique identification number known as the process id. This id is assigned
to the process on its creation, by the operating system.
• Process State: PCB will also contain the information on the current state
of the process. The state may be new, ready, running, waiting, halted, and
so on.
• Program Counter: The counter indicates the address of the next
instruction to be executed for this process.

• CPU registers: The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code
information. Along with the program counter, this state information must
be saved when an interrupt occurs, to allow the process to be continued
correctly afterward.
• CPU-scheduling information: This information includes a process
priority, pointers to scheduling queues, and any other scheduling
parameters.
• Memory-management information: This information may include such
items as the value of the base and limit registers and the page tables, or the
segment tables, depending on the memory system used by the operating
system.
• Accounting information: This information includes the amount of CPU
and real time used, time limits, account numbers, job or process numbers,
and so on.
• I/O status information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
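The following sketch gathers these fields into one C structure; all field names and sizes are illustrative only and do not correspond to any particular operating system's PCB layout.

/* A simplified, hypothetical PCB sketch. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int            pid;              /* Process ID */
    proc_state_t   state;            /* Process state */
    unsigned long  program_counter;  /* Address of the next instruction */
    unsigned long  registers[16];    /* Saved CPU registers */
    int            priority;         /* CPU-scheduling information */
    unsigned long  base, limit;      /* Memory-management information */
    unsigned long  cpu_time_used;    /* Accounting information */
    int            open_files[16];   /* I/O status: open file descriptors */
    struct pcb    *next;             /* Link in a scheduling queue */
};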

14.(a) What is system call? Explain its types.


(b) Discuss the deadlock prevention and avoidance.
(a) In computing, a system call is the programmatic way in which a
computer program requests a service from the kernel of the operating
system it is executed on. A system call is a way for programs to interact
with the operating system.

1. Process Control: Process control is the system call that is used to direct the
processes. It performs the tasks of process creation, process termination, etc.
Functions of Process Control:

• End and Abort

• Loading and Execution of a process

• Creation and termination of a Process

• Wait and Signal Event

• Allocation of free memory


2. File Management: File management is a system call that is used to handle the
files.
Functions of File Management:

• Creation of a file

• Deletion of a file

• Opening and closing of a file

• Reading, writing, and repositioning
• Getting and setting file attributes
3.Device Management: Device management is a system call that is used to deal
with devices. It helps in device manipulation like reading from device buffers,
writing into device buffers, etc.
Functions of Device Management:

• Requesting and releasing devices

• Attaching and detaching devices logically

• Getting and setting device attributes


4. Information Maintenance: It handles information and information transfer
between OS and the user program.
Functions of Information Maintenance:

• Getting or setting time and date

• Getting process and device attributes


5. Communication: Communication is a system call that is used for interprocess communication.
Functions of Interprocess Communication:

• Creation and deletion of communications connections

• Sending and receiving messages

• Helping OS transfer status information

• Attaching or detaching remote devices


6. Protection: Protection provides a mechanism for controlling access to the
resources provided by a computer system.
Functions of Protection:

• System calls providing protection include setting the permissions and getting
the permissions.

• Manipulating the permission settings of resources such as files and disks.


• System calls specify whether particular users can or cannot be allowed access
to certain resources.
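As a rough illustration, the sketch below exercises a few common POSIX system calls from the categories above (file management, process control, and information maintenance); the file name is hypothetical.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    /* File management: open, write, close. */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);  /* hypothetical file */
    if (fd >= 0) {
        write(fd, "hello\n", 6);
        close(fd);
    }

    /* Process control: fork, exec, wait. */
    pid_t pid = fork();
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);   /* replace the child's image */
        _exit(1);                                 /* reached only if exec fails */
    } else if (pid > 0) {
        wait(NULL);                               /* parent waits for the child */
    }

    /* Information maintenance: get the process id. */
    printf("my pid is %d\n", (int)getpid());
    return 0;
}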
(b) Deadlock Prevention
The occurrence of a deadlock can be prevented by making sure that at least one of the four necessary conditions does not hold. Each of the four prerequisites is examined individually below.
1. Elimination of "Mutual Exclusion" Condition: The mutual exclusion
condition must hold for non-sharable resources. That is, several processes cannot
simultaneously share a single resource. This condition is difficult to eliminate
because some resources, such as the tape drive and printer, are inherently non-
shareable. A good example of a sharable resource is Read-only files because if
several processes attempt to open a read-only file at the same time, then they can
be granted simultaneous access to the file.
2. Elimination of "Hold and Wait" Condition: There are two possibilities
for elimination of the hold and wait condition.

• The first alternative is that a process be granted all of the resources it needs at once, prior to execution.
• The second alternative is to disallow a process from requesting resources whenever it already holds previously allocated resources.

The first strategy requires that all of the resources a process will need be requested at once; the system grants resources on an "all or none" basis. If the complete set of resources needed by a process is not currently available, the process must wait until the complete set is available. While the process waits, however, it may not hold any resources. Thus the "wait for" condition is denied, and deadlocks simply cannot occur. This strategy can lead to serious waste of resources.
3. Elimination of "No-pre-emption" Condition: The no-pre-emption condition can be alleviated by forcing a process that is waiting for a resource which cannot immediately be allocated to relinquish all of its currently held resources, so that other processes may use them to finish. Suppose a system does allow processes to hold resources while requesting additional resources, and consider what happens when a request cannot be satisfied. One process may hold resources that a second process needs in order to proceed, while the second process holds resources needed by the first: this is a deadlock. The strategy therefore requires that when a process holding some resources is denied a request for additional resources, it must release its held resources and, if necessary, request them again together with the additional resources. Implementing this strategy effectively denies the "no-pre-emption" condition. When a process releases its resources, however, it may lose all of its work up to that point. One serious consequence of this strategy is the possibility of indefinite postponement (starvation): a process might be held off indefinitely as it repeatedly requests and releases the same resources.
4. Elimination of "Circular Wait" Condition: The last condition, the circular wait, can be denied by imposing a total ordering on all of the resource types and then forcing all processes to request resources in that order (increasing or decreasing). Each process must request resources in numerical order of enumeration, and with this rule the resource-allocation graph can never have a cycle. Example: assume resources are numbered and a process currently holds resource Ri; if the process later asks for a resource Rj whose number is lower than Ri, the request will not be granted.
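A minimal sketch of the resource-ordering idea, using two POSIX mutexes as the "resources"; the lock names and the chosen ordering are hypothetical.

#include <pthread.h>

/* Two hypothetical resources with a fixed global order: lock_a (number 1)
   must always be acquired before lock_b (number 2). */
pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;   /* resource #1 */
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;   /* resource #2 */

void use_both_resources(void) {
    /* Every thread requests resources in increasing order, so a cycle
       in the wait-for graph (circular wait) can never form. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    /* ... work with both resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}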
Deadlock Avoidance
This method of solving the deadlock issue foresees a deadlock before it actually happens. It uses an algorithm to evaluate the possibility of a deadlock and takes appropriate action. This method is different from deadlock prevention, which guarantees that deadlock cannot occur by eliminating one of its prerequisites.
In this method, the request for any resource will be granted only if the resulting
state of the system doesn't cause any deadlock in the system. This method checks
every step performed by the operating system. Any process continues its
execution until the system is in a safe state. Once the system enters into an unsafe
state, the operating system has to take a step back.
With the help of a deadlock-avoidance algorithm, you can dynamically assess
the resource-allocation state so that there can never be a circular-wait situation.
According to the simplest and most useful approach, each process declares the maximum number of resources of each type it will need. The deadlock-avoidance algorithms then examine the resource allocations so that a circular-wait condition can never occur.

• Safe State and Unsafe State
• Resource-Allocation Graph Algorithm
• Banker's Algorithm (a sketch of its safety check is given below)
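The sketch below shows only the safety check at the heart of the Banker's algorithm, using small hypothetical matrices; request handling and rollback are omitted.

#include <stdbool.h>
#include <stdio.h>

#define P 3   /* hypothetical number of processes */
#define R 2   /* hypothetical number of resource types */

/* The state is safe if all processes can finish in some order, each using
   the currently available resources plus those released by earlier finishers. */
bool is_safe(const int available[R], const int alloc[P][R], const int need[P][R]) {
    int work[R];
    bool finished[P] = { false };
    for (int j = 0; j < R; j++) work[j] = available[j];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];  /* release */
                finished[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;   /* no process can proceed: unsafe */
    }
    return true;
}

int main(void) {
    int available[R] = { 1, 1 };
    int alloc[P][R]  = { {1, 0}, {0, 1}, {1, 1} };
    int need[P][R]   = { {1, 1}, {1, 0}, {0, 1} };
    printf("state is %s\n", is_safe(available, alloc, need) ? "safe" : "unsafe");
    return 0;
}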

(NOTE: The question is changed according to the syllabus)


15. Write the difference between Pre-emptive kernel and Non-pre-emptive kernel.

Pre-emptive kernel vs Non-pre-emptive kernel:
• A pre-emptive kernel permits a process to be removed or replaced (pre-empted) while it is executing in kernel mode. A non-pre-emptive kernel does not allow a process executing in kernel mode to be pre-empted.
• The response time is more deterministic and more responsive. In a non-pre-emptive kernel the response time is non-deterministic and less responsive.
• It is more complex to design. A non-pre-emptive kernel is less complex to design.
• Pre-emptive kernels are more reliable and more useful in practical situations. Non-pre-emptive kernels are less secure and less useful.
• Shared kernel data usually needs to be protected, for example with semaphores. A non-pre-emptive kernel is essentially free from such race conditions, so shared data usually does not need semaphores.
• It is more useful for real-time programming. A non-pre-emptive kernel is less useful for real-time programming.
• Pre-emption is allowed in a pre-emptive kernel. Pre-emption is not allowed in a non-pre-emptive kernel.
• It allocates resources fairly and ensures that no process monopolizes the CPU. A non-pre-emptive kernel may not allocate resources fairly, and a process can monopolize the CPU if it does not yield control.
• It cannot use non-reentrant code. A non-pre-emptive kernel can use non-reentrant code.
• When a higher-priority task becomes ready, the currently running task is suspended and moved to the ready queue. In a non-pre-emptive kernel, a higher-priority task might have to wait for a long time.
• In a pre-emptive kernel, the higher-priority task that is ready to run is given CPU control. In a non-pre-emptive kernel, each task must explicitly give up CPU control.
• Some instances of pre-emptive kernels are the IRIX, Linux, and Solaris operating systems. Some instances of non-pre-emptive kernels are Windows XP and Windows 2000.

16. (a) Explain different fragmentation.


(b) Write a short note on demand paging.
(a)
Internal Fragmentation vs External Fragmentation:
• Internal fragmentation occurs when fixed-size memory blocks are allocated to processes. External fragmentation occurs when variable-size memory blocks are allocated to processes dynamically.
• Internal fragmentation mainly occurs when a fixed-size partition is assigned to a process whose size is less than the size of the partition, so the rest of the space in the partition becomes unusable. External fragmentation occurs when the memory space in the system could easily satisfy the requirement of a process, but the available memory space is non-contiguous, so it cannot be utilized further.
• The difference between the allocated memory and the memory required by a process is called internal fragmentation. Unused memory spaces between non-contiguous memory fragments that are too small to serve a new process are called external fragmentation.
• Internal fragmentation refers to the unused space inside the allocated partition or region, as the name suggests. External fragmentation refers to unused blocks of memory that are not contiguous and hence are unable to satisfy the requirements of a process.
• Allocating the best-fit block can be used to overcome the problem of internal fragmentation. Compaction, segmentation, and paging can be used to overcome the problem of external fragmentation.
• Paging suffers from internal fragmentation. First-fit and best-fit allocation suffer from external fragmentation.

(b) Demand Paging


The process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging. The process includes the following steps (a simplified page-fault handling sketch follows the list):

• If the CPU tries to refer to a page that is currently not available in the
main memory, it generates an interrupt indicating a memory access fault.
• The OS puts the interrupted process in a blocking state. For the execution
to proceed the OS must bring the required page into the memory.
• The OS will search for the required page in the logical address space.
• The required page will be brought from the logical address space into the physical address space. Page-replacement algorithms are used to decide which page to replace in the physical address space.
• The page table will be updated accordingly.
• The signal will be sent to the CPU to continue the program execution and
it will place the process back into the ready state.
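The sketch below mirrors these steps in miniature; the page-table layout and the helpers standing in for disk I/O and page replacement are assumptions for illustration, not a real kernel interface.

#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES 8   /* hypothetical size of the process's page table */

/* A toy page-table entry; real hardware and OS structures are far richer. */
struct pte {
    bool present;   /* is the page currently in a physical frame? */
    int  frame;     /* frame number, valid only when present is true */
};

static struct pte page_table[NUM_PAGES];

/* Stand-ins for real OS work (page-replacement policy and disk I/O). */
static int next_victim = 0;
static int choose_victim_frame(void) { return next_victim++ % 4; }
static void load_page_from_disk(int page, int frame) {
    printf("loading page %d from disk into frame %d\n", page, frame);
}

/* Called when the CPU references a page that is not present: the page
   fault is serviced by bringing the page in on demand. */
int handle_page_fault(int page) {
    int frame = choose_victim_frame();   /* may evict another page */
    load_page_from_disk(page, frame);    /* bring the required page in */
    page_table[page].present = true;     /* update the page table */
    page_table[page].frame   = frame;
    return frame;                        /* execution can now resume */
}

int main(void) {
    handle_page_fault(3);                /* simulate one page fault */
    return 0;
}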

Advantages of Demand Paging

• Large virtual memory.


• More efficient use of memory.
• There is no limit on degree of multiprogramming.
Disadvantages of Demand Paging

• Number of tables and the amount of processor overhead for handling page
interrupts are greater than in the case of the simple paged management
techniques.

17. (a) Explain File Protection.


(b) Describe the different directory structures.
(a) File Protection
When information is stored in a computer system, we want to keep it safe from physical damage (the issue of reliability) and improper access (the issue of protection).
Many computer systems have programs that automatically (or through computer-operator intervention) copy disk files to tape at regular intervals (once per day, week, or month) to maintain a copy should a file system be accidentally destroyed.
File systems can be damaged by hardware problems (such as errors in reading or writing), power surges or failures, head crashes, dirt, temperature extremes, and vandalism. Files may also be deleted accidentally, and bugs in the file-system software can cause file contents to be lost. Protection can be provided in many ways.
Types of Access
The need to protect files is a direct result of the ability to access files. Systems that do not permit access to the files of other users do not need protection. Thus, we could provide complete protection by prohibiting access. Protection mechanisms instead provide controlled access by limiting the types of file access that can be made.
Several different types of operations may be controlled:
• Read: Read from the file.
• Write: Write or rewrite the file.
• Execute: Load the file into memory and execute it.
• Append: Write new information at the end of the file.
• Delete: Delete the file and free its space for possible reuse.
• List: List the name and attributes of the file.
Access control: The most common approach to the protection problem is to make access dependent on the identity of the user. Different users may need different types of access to a file or directory. The most general scheme to implement identity-dependent access is to associate with each file and directory an access-control list (ACL).
Owner: The user who created the file is the owner.
Group: A set of users who are sharing the file and need similar access is a group,
or workgroup.
Universe: All other users in the system constitute the universe.
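As a rough illustration of identity-dependent access for owner, group, and universe, the sketch below uses the POSIX chmod and stat calls; the file name and the chosen mode bits are hypothetical.

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "notes.txt";      /* hypothetical file */

    FILE *f = fopen(path, "a");          /* make sure the file exists */
    if (f) fclose(f);

    /* Owner: read + write, Group: read, Universe (others): no access. */
    if (chmod(path, S_IRUSR | S_IWUSR | S_IRGRP) != 0) {
        perror("chmod");
        return 1;
    }

    /* Read the mode bits back and show the three access classes. */
    struct stat st;
    if (stat(path, &st) == 0) {
        printf("owner: %c%c%c  group: %c%c%c  others: %c%c%c\n",
               (st.st_mode & S_IRUSR) ? 'r' : '-',
               (st.st_mode & S_IWUSR) ? 'w' : '-',
               (st.st_mode & S_IXUSR) ? 'x' : '-',
               (st.st_mode & S_IRGRP) ? 'r' : '-',
               (st.st_mode & S_IWGRP) ? 'w' : '-',
               (st.st_mode & S_IXGRP) ? 'x' : '-',
               (st.st_mode & S_IROTH) ? 'r' : '-',
               (st.st_mode & S_IWOTH) ? 'w' : '-',
               (st.st_mode & S_IXOTH) ? 'x' : '-');
    }
    return 0;
}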

(b) 1. Single-Level Directory

A single-level directory is the simplest directory structure: all files are contained in the same directory, which makes it easy to support and understand.
A single-level directory has a significant limitation, however, when the number of files increases or when the system has more than one user. Since all the files are in the same directory, they must have unique names. If two users name their data file test, the unique-name rule is violated.

Advantages of Single-Level Directory
• Since it is a single directory, its implementation is very easy.
• If the files are smaller in size, searching will be faster.
• Operations like file creation, searching, deletion, and updating are very easy in such a directory structure.
Disadvantages of Single-Level Directory
• There is a chance of name collision, because two files cannot have the same name.
• Searching becomes time-consuming if the directory is large.
• Files of the same type cannot be grouped together.
2. Two-Level Directory
As we have seen, a single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user. In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a new user logs in. The MFD is indexed by user name or account number, and each entry points to the UFD for that user.

3. Tree-Structured Directory
Once we have seen a two-level directory as a tree of height 2, the natural generalization is to extend the directory structure to a tree of arbitrary height. This generalization allows users to create their own subdirectories and to organize their files accordingly. A tree structure is the most common directory structure. The tree has a root directory, and every file in the system has a unique path name.

Advantages of Tree-Structured Directory
• Very general, since full path names can be given.
• Very scalable; the probability of name collision is low.
• Searching becomes very easy; we can use both absolute and relative paths.
Disadvantages of Tree-Structured Directory
• Not every file fits into the hierarchical model; some files may need to be placed in multiple directories.
• We cannot share files.
• It can be inefficient, because accessing a file may require traversing multiple directories.

4. Acyclic Graph Directory
An acyclic graph is a graph with no cycles; it allows directories to share subdirectories and files, so the same file or subdirectory may appear in two different directories. It is a natural generalization of the tree-structured directory.
It is used in situations such as two programmers working on a joint project who need to access each other's files. The associated files are stored in a subdirectory, separating them from other projects and the files of other programmers. Since they are working on a joint project, both programmers want the shared subdirectory to appear in their own directories; the common subdirectory should be shared. This is where acyclic-graph directories are used.

It is important to note that a shared file is not the same as a copy of the file: if either programmer makes changes in the shared subdirectory, the changes are reflected in both directories.
Advantages of Acyclic Graph Directory
• We can share files.
• Searching is easy, because a file can be reached by different paths.
Disadvantages of Acyclic Graph Directory
• Files are shared via linking, which can create problems when a file is deleted.
• If the link is a soft link, then after deleting the file we are left with a dangling pointer.
• In the case of a hard link, to delete a file we have to delete all the references associated with it.
5. General Graph Directory Structure
In a general graph directory structure, cycles are allowed within the directory structure, and multiple directories can be derived from more than one parent directory. The main problem with this kind of directory structure is calculating the total size or space taken by the files and directories.

Advantages of General Graph Directory Structure
• It allows cycles.
• It is more flexible than the other directory structures.
Disadvantages of General Graph Directory Structure
• It is costlier than the others.
• It needs garbage collection.

18. Explain different disk scheduling algorithms with suitable examples.

1. First Come First Serve (FCFS)
In this algorithm, the requests are served in the order they come. Those who
come first are served first. This is the simplest algorithm.
Example: Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the read-write head is 60.

Seek Time = Distance Moved by the disk arm
= (70-60)+(140-70)+(140-50)+(125-50)+(125-30)+(30-25)+(160-25) = 480
2. Shortest Seek Time First (SSTF)
In this algorithm, the shortest seek time is checked from the current position and
those requests which have the shortest seek time is served first. In simple words,
the closest request from the disk arm is served first.
Example: Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160 and the initial position of the read-write head is 60. The requests are served in the order 50, 30, 25, 70, 125, 140, 160.

Seek Time = Distance Moved by the disk arm
= (60-50)+(50-30)+(30-25)+(70-25)+(125-70)+(140-125)+(160-140) = 170

3. SCAN
In this algorithm, the disk arm moves in a particular direction till the end and
serves all the requests in its path, then it returns to the opposite direction and
moves till the last request is found in that direction and serves all of them.
Example: Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160, the initial position of the read-write head is 60, and it is given that the disk arm should move towards the larger value (the last cylinder of the disk is taken to be 170).

Seek Time = Distance Moved by the disk arm


=(170-60)+(170-25)=255
4.LOOK
In this algorithm, the disk arm moves in a particular direction till the last request
is found in that direction and serves all of them found in the path, and then
reverses its direction and serves the requests found in the path again up to the last
request found. The only difference between SCAN and LOOK is, it doesn't go to
the end it only moves up to which the request is found.
Example: Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160, the initial position of the read-write head is 60, and it is given that the disk arm should move towards the larger value.

Seek Time = Distance Moved by the disk arm
= (160-60)+(160-25) = 235

5. C-SCAN
This algorithm is the same as the SCAN algorithm. The only difference between
SCAN and C-SCAN is, it moves in a particular direction till the last and serves
the requests in its path. Then, it returns in the opposite direction till the end and
doesn't serve the request while returning. Then, again reverses the direction and
serves the requests found in the path. It moves circularly.
Example: Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160, the initial position of the read-write head is 60, and it is given that the disk arm should move towards the larger value (the disk is assumed to span cylinders 0 to 170).

Seek Time= Distance Moved by the disk arm


=(170-60)+(170-0)+(50-0)=330
6.C-LOOK
This algorithm is also the same as the LOOK algorithm. The only difference
between LOOK and C-LOOK is, it moves in a particular direction till the last
request is found and serves the requests in its path. Then, it returns in the
opposite direction till the last request is found in that direction and doesn't serve
the request while returning. Then, again reverses the direction and serves the
requests found in the path. It also moves circularly.
Example: Suppose the order of requests is 70, 140, 50, 125, 30, 25, 160, the initial position of the read-write head is 60, and it is given that the disk arm should move towards the larger value.

Seek Time =Distance Moved by the disk arm
= (160-60)+(160-25)+(50-25)=260
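The sketch below computes the total head movement for FCFS and SSTF on the example request queue; tie-breaking between equally distant requests is arbitrary, and here it is chosen to match the worked example above.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

/* Total head movement when requests are served in arrival order (FCFS). */
int fcfs_seek(int head, const int req[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);
        head = req[i];
    }
    return total;
}

/* Total head movement when the closest pending request is served first (SSTF).
   Ties between equally distant requests go to the later request in the array. */
int sstf_seek(int head, const int req[], int n) {
    bool done[n];                        /* C99 variable-length array */
    for (int i = 0; i < n; i++) done[i] = false;
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] &&
                (best == -1 || abs(req[i] - head) <= abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        done[best] = true;
    }
    return total;
}

int main(void) {
    int req[] = { 70, 140, 50, 125, 30, 25, 160 };   /* example request queue */
    int n = sizeof req / sizeof req[0];
    printf("FCFS seek distance: %d\n", fcfs_seek(60, req, n));   /* prints 480 */
    printf("SSTF seek distance: %d\n", sstf_seek(60, req, n));   /* prints 170 */
    return 0;
}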

Deepak .M
Assistant Professor
Baldwin Methodist College
