Operating System B.U
A DEGREE EXAMINATION,
MARCH/APRIL-2023
BANGALORE UNIVERSITY
PART-A
I. Answer any four questions; each question carries 2 marks.
• Process Management
• File Management
Deepak .M
Assistant Professor
Baldwin Methodist College
Preemptive Kernels: A preemptive kernel is a kernel that permits a process to be preempted while it is running in kernel mode.
Non-Preemptive Kernels: A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process runs until it exits kernel mode, blocks, or voluntarily yields control of the CPU. A non-preemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time. Some instances of preemptive kernels are IRIX, Linux, and Solaris.
Common Solutions to the Critical Section Problem
Peterson's Solution
Mutex Locks
Synchronization Hardware
Semaphore Solution
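As a sketch of the mutex-lock approach, the following Python fragment (names are illustrative) guards a shared counter so that only one thread executes the critical section at a time:

```python
import threading

counter = 0                      # shared data protected by the mutex
lock = threading.Lock()          # mutex lock guarding the critical section

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # entry section: acquire the mutex
            counter += 1         # critical section
                                 # exit section: 'with' releases the lock

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 40000: no updates are lost
```

Without the lock, the read-modify-write on the shared counter could interleave between threads, which is exactly the race condition the critical-section solutions above prevent.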
2. Binary Semaphores:
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore's value is 1, and the signal operation succeeds when its value is 0. Binary semaphores are sometimes easier to implement than counting semaphores.
• Binary semaphores are mainly used for two purposes: to ensure mutual exclusion and to enforce the order in which processes must execute.
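A small Python sketch (illustrative, using the standard threading module) of a binary semaphore enforcing execution order — the second task cannot run until the first one signals:

```python
import threading

order = []
sem = threading.Semaphore(0)   # binary semaphore used for ordering (starts at 0)

def first():
    order.append("P1")
    sem.release()              # signal: lets the waiting process proceed

def second():
    sem.acquire()              # wait: blocks until first() has signalled
    order.append("P2")

t2 = threading.Thread(target=second)
t1 = threading.Thread(target=first)
t2.start(); t1.start()
t1.join(); t2.join()
print(order)                   # always ['P1', 'P2'], regardless of start order
```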
10. Explain first fit, best fit and worst fit allocation of memory.
First-Fit Memory Allocation Technique:
This is a very basic strategy in which we start from the beginning of memory and allot the first hole that is big enough for the requirements of the process. The first-fit strategy can also be implemented so that the search for a suitable hole resumes from the place where the previous search left off.
Advantages of First-Fit Memory Allocation
• It is fast in processing. As the processor allocates the nearest available memory partition to the job, it is very fast in execution.
Disadvantages of First-Fit Memory Allocation
• It wastes a lot of memory.
• The processor does not check whether the partition allocated to the job is much larger than the job's size; it simply allocates the memory.
Best-Fit Memory Allocation Technique
This is a greedy strategy that aims to reduce the memory wasted as internal fragmentation in the case of static partitioning. We allot the smallest hole that fits the requirements of the process; hence, we first sort the holes according to their sizes and pick the best fit for the process without wasting memory.
Advantages of Best-Fit Memory Allocation Technique
• Memory efficient. The operating system allocates the job the minimum possible space in memory, making memory management very efficient. It is the best method to keep memory from being wasted.
Disadvantages of Best-Fit Memory Allocation Technique
• It is a Slow Process. Checking the whole memory for each job makes the
working of the operating system very slow. It takes a lot of time to
complete the work.
Worst-Fit Memory Allocation Technique
This strategy is the opposite of the best-fit strategy. We sort the holes according to their sizes and choose the largest hole for the incoming process. The idea behind this allocation is that, as the process is allotted a large hole, plenty of space will be left behind as internal fragmentation; this leftover space may be large enough to accommodate a few other processes.
Advantages of Worst-Fit Allocation
• Since this strategy chooses the largest hole/partition, the internal fragmentation it leaves behind is quite large, so other small processes can also be placed in the leftover partition.
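The three strategies can be sketched as follows in Python (an illustrative simulation; the hole sizes are arbitrary examples, not from the exam):

```python
def allocate(holes, size, strategy):
    """Return the index of the chosen hole, or None. 'holes' lists free-block sizes."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None                         # no hole is big enough
    if strategy == "first":
        return candidates[0][1]             # first hole big enough
    if strategy == "best":
        return min(candidates)[1]           # smallest hole that fits
    if strategy == "worst":
        return max(candidates)[1]           # largest hole

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # 1 -> the 500-unit hole
print(allocate(holes, 212, "best"))   # 3 -> the 300-unit hole
print(allocate(holes, 212, "worst"))  # 4 -> the 600-unit hole
```

Note how best fit scans all candidates (hence its slowness), while first fit can stop at the first match.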
Disadvantages of Sequential Access Method
• This type of file access method is slow if the file record to be accessed
next is not present next to the current record.
• Inserting a new record may require moving a large proportion of the file.
Direct Access Method
This access method is also called real-time access, where the records can be read irrespective of their sequence. Records are accessed directly from a disk file in which each record carries a sequence number. For example, block 40 can be accessed first, followed by block 10 and then block 30, and so on. This eliminates the need for sequential read or write operations.
A major example of this type of file access is the database where the
corresponding block of the query is accessed directly for instant response. This
type of access saves a lot of time when a large amount of data is present. In such
cases, hash functions and index methods are used to search for a particular block.
Advantages of Direct Access Method
• The files can be immediately accessed decreasing the average access time.
• In the direct access method, in order to access a block, there is no need of
traversing all the blocks present before it.
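The idea can be sketched in Python (an illustrative example with hypothetical fixed-size records; seeking by record number stands in for addressing a disk block directly):

```python
import os
import tempfile

RECORD_SIZE = 16                          # fixed-size records enable direct access
fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "wb") as f:               # write 100 fixed-size records
    for i in range(100):
        f.write(f"record-{i:03d}".encode().ljust(RECORD_SIZE))

read_order = []
with open(path, "rb") as f:
    for n in (40, 10, 30):                # e.g. record 40, then 10, then 30
        f.seek(n * RECORD_SIZE)           # jump straight to record n
        read_order.append(f.read(RECORD_SIZE).strip().decode())
os.remove(path)
print(read_order)                         # records read out of sequence
```

Because every record has the same size, the byte offset of record n is simply n * RECORD_SIZE, so no preceding record needs to be traversed.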
Indexed Access Method
This method is typically an advancement of the direct access method through the use of an index. A particular record is accessed by browsing through the index, and the file is then accessed directly using the pointers or addresses present in the index.
To understand the concept, consider a book store where the database contains a 12-digit ISBN and a four-digit product price. If the disk can carry 2048 bytes (2 KB) per block, then 128 records of 16 bytes (12 for the ISBN and 4 for the price) can be stored in a single block. A file of 128,000 records then occupies 1,000 blocks, so the index needs only 1,000 entries, each carrying the ISBN of the first record in its block. To find the price of a book, a binary search can be performed over the index to identify the block carrying that book's record.
A drawback of this method is that it becomes ineffective for a large database with very large files, as the index itself grows too large.
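The index lookup described above can be sketched in Python (illustrative data; the ISBNs and prices are synthetic, and 128 records per block follows the 2048 / 16 arithmetic):

```python
import bisect

RECORDS_PER_BLOCK = 128                    # 2048-byte block / 16-byte record
# sorted (isbn, price) records; prices are synthetic for the example
records = [(f"{isbn:012d}", 100 + isbn % 900) for isbn in range(128000)]

# index: the first ISBN stored in each block (1000 entries for 128000 records)
index = [records[b * RECORDS_PER_BLOCK][0]
         for b in range(len(records) // RECORDS_PER_BLOCK)]

def find_price(isbn):
    # binary search over the small index instead of the whole file
    block = bisect.bisect_right(index, isbn) - 1
    start = block * RECORDS_PER_BLOCK
    for key, price in records[start:start + RECORDS_PER_BLOCK]:
        if key == isbn:
            return price
    return None

print(find_price(f"{54321:012d}"))         # looks at one block, not 1000
```

Only the 1,000-entry index is searched; a single block of 128 records is then scanned, instead of all 128,000 records.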
Indexed Sequential Access Method
To overcome the drawback associated with indexed access, this method creates an index of the index: the primary index points to the secondary index, and the secondary index points to the actual data items. An example of such a method is IBM's ISAM (Indexed Sequential Access Method), which carries two types of indexes: a master index and a secondary index. The master index carries pointers to the secondary index, whereas the secondary index carries pointers to the actual disk blocks. Two binary searches are performed to access a data item: the first on the master index and the second on the secondary index. This method therefore requires two direct-access reads.
➤ The linear length of tracks near the outer edge of the disk is much longer than for those tracks located near the center, and therefore it is possible to squeeze many more sectors onto outer tracks than onto inner ones.
➤ All disks have some bad sectors, and therefore disks maintain a few spare sectors that can be used in place of the bad ones. The mapping of spare sectors to bad sectors is managed internally by the disk controller.
➤ Modern hard drives can have thousands of cylinders, and hundreds of sectors
per track on their outermost tracks. These numbers exceed the range of HSC
numbers for many (older) operating systems, and therefore disks can be
configured for any convenient combination of HSC values that falls within the
total number of sectors physically on the drive.
• There is a limit to how closely individual bits can be packed on physical media, but that limit keeps increasing as technological advances are made.
• Modern disks pack many more sectors into outer cylinders than inner
ones, using one of two approaches:
➤ With Constant Linear Velocity (CLV), the density of bits per track is uniform. Because outer cylinders hold more sectors, the disk spins more slowly when reading those cylinders, keeping the rate of bits passing under the read-write head constant. This is the approach used by modern CDs and DVDs.
➤ Alternatively, with Constant Angular Velocity (CAV), the rotation speed stays constant and the density of bits decreases from inner cylinders to outer cylinders to keep the data rate constant. This method is used in hard disks.
III. Answer any four questions; each question carries 5 marks.
13. (a)
• Maximum use of devices and the system, thus giving more output from all the resources.
• Error free.
• There is very little task switching.
Time Sharing Operating System
Time-sharing is a logical extension of multiprogramming. A time-shared operating system allows many users to share the computer simultaneously. The system switches rapidly from one user to the next, giving each user the impression that the entire computer system is dedicated to his use, though it is being shared among many users.
• Multiple online users can use the same computer at the same time.
• End-users feel that they monopolize the computer system.
• A user no longer has to wait for the last task to end to get the processor.
• Reduces CPU idle time
• An issue with the security and integrity of user programs and data
PCB contains many pieces of information associated with a specific process,
including these:
• Process ID: Every process that is created in the system should be given a
unique identification number known as the process id. This id is assigned
to the process on its creation, by the operating system.
• Process State: PCB will also contain the information on the current state
of the process. The state may be new, ready, running, waiting, halted, and
so on.
• Program Counter: The counter indicates the address of the next
instruction to be executed for this process.
• CPU registers: The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code
information. Along with the program counter, this state information must
be saved when an interrupt occurs, to allow the process to be continued
correctly afterward.
• CPU-scheduling information: This information includes a process
priority, pointers to scheduling queues, and any other scheduling
parameters.
• Memory-management information: This information may include such
items as the value of the base and limit registers and the page tables, or the
segment tables, depending on the memory system used by the operating
system.
• Accounting information: This information includes the amount of CPU
and real time used, time limits, account numbers, job or process numbers,
and so on.
• I/O status information: This information includes the list of I/O devices
allocated to the process, a list of open files, and so on.
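The fields above can be sketched as a Python dataclass (the field names and types are illustrative, not any particular operating system's PCB layout):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block."""
    pid: int                          # process ID, unique per process
    state: str = "new"                # new / ready / running / waiting / halted
    program_counter: int = 0          # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                 # CPU-scheduling information
    base: int = 0                     # memory management: base register
    limit: int = 0                    # memory management: limit register
    cpu_time_used: float = 0.0        # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

p = PCB(pid=42)
p.state = "ready"                     # the OS updates the state on each transition
print(p.pid, p.state)
```

On a context switch, the registers and program counter of the outgoing process would be saved into its PCB so the process can be resumed correctly later.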
1. Process Control: Process control is the system call that is used to direct the
processes. It performs the tasks of process creation, process termination, etc.
2. File Management: File management is a system call that is used to handle files.
Functions of File Management:
• Creation of a file
• Deletion of a file
• Reading, writing, and repositioning
• Getting and setting file attributes
3.Device Management: Device management is a system call that is used to deal
with devices. It helps in device manipulation like reading from device buffers,
writing into device buffers, etc.
Functions of Device Management:
• Requesting and releasing devices
• Reading from and writing to device buffers
Protection:
• System calls providing protection include setting the permissions and getting the permissions.
2. Elimination of "Hold and Wait" Condition: This strategy requires that all of the resources a process will need be requested at once; the system grants resources on an "all or none" basis. If the complete set of resources needed by a process is not currently available, the process must wait until the complete set is available. While the process waits, however, it may not hold any resources. Thus the "wait for" condition is denied, and deadlocks simply cannot occur. This strategy can lead to serious waste of resources.
3. Elimination of "No Pre-emption" Condition: The no-pre-emption condition can be alleviated by forcing a process waiting for a resource that cannot immediately be allocated to relinquish all of its currently held resources, so that other processes may use them to finish. Suppose a system does allow processes to hold resources while requesting additional resources. Consider what happens when a request cannot be satisfied: one process holds resources a second process may need in order to proceed, while the second process holds resources needed by the first. This is a deadlock. This strategy requires that when a process holding some resources is denied a request for additional resources, it must release its held resources and, if necessary, request them again together with the additional resources. Implementing this strategy denies the "no pre-emption" condition effectively. When a process releases its resources, it may lose all its work to that point. One serious consequence of this strategy is the possibility of indefinite postponement (starvation): a process might be held off indefinitely as it repeatedly requests and releases the same resources.
4. Elimination of "Circular Wait" Condition: The last condition, circular wait, can be denied by imposing a total ordering on all of the resource types and then forcing all processes to request resources in that order (increasing or decreasing). This strategy imposes a total ordering of all resource types and requires that each process request resources in numerical order of enumeration. With this rule, the resource-allocation graph can never have a cycle. Example: assume resource Ri is allocated to a process P; if P later asks for resources Rj and Rk that are lower in the ordering than Ri, such a request will not be granted.
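The resource-ordering rule can be sketched in Python (the lock numbering is illustrative; real systems would assign a global order to device classes, mutexes, etc.):

```python
import threading

# Assign every resource a fixed global number; always acquire in increasing order.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(resource_ids):
    """Acquiring locks only in increasing numeric order makes a wait cycle impossible."""
    acquired = []
    for rid in sorted(resource_ids):      # enforce the total ordering
        locks[rid].acquire()
        acquired.append(rid)
    return acquired

# Two processes ask for the same pair in different orders at the call site,
# but both actually acquire in the same (increasing) order:
got_a = acquire_in_order([3, 1])
for rid in got_a:
    locks[rid].release()
got_b = acquire_in_order([1, 3])
for rid in got_b:
    locks[rid].release()
print(got_a, got_b)                       # both acquired as [1, 3]
```

Since every process climbs the numbering in the same direction, no process can wait for a lower-numbered resource while holding a higher-numbered one, so no cycle can form.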
Deadlock Avoidance
This method of solving the deadlock issue foresees deadlock before it actually happens. It uses an algorithm to evaluate the possibility of a deadlock and take appropriate action. This method is different from deadlock prevention, which guarantees that deadlock cannot exist by eliminating one of its prerequisites.
In this method, a request for any resource is granted only if the resulting state of the system does not lead to deadlock. The method checks every step performed by the operating system. Any process continues its execution as long as the system remains in a safe state; once the system would enter an unsafe state, the operating system has to step back.
With the help of a deadlock-avoidance algorithm, you can dynamically assess the resource-allocation state so that there can never be a circular-wait situation. According to the simplest and most useful approach, each process should declare the maximum number of resources of each type it will need. Deadlock-avoidance algorithms mainly examine the resource allocations so that an occurrence of the circular-wait condition is never possible.
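The best-known avoidance algorithm of this kind is the Banker's algorithm; its safe-state check can be sketched in Python. The matrices below are the classic textbook example (5 processes, 3 resource types), not data from this paper:

```python
def is_safe(available, allocation, need):
    """Return True if some completion order lets every process finish (safe state)."""
    work = list(available)                 # resources currently free
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion and return its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)           # safe only if everyone could finish

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))   # True: a safe sequence exists
```

A resource request is granted only if the state that would result still passes this check; otherwise the requesting process must wait.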
• Message passing is more reliable and useful in practical situations, whereas shared memory is less secure and less useful.
• Message passing doesn't need semaphores, whereas shared data usually needs semaphores.
• Internal Fragmentation occurs when memory blocks of fixed size are allocated to the processes.
• External Fragmentation occurs when memory blocks of variable size are allocated to the processes dynamically.
• If the CPU tries to refer to a page that is currently not available in the
main memory, it generates an interrupt indicating a memory access fault.
• The OS puts the interrupted process in a blocking state. For the execution
to proceed the OS must bring the required page into the memory.
• The OS will search for the required page in the logical address space.
• The required page will be brought from the logical address space into the physical address space. Page replacement algorithms are used to decide which page in the physical address space to replace.
• The page table will be updated accordingly.
• The signal will be sent to the CPU to continue the program execution and
it will place the process back into the ready state.
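One common page replacement policy used in the step above is FIFO; it can be sketched in Python (the reference string and frame count are illustrative):

```python
from collections import deque

def fifo_page_faults(reference_string, frame_count):
    """Count page faults under FIFO replacement (one possible policy)."""
    frames = deque()                 # resident pages, oldest at the left
    faults = 0
    for page in reference_string:
        if page not in frames:       # page fault: page not in physical memory
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()     # replace the oldest resident page
            frames.append(page)
    return faults

print(fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))   # 10
```

Each miss models the fault-handling sequence described above: the OS blocks the process, evicts a page if memory is full, brings the required page in, and updates the page table.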
• The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
• There may be name collisions, because two files cannot have the same name.
• Searching becomes time-consuming if the directory grows large.
• Files of the same type cannot be grouped together.
2.Two-Level Directory
As we have seen, a single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user. In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a new user logs in. The MFD is indexed by username or account number, and each entry points to the UFD for that user.
3.Tree-Structured Directory
Once we have seen a two-level directory as a tree of height 2, the natural generalization is to extend the directory structure to a tree of arbitrary height. This generalization allows users to create their own subdirectories and to organize their files accordingly. A tree structure is the most common directory structure. The tree has a root directory, and every file in the system has a unique path.
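A toy Python model of a tree-structured directory (the paths and file names are invented for illustration) shows how a unique path identifies every file:

```python
# Directories as nested dicts; files as leaf values (a toy model).
root = {
    "home": {
        "alice": {"notes.txt": "file", "code": {"main.c": "file"}},
        "bob": {"notes.txt": "file"},    # same name, different path: no collision
    },
}

def resolve(path):
    """Walk the tree one path component at a time."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]                # KeyError if the path doesn't exist
    return node

print(resolve("/home/alice/code/main.c"))
```

Note that both users may have a "notes.txt": unlike the single-level directory, the tree distinguishes them by their full paths.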
Advantages of Tree-Structured Directory
4.Acyclic Graph Directory
An acyclic graph is a graph with no cycles; it allows the sharing of subdirectories and files. The same file or subdirectory may appear in two different directories. It is a natural generalization of the tree-structured directory.
It is used in situations where, for example, two programmers are working on a joint project and need to access each other's files. The associated files are stored in a subdirectory, separating them from the projects and files of other programmers. Since they are working on a joint project, they both want the subdirectory to appear in their own directories; the common subdirectory should be shared. Acyclic-graph directories make this possible.
Note that a shared file is not the same as a copy of the file: if either programmer makes changes in the shared subdirectory, the changes are visible in both directories.
Advantages of Acyclic Graph Directory
• Searching is easy due to the multiple paths.
Disadvantages of Acyclic Graph Directory
• Files are shared via linking, so deleting a shared file can create problems.
• If the link is a soft link, then after deleting the file we are left with a dangling pointer.
• In the case of a hard link, to delete a file we have to delete all the references associated with it.
5.General Graph Directory Structure
In the general graph directory structure, cycles are allowed within the directory structure, and multiple directories can be derived from more than one parent directory. The main problem with this kind of directory structure is calculating the total size or space taken by the files and directories.
• It allows cycles.
• It is more flexible than the other directory structures.
Disadvantages of General Graph Directory Structure
1. First Come First Serve (FCFS)
In this algorithm, the requests are served in the order they come. Those who
come first are served first. This is the simplest algorithm.
Example, Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the
initial position of the Read-Write head is 60.
3. SCAN
In this algorithm, the disk arm moves in a particular direction till the end and
serves all the requests in its path, then it returns to the opposite direction and
moves till the last request is found in that direction and serves all of them.
Example, Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the
initial position of the Read-Write head is 60. And it is given that the disk arm
should move towards the larger value.
5. C-SCAN
This algorithm is similar to the SCAN algorithm. The only difference between SCAN and C-SCAN is that C-SCAN moves in a particular direction till the end, serving the requests in its path; it then returns in the opposite direction to the other end without serving any requests on the way back, reverses direction again, and serves the requests found in its path. It moves circularly.
Example, Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the
initial position of the Read-Write head is 60. And it is given that the disk arm
should move towards the larger value.
Seek Time = distance moved by the disk arm
= (160 - 60) + (160 - 25) + (50 - 25) = 260
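The seek-time arithmetic can be checked in Python. This sketch computes FCFS and a simplified C-SCAN that, like the worked example, turns around at the last request and jumps to the smallest pending request without serving on the way back:

```python
def seek_distance(head, order):
    """Total arm movement when requests are served in the given order."""
    total = 0
    for track in order:
        total += abs(track - head)
        head = track
    return total

requests = [70, 140, 50, 125, 30, 25, 160]
head = 60

# FCFS: serve requests exactly in arrival order
fcfs = seek_distance(head, requests)

# C-SCAN toward larger values: serve ascending requests >= head, sweep back
# (unserved) to the smallest pending request, then serve ascending again
up = sorted(r for r in requests if r >= head)
down = sorted(r for r in requests if r < head)
cscan = seek_distance(head, up + down)

print(fcfs, cscan)    # 480 260
```

The C-SCAN result reproduces the 260 computed above: 100 units up to track 160, 135 units back to track 25, then 25 units up to track 50.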