
Operating System Interview Questions
An Operating System (OS) is system software that
enables application software to communicate with
and operate the computer hardware. An operating
system acts as an interface between the user and
the computer system. In other words, an OS acts
as an intermediary between the user and the
computer hardware, managing resources such as
memory, processing power, and input/output
operations. Some examples of popular operating
systems include Windows, macOS, Linux, and
Android.

In this article, we provide you with the top 100+
OS interview questions with answers that cover
everything from the basics of OS architecture to
advanced operating systems concepts such as file
systems, scheduling algorithms, and
multithreading. Whether you are a fresher or an
experienced IT professional, this article gives you
all the confidence you need to ace your next OS
interview.


Table of Content

Basic OS Interview Questions
Intermediate OS Interview Questions
Advanced OS Interview Questions

Basic OS Interview Questions


1. What is a process and process table?

A process is an instance of a program in
execution. For example, a Web Browser is a
process, and a shell (or command prompt) is a
process. The operating system is responsible for
managing all the processes that are running on a
computer and allocates each process a certain
amount of time to use the processor. In addition,
the operating system also allocates various other
resources that processes will need, such as
computer memory or disks. To keep track of the
state of all the processes, the operating system
maintains a table known as the process table.
Inside this table, every process is listed along with
the resources the process is using and the current
state of the process.

2. What are the different states of the process?

Processes can be in one of three states: running,
ready, or waiting. The running state means that
the process has all the resources it needs for
execution and it has been given permission by the
operating system to use the processor. Only one
process can be in the running state at any given
time. The remaining processes are either in a
waiting state (i.e., waiting for some external event
to occur such as user input or disk access) or a
ready state (i.e., waiting for permission to use the
processor). In a real operating system, the waiting
and ready states are implemented as queues that
hold the processes in these states.

3. What is a Thread?

A thread is a single sequence stream within a
process. Because threads have some of the
properties of processes, they are sometimes
called lightweight processes. Threads are a
popular way to improve the application through
parallelism. For example, in a browser, multiple
tabs can be different threads. MS Word uses
multiple threads, one thread to format the text,
another thread to process inputs, etc.
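
For illustration (not part of the original article), here is a minimal C sketch that creates two POSIX threads inside one process; the thread names and message text are invented for the example. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function with its own argument. */
static void *worker(void *arg) {
    const char *name = (const char *)arg;
    printf("thread %s running in the same address space\n", name);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;

    /* Two lightweight flows of control inside one process. */
    pthread_create(&t1, NULL, worker, "formatter");
    pthread_create(&t2, NULL, worker, "input-handler");

    /* Wait for both threads to finish before the process exits. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}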

4. What are the differences between process and thread?

A process is a program in execution, and a thread is the
smallest segment of instructions (segment of a
process) that can be handled independently by a
scheduler. Threads are lightweight processes that
share the same address space including the code
section, data section and operating system
resources such as the open files and signals.
However, each thread has its own program
counter (PC), register set, and stack space,
allowing it to execute independently within
the same process context. Unlike processes,
threads are not fully independent entities; they can
communicate and synchronize more efficiently,
making them suitable for concurrent and
parallel execution in a multi-threaded
environment.

5. What are the benefits of multithreaded programming?

It makes the system more responsive and enables
resource sharing. It allows an application to take
advantage of a multiprocessor architecture, and it is
more economical than creating separate processes.

6. What is Thrashing?

Thrashing is a situation when the performance of
a computer degrades or collapses. Thrashing
occurs when a system spends more time
processing page faults than executing
transactions. While processing page faults is
necessary in order to appreciate the benefits of
virtual memory, thrashing has a negative effect on
the system. As the page fault rate increases, more
transactions need processing from the paging
device. The queue at the paging device increases,
resulting in increased service time for a page fault.

7. What is Buffer?

A buffer is a memory area that stores data being
transferred between two devices or between a
device and an application.

8. What is virtual memory?

Virtual memory creates an illusion that each user
has one or more contiguous address spaces,
each beginning at address zero. The sizes of such
virtual address spaces are generally very
large. The idea of virtual memory is to use disk
space to extend the RAM. Running processes
don’t need to care whether the memory is from
RAM or disk. The illusion of such a large amount
of memory is created by subdividing the virtual
memory into smaller pieces, which can be loaded
into physical memory whenever they are needed
by a process.

9. Explain the main purpose of an operating system?

An operating system acts as an intermediary
between the user of a computer and the computer
hardware. The purpose of an operating system is
to provide an environment in which a user can
execute programs conveniently and efficiently.

An operating system is software that manages
computer hardware. The hardware must provide
appropriate mechanisms to ensure the correct
operation of the computer system and to prevent
user programs from interfering with the proper
operation of the system.

10. What is demand paging?

The process of loading the page into memory on
demand (whenever a page fault occurs) is known
as demand paging.

11. What is a kernel?

A kernel is the central component of an operating
system that manages the operations of computers
and hardware. It basically manages operations of
memory and CPU time. It is a core component of
an operating system. Kernel acts as a bridge
between applications and data processing
performed at the hardware level using inter-
process communication and system calls.

12. What are the different scheduling algorithms?

First-Come, First-Served (FCFS) Scheduling.
Shortest-Job-Next (SJN) Scheduling.
Priority Scheduling.
Shortest Remaining Time.
Round Robin(RR) Scheduling.
Multiple-Level Queues Scheduling.

13. Describe the objective of multiprogramming.

Multiprogramming increases CPU utilization by
organizing jobs (code and data) so that the CPU
always has one to execute. The main objective of
multi-programming is to keep multiple jobs in the
main memory. If one job gets occupied with IO,
the CPU can be assigned to other jobs.

14. What is the time-sharing system?

Time-sharing is a logical extension of
multiprogramming. The CPU performs many tasks
by switching among them so frequently that the user can
interact with each program while it is running. A
time-shared operating system allows multiple
users to share computers simultaneously.

15. What problems do we face in a computer system without an OS?

Poor resource management
Lack of User Interface
No File System
No Networking
Error handling is a big issue
etc.

16. Give some benefits of multithreaded programming?

A thread is also known as a lightweight process.
The idea is to achieve parallelism by dividing a
process into multiple threads. Threads within the
same process run in a shared memory space, which
makes communication and resource sharing
between them cheap and fast.

17. Briefly explain FCFS.

FCFS stands for First Come First Served. In the
FCFS scheduling algorithm, the job that arrived
first in the ready queue is allocated the CPU,
then the job that came second, and so on.
FCFS is a non-preemptive scheduling algorithm,
as a process holds the CPU until it either
terminates or performs I/O. Thus, if a longer job
has been assigned to the CPU then many shorter
jobs after it will have to wait.
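
A minimal C sketch of the FCFS calculation, assuming all jobs arrive at time zero; the burst times are made up for the example:

#include <stdio.h>

/* FCFS: processes run in arrival order; each waits for all earlier bursts. */
int main(void) {
    int burst[] = {24, 3, 3};        /* illustrative CPU burst times */
    int n = sizeof burst / sizeof burst[0];
    int waiting = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i]; /* finish time when all arrive at t = 0 */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_tat += turnaround;
        waiting += burst[i];          /* the next process also waits for this burst */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}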

18. What is the RR scheduling algorithm?

The round-robin scheduling algorithm allocates the
CPU to each job fairly for a fixed time slot or
quantum. If a job is not completed by the end of its
quantum, it is interrupted and placed at the back of
the ready queue, and the next job in the queue gets
the CPU; this cycling is what makes the scheduling
fair. A small simulation sketch follows the list below.

Round-robin is cyclic in nature, so starvation
doesn’t occur
Round-robin is a variant of first-come, first-
served scheduling
No priority or special importance is given to any
process or task
RR scheduling is also known as Time slicing
scheduling
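
A minimal C sketch of a round-robin simulation; the quantum and burst times are invented for the example:

#include <stdio.h>

/* Round Robin: each process gets at most `quantum` units per turn. */
int main(void) {
    int remaining[] = {5, 8, 3};      /* illustrative remaining burst times */
    int n = sizeof remaining / sizeof remaining[0];
    int quantum = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;          /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;
            remaining[i] -= slice;
            printf("t=%2d: P%d ran for %d unit(s)\n", time, i + 1, slice);
            if (remaining[i] == 0) {
                printf("t=%2d: P%d completed\n", time, i + 1);
                done++;
            }
        }
    }
    return 0;
}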

19. Enumerate the different RAID levels?

A redundant array of independent disks is a set of
several physical disk drives that the operating
system sees as a single logical unit. It played a
significant role in narrowing the gap between
increasingly fast processors and slow disk drives.
RAID has different levels:

Level-0
Level-1
Level-2
Level-3
Level-4
Level-5
Level-6

20. What is Banker’s algorithm?

The Banker's algorithm is a resource allocation
and deadlock avoidance algorithm that tests for
safety by simulating the allocation of the
predetermined maximum possible amounts of all
resources, then makes a "safe-state" check to test
for possible activities, before deciding whether
allocation should be allowed to continue.
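
A minimal C sketch of the safety check at the heart of the Banker's algorithm; the allocation, need, and available values are illustrative and not taken from the article:

#include <stdio.h>
#include <stdbool.h>

#define P 3  /* number of processes (illustrative) */
#define R 3  /* number of resource types (illustrative) */

/* Safety algorithm: try to find an order in which every process can
 * obtain its remaining need, run to completion, and release resources. */
int main(void) {
    int available[R]     = {3, 3, 2};
    int allocation[P][R] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
    int need[P][R]       = {{7, 3, 3}, {1, 2, 2}, {0, 0, 2}}; /* max - allocation */
    bool finished[P] = {false};
    int order[P], count = 0;

    while (count < P) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > available[r]) { can_run = false; break; }
            if (can_run) {
                /* Pretend the process runs to completion and releases everything. */
                for (int r = 0; r < R; r++) available[r] += allocation[p][r];
                finished[p] = true;
                order[count++] = p;
                progress = true;
            }
        }
        if (!progress) { printf("unsafe state: deadlock is possible\n"); return 1; }
    }
    printf("safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", order[i]);
    printf("\n");
    return 0;
}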

21. State the main difference between logical and physical address space?

Basic: The logical address is generated by the CPU,
while the physical address is a location in a memory
unit.
Address space: The Logical Address Space is the
set of all logical addresses generated by the CPU in
reference to a program, while the Physical Address
Space is the set of all physical addresses mapped to
the corresponding logical addresses.
Visibility: Users can view the logical address of a
program but can never view the physical address of
the program.
Generation: The logical address is generated by the
CPU, while the physical address is computed by the
MMU.
Access: The user can use the logical address to
access the physical address; the user can access
physical addresses only indirectly, not directly.

22. How does dynamic loading aid in better memory space utilization?

With dynamic loading, a routine is not loaded until
it is called. This method is especially useful when
large amounts of code are needed in order to
handle infrequently occurring cases such as error
routines.

23. What are overlays?

The concept of overlays is that a running
process does not use the complete program at the
same time; it uses only some part of it. The overlay
technique therefore loads only the part that is
currently required and, once that part is done,
unloads it and brings in the next required part.
Formally, it is "the process of transferring a
block of program code or other data into internal
memory, replacing what is already stored".

24. What is fragmentation?

As processes are loaded into and removed from
memory, the free memory space gets broken into
pieces that are too small to be used by other
processes. When memory blocks remain unused
because they are too small to be allocated to any
process, the situation is called fragmentation. This
kind of issue occurs in a dynamic memory
allocation system when the free blocks are so
small that they cannot satisfy any request.

25. What is the basic function of paging?

Paging is a method or technique used for
non-contiguous memory allocation. It is a
fixed-size partitioning scheme: both main memory
and secondary memory are divided into equal
fixed-size partitions. The partitions of secondary
memory and of main memory are known as pages
and frames, respectively.

Paging is a memory management method used to
fetch processes from the secondary memory into
the main memory in the form of pages. In paging,
each process is split into parts whose size is the
same as the page size (the size of the last part may
be less than the page size). The pages of a process
are stored in the frames of main memory depending
on their availability. A small sketch of the
page-number/offset translation is shown below.
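
A minimal C sketch of how a logical address is split into a page number and an offset and translated through a page table; the page size and table contents are invented for the example (a real MMU does this in hardware):

#include <stdio.h>

#define PAGE_SIZE 4096   /* illustrative 4 KiB pages */

int main(void) {
    int page_table[4] = {5, 2, 7, 0};            /* page number -> frame number */

    unsigned logical  = 6100;                    /* some logical address */
    unsigned page     = logical / PAGE_SIZE;     /* which page */
    unsigned offset   = logical % PAGE_SIZE;     /* offset within the page */
    unsigned frame    = page_table[page];        /* page-table lookup */
    unsigned physical = frame * PAGE_SIZE + offset;

    printf("logical %u -> page %u, offset %u -> frame %u -> physical %u\n",
           logical, page, offset, frame, physical);
    return 0;
}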

26. How does swapping result in better memory management?

Swapping is a simple memory/process
management technique used by the operating
system (OS) to increase the utilization of the
processor by moving some blocked processes
from the main memory to the secondary
memory thus forming a queue of the temporarily
suspended processes and the execution
continues with the newly arrived process. During
regular intervals that are set by the operating
system, processes can be copied from the main
memory to a backing store and then copied back
later. Swapping allows more processes to be run
than can fit into memory at one time.

27. Write the names of classic synchronization problems?

Bounded-buffer
Readers-writers
Dining philosophers
Sleeping barber

28. What is the Direct Access Method?

The direct access method is based on a disk
model of a file, such that it is viewed as a
numbered sequence of blocks or records. It allows
arbitrary blocks to be read or written. Direct
access is advantageous when accessing large
amounts of information. Direct memory access
(DMA) is a method that allows an input/output
(I/O) device to send or receive data directly to or
from the main memory, bypassing the CPU to
speed up memory operations. The process is
managed by a chip known as a DMA controller
(DMAC).

29. When does thrashing occur?

Thrashing occurs when processes on the system
frequently access pages that are not available in
main memory, so more time is spent servicing page
faults than doing useful work.

30. What is the best page size when designing an operating system?

The best page size varies from system to
system, so there is no single best when it comes
to page size. There are different factors to
consider in order to come up with a suitable page
size, such as page table, paging time, and its
effect on the overall efficiency of the operating
system.

31. What is multitasking?

Multitasking is a logical extension of a
multiprogramming system that supports multiple
programs to run concurrently. In multitasking,
more than one task is executed at the same time.
In this technique, the multiple tasks, also known
as processes, share common processing
resources such as a CPU.

32. What is caching?

The cache is a smaller and faster memory that
stores copies of the data from frequently used
main memory locations. There are various
different independent caches in a CPU, which
store instructions and data. Cache memory is
used to reduce the average time to access data
from the Main memory.

33. What is spooling?

Spooling stands for simultaneous peripheral
operations online. It refers to putting jobs in
a buffer, a special area in memory, or on a disk
where a device can access them when it is ready.
Spooling is useful because devices access data at
different rates.

34. What is the functionality of an Assembler?

The Assembler is used to translate the program
written in Assembly language into machine code.
The source program is an input of an assembler
that contains assembly language instructions. The
output generated by the assembler is the object
code or machine code understandable by the
computer.

35. What are interrupts?

An interrupt is a signal emitted by hardware or
software when a process or an event needs
immediate attention. It alerts the processor to a
high-priority condition requiring interruption of the
currently running process. For I/O devices, one of
the bus control lines (the interrupt request line) is
dedicated to this purpose; the routine the processor
runs in response is the Interrupt Service Routine (ISR).

36. What is GUI?

GUI is short for Graphical User Interface. It
provides users with an interface wherein actions
can be performed by interacting with icons and
graphical symbols.

37. What is preemptive multitasking?

Preemptive multitasking is a type of multitasking
that allows computer programs to share operating
systems (OS) and underlying hardware resources.
It divides the overall operating and computing time
between processes, and the switching of
resources between different processes occurs
through predefined criteria.

38. What is a pipe and when is it used?

A Pipe is a technique used for inter-process
communication. A pipe is a mechanism by which
the output of one process is directed into the input
of another process. Thus it provides a one-way
flow of data between two related processes.
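
A minimal C sketch of pipe-based IPC between a parent and its child process; the message text is invented for the example:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

/* One-way flow of data between two related processes via a pipe. */
int main(void) {
    int fd[2];                     /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {             /* child: reads what the parent writes */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                  /* parent: writes into the pipe */
    write(fd[1], "hello via pipe", 14);
    close(fd[1]);
    wait(NULL);                    /* reap the child */
    return 0;
}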

39. What are the advantages of semaphores?

They are machine-independent.
Easy to implement.
Correctness is easy to determine.
Can have many different critical sections with
different semaphores.
Semaphores acquire many resources
simultaneously.
No waste of resources due to busy waiting.

40. What is a bootstrap program in the OS?

Bootstrapping is the process of loading a set of
instructions when a computer is first turned on or
booted. During the startup process, diagnostic
tests are performed, such as the power-on self-
test (POST), which sets or checks configurations
for devices and implements routine testing for the
connection of peripherals, hardware, and external
memory devices. The bootloader or bootstrap
program is then loaded to initialize the OS.

41. What is IPC?


Inter-process communication (IPC) is a
mechanism that allows processes to communicate
with each other and synchronize their actions. The
communication between these processes can be
seen as a method of cooperation between them.

42. What are the different IPC mechanisms?

These are the methods in IPC:

Pipes (Same Process): This allows a flow of
data in one direction only, analogous to a simplex
system such as a keyboard. Data from the output
end is usually buffered until the input process
receives it, and both processes must have a
common origin.
Named Pipes (Different Processes): This is a
pipe with a specific name, so it can be used by
processes that don't have a shared common
process origin, e.g. a FIFO.
Message Queuing: This allows messages to
be passed between processes using either a
single queue or several message queues. It is
managed by the system kernel, and the messages
are coordinated using an API.
Semaphores: These are used in solving problems
associated with synchronization and avoiding
race conditions. They are integer values that
are greater than or equal to 0.
Shared Memory: This allows the interchange of
data through a defined area of memory. A
semaphore value typically has to be obtained
before the data in shared memory can be accessed.
Sockets: This method is mostly used to
communicate over a network between a client
and a server. It allows for a standard connection
that is computer- and OS-independent.

43. What is the difference between preemptive and non-preemptive scheduling?

In preemptive scheduling, the CPU is allocated
to the processes for a limited time whereas, in
Non-preemptive scheduling, the CPU is
allocated to the process till it terminates or
switches to the waiting state.
The executing process in preemptive
scheduling is interrupted in the middle of
execution when a higher priority one comes
whereas, the executing process in non-
preemptive scheduling is not interrupted in the
middle of execution and runs till its completion.
In Preemptive Scheduling, there is the overhead
of switching the process from the ready state to
the running state and vice versa, and of maintaining
the ready queue. Whereas the case of non-
preemptive scheduling has no overhead of
switching the process from running state to
ready state.
In preemptive scheduling, if a high-priority
process frequently arrives in the ready queue
then the process with low priority has to wait for
a long, and it may have to starve. On the other
hand, in non-preemptive scheduling, if CPU is
allocated to the process having a larger burst
time then the processes with a small burst time
may have to starve.
Preemptive scheduling attains flexibility by
allowing the critical processes to access the
CPU as they arrive in the ready queue, no
matter what process is executing currently. Non-
preemptive scheduling is called rigid: even if
a critical process enters the ready queue, the
process currently running on the CPU is not disturbed.
Preemptive scheduling has to maintain the
integrity of shared data, which adds cost;
this is not the case with non-preemptive
scheduling.

44. What is the zombie process?

A process that has finished the execution but still
has an entry in the process table to report to its
parent process is known as a zombie process. A
child process always first becomes a zombie
before being removed from the process table. The
parent process reads the exit status of the child
process which reaps off the child process entry
from the process table.
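
A minimal POSIX C sketch showing a child becoming a zombie until the parent reaps it with waitpid(); the sleep duration and exit status are arbitrary values chosen for the example:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* The child exits immediately, but it stays in the process table as a
 * zombie until the parent calls waitpid() and reads its exit status. */
int main(void) {
    pid_t pid = fork();

    if (pid == 0) {                 /* child */
        printf("child %d exiting\n", (int)getpid());
        exit(42);
    }

    sleep(5);                       /* while the parent sleeps, `ps` would show
                                       the child in state Z (zombie) */
    int status;
    waitpid(pid, &status, 0);       /* reap: the zombie entry is removed */
    if (WIFEXITED(status))
        printf("parent reaped child %d, exit status %d\n",
               (int)pid, WEXITSTATUS(status));
    return 0;
}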

45. What are orphan processes?

A process whose parent process no longer exists,
i.e. the parent has finished or terminated without
waiting for its child process to terminate, is called
an orphan process.

46. What are starvation and aging in OS?

Starvation: Starvation is a resource management
problem where a process does not get the
resources it needs for a long time because the
resources are being allocated to other processes.

Aging: Aging is a technique to avoid starvation in
a scheduling system. It works by adding an aging
factor to the priority of each request. The aging
factor must increase the priority of the request as
time passes and must ensure that a request will
eventually be the highest-priority request.

47. Write about monolithic kernel?

Apart from microkernel, Monolithic Kernel is
another classification of Kernel. Like microkernel,
this one also manages system resources between
application and hardware, but user services and
kernel services are implemented under the same
address space. It increases the size of the kernel,
thus increasing the size of an operating system as
well. This kernel provides CPU scheduling,
memory management, file management, and
other operating system functions through system
calls. As both services are implemented under the
same address space, this makes operating
system execution faster.

48. What is Context Switching?

Switching the CPU to another process means
saving the state of the old process and loading the
saved state for the new process. In context
switching, the state of the old process is stored in
its Process Control Block so that the old process
can later be resumed from the same point where it
left off.

49. What is the difference between the Operating system and kernel?

The operating system is system software, while the
kernel is system software that is part of the
operating system.
The operating system provides an interface between
the user and the hardware, while the kernel provides
an interface between applications and the hardware.
The operating system also provides protection and
security, while the kernel's main purpose is memory
management, disk management, process
management and task management.
Every system needs an operating system to run,
while every operating system needs a kernel to run.
Types of operating systems include single-user and
multi-user OS, multiprocessor OS, real-time OS and
distributed OS, while types of kernel include the
monolithic kernel and the microkernel.
The operating system is the first program to load
when the computer boots up, while the kernel is the
first program to load when the operating system
loads.

50. What is the difference between process and thread?

A process means any program in execution, while a
thread means a segment of a process.
A process is less efficient in terms of
communication, while a thread is more efficient in
terms of communication.
Processes are isolated from one another, while
threads share memory.
A process is called a heavyweight process, while a
thread is called a lightweight process.
Process switching uses another process interface in
the operating system, while thread switching does
not require a call to the operating system and does
not cause an interrupt to the kernel.
If one process is blocked, it will not affect the
execution of other processes, whereas if one thread
(for example, a server thread) is blocked, other
threads in the same task may not be able to run.
A process has its own Process Control Block, stack
and address space, while a thread has its parent's
PCB, its own Thread Control Block and stack, and a
common address space.

51. What is PCB?

The process control block (PCB) is a block that is
used to track the process's execution status. A
process control block (PCB) contains information
about the process, i.e. registers, quantum, priority,
etc. The process table is an array of PCBs, which
means it logically contains a PCB for each of the
current processes in the system.
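
As an illustration, here is a simplified PCB layout in C; the field names and sizes are invented for the example, and a real kernel's structure (for example, Linux's task_struct) holds far more fields:

#include <stdint.h>

/* Illustrative, simplified process control block. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identifier */
    enum proc_state state;           /* current scheduling state */
    int             priority;        /* scheduling priority */
    uint64_t        program_counter; /* saved PC for a context switch */
    uint64_t        registers[16];   /* saved general-purpose registers */
    void           *page_table;      /* memory-management information */
    int             open_files[16];  /* accounting / I/O status information */
};

/* The process table can be viewed logically as an array of PCBs. */
struct pcb process_table[64];

int main(void) {
    process_table[0].pid = 1;        /* register one example process */
    process_table[0].state = READY;
    return 0;
}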

52. When is a system in a safe state?

The set of dispatchable processes is in a safe
state if there exists at least one temporal order in
which all processes can be run to completion
without resulting in a deadlock.

53. What is Cycle Stealing?

Cycle stealing is a method of accessing computer
memory (RAM) or bus without interfering with the
CPU. It is similar to direct memory access (DMA)
for allowing I/O controllers to read or write RAM
without CPU intervention.

54. What are a Trap and Trapdoor?

A trap is a software interrupt, usually the result of
an error condition; it is a non-maskable
interrupt and has the highest priority. A trapdoor is a
secret undocumented entry point into a program
used to grant access without normal methods
of access authentication.

55. Write a difference between process and program?

A program contains a set of instructions designed
to complete a specific task, while a process is an
instance of an executing program.
A program is a passive entity, as it resides in
secondary memory, while a process is an active
entity, as it is created during execution and loaded
into main memory.
A program exists in a single place and continues to
exist until it is deleted, while a process exists for a
limited span of time and gets terminated after the
completion of its task.
A program is a static entity, while a process is a
dynamic entity.
A program does not have any resource requirement;
it only requires memory space for storing its
instructions. A process has a high resource
requirement; it needs resources like CPU, memory
address space and I/O during its lifetime.
A program does not have any control block, while a
process has its own control block, called the
Process Control Block.

56. What is a dispatcher?

The dispatcher is the module that gives control of
the CPU to the process selected by
the short-term scheduler. This function involves
the following:

Switching context
Switching to user mode
Jumping to the proper location in the user
program to restart that program

57. Define the term dispatch latency?

Dispatch latency can be described as the amount
of time it takes for a system to respond to a
request for a process to begin operation. With a
scheduler written specifically to honor application
priorities, real-time applications can be developed
with a bounded dispatch latency.

58. What are the goals of CPU scheduling?

Max CPU utilization [Keep CPU as busy as
possible]
Fair allocation of CPU.
Max throughput [Number of processes that
complete their execution per time unit]
Min turnaround time [Time taken by a process
to finish execution]
Min waiting time [Time a process waits in ready
queue]
Min response time [Time when a process
produces the first response]

59. What is a critical- section?

When more than one process accesses the same
code segment, that segment is known as the
critical section. The critical section contains
shared variables or resources which are needed
to be synchronized to maintain the consistency of
data variables. In simple terms, a critical section is
a group of instructions/statements or regions of
code that need to be executed atomically such as
accessing a resource (file, input or output port,
global data, etc.).

60. Write the names of synchronization techniques?

Mutexes
Condition variables
Semaphores
File locks

Intermediate OS Interview Questions

61. Write a difference between a user-level thread and a kernel-level thread?

User threads are implemented by users, while
kernel threads are implemented by the OS.
The OS doesn't recognize user-level threads, while
kernel threads are recognized by the OS.
Implementation of user threads is easy, while
implementation of kernel threads is complicated.
For user-level threads, the context switch time is
less; for kernel-level threads it is more.
A user-level context switch requires no hardware
support, while kernel-level threads need hardware
support.
If one user-level thread performs a blocking
operation, the entire process is blocked; if one
kernel-level thread performs a blocking operation,
another thread can continue execution.
User-level threads are designed as dependent
threads, while kernel-level threads are designed as
independent threads.

62. Write down the advantages of multithreading?

Some of the most important benefits of MT are:

Improved throughput. Many concurrent compute
operations and I/O requests within a single
process.
Simultaneous and fully symmetric use of
multiple processors for computation and I/O.
Superior application responsiveness. If a
request can be launched on its own thread,
applications do not freeze or show the
“hourglass”. An entire application will not block
or otherwise wait, pending the completion of
another request.
Improved server responsiveness. Large or
complex requests or slow clients don’t block
other requests for service. The overall
throughput of the server is much greater.
Minimized system resource usage. Threads
impose minimal impact on system resources.
Threads require less overhead to create,
maintain, and manage than a traditional
process.
Program structure simplification. Threads can
be used to simplify the structure of complex
applications, such as server-class and
multimedia applications. Simple routines can be
written for each activity, making complex
programs easier to design and code, and more
adaptive to a wide variation in user demands.
Better communication. Thread synchronization
functions can be used to provide enhanced
process-to-process communication. In addition,
sharing large amounts of data through separate
threads of execution within the same address
space provides extremely high-bandwidth, low-
latency communication between separate tasks
within an application.

63. Difference between Multithreading and Multitasking?

In multithreading, multiple threads are executed at
the same time in the same or different parts of the
program, while in multitasking several programs are
executed concurrently.
In multithreading, the CPU switches between
multiple threads, while in multitasking the CPU
switches between multiple tasks and processes.
Multithreading is a lightweight operation, while
multitasking involves heavyweight processes.
Multithreading is a feature of the process, while
multitasking is a feature of the OS.
Multithreading is the sharing of computing
resources among the threads of a single process,
while multitasking is the sharing of computing
resources (CPU, memory, devices, etc.) among
processes.

64. What are the drawbacks of semaphores?

Priority Inversion is a big limitation of
semaphores.
Their use is not enforced but is by convention
only.
The programmer has to keep track of all calls to
wait and signal the semaphore.
With improper use, a process may block
indefinitely. Such a situation is called Deadlock.

65. What is Peterson’s approach?

It is a concurrent programming algorithm. It is
used to synchronize two processes that maintain
the mutual exclusion for the shared resource. It
uses two variables, a bool array flag of size 2 and
an int variable turn to accomplish it.
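
A minimal C sketch of Peterson's algorithm using two POSIX threads. Note that on modern hardware the flag and turn accesses would additionally need atomics or memory barriers, so this only illustrates the flag/turn logic; the iteration count is arbitrary. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

/* Peterson's solution for two processes, 0 and 1. */
volatile int flag[2] = {0, 0};   /* flag[i]: process i wants to enter */
volatile int turn = 0;           /* whose turn it is to defer */
int counter = 0;                 /* shared resource */

static void *worker(void *arg) {
    int i = *(int *)arg, other = 1 - i;
    for (int k = 0; k < 100000; k++) {
        flag[i] = 1;
        turn = other;                        /* give priority to the other process */
        while (flag[other] && turn == other)
            ;                                /* busy-wait until it is safe to enter */
        counter++;                           /* critical section */
        flag[i] = 0;                         /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}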

66. Define the term Bounded waiting?

A system is said to follow the bounded
waiting condition if a process that wants to enter
its critical section will be able to enter it within
some finite time.

67. What are the solutions to the critical section problem?

There are three solutions to the critical section
problem:

Software solutions
Hardware solutions
Semaphores

68. What is a Banker’s algorithm?

The Banker's algorithm is a resource allocation
and deadlock avoidance algorithm that tests for
safety by simulating the allocation of the
predetermined maximum possible amounts of all
resources, then makes a "safe-state" check to test
for possible activities, before deciding whether
allocation should be allowed to continue.

69. What is concurrency?

Concurrency is a state in which a process exists
simultaneously with another process; such
processes are said to be concurrent.

70. Write a drawback of concurrency?

It is required to protect multiple applications
from one another.
It is required to coordinate multiple applications
through additional mechanisms.
Additional performance overheads and
complexities in operating systems are required
for switching among applications.
Sometimes running too many applications
concurrently leads to severely degraded
performance.

71. What are the necessary conditions which can lead to a deadlock in a system?

Mutual Exclusion: There is a resource that
cannot be shared.
Hold and Wait: A process is holding at least one
resource and waiting for another resource, which
is with some other process.
No Preemption: The operating system is not
allowed to take a resource back from a process
until the process gives it back.
Circular Wait: A set of processes waiting for each
other in circular form.

72. What are the issues related to concurrency?

Non-atomic: Operations that are non-atomic
but interruptible by multiple processes can
cause problems.
Race conditions: A race condition occurs if the
outcome depends on which of several
processes gets to a point first.
Blocking: Processes can block waiting for
resources. A process could be blocked for a
long period of time waiting for input from a
terminal. If the process is required to
periodically update some data, this would be
very undesirable.
Starvation: It occurs when a process does not
obtain service to progress.
Deadlock: It occurs when two processes are
blocked and hence neither can proceed to
execute

73. Why do we use precedence graphs?

A precedence graph is a directed acyclic graph
that is used to show the execution level of several
processes in the operating system. It has the
following properties also:

Nodes of graphs correspond to individual
statements of program code.
An edge between two nodes represents the
execution order.
A directed edge from node A to node B shows
that statement A executes first and then
Statement B executes

74. Explain the resource allocation graph?

The resource allocation graph (RAG) shows the
state of the system in terms of processes and
resources. One of the advantages of having such a
diagram is that it is sometimes possible to spot a
deadlock directly by looking at the RAG.

75. What is a deadlock?

Deadlock is a situation when two or more
processes wait for each other to finish and none of
them ever finish. Consider an example when two
trains are coming toward each other on the same
track and there is only one track, none of the
trains can move once they are in front of each
other. A similar situation occurs in operating
systems when there are two or more processes
that hold some resources and wait for resources
held by other(s).

76. What is the goal and functionality of memory management?

The goals and functionality of memory
management are as follows:

Relocation
Protection
Sharing
Logical organization
Physical organization

77. Write a difference between physical address and logical address?

Basic: The logical address is the virtual address
generated by the CPU, while the physical address is
a location in a memory unit.
Address: The set of all logical addresses generated
by the CPU in reference to a program is referred to
as the Logical Address Space, while the set of all
physical addresses mapped to the corresponding
logical addresses is referred to as the Physical
Address Space.
Visibility: The user can view the logical address of a
program but can never view the physical address of
the program.
Access: The user uses the logical address to access
the physical address; the user cannot directly
access the physical address.
Generation: The logical address is generated by the
CPU, while the physical address is computed by the
MMU.

78. Explain address binding?

The association of program instructions and data to
the actual physical memory locations is called
Address Binding.

79. Write different types of address binding?

Address Binding is divided into three types as
follows.

Compile-time Address Binding
Load time Address Binding
Execution time Address Binding

80. Write the advantages of dynamic allocation algorithms?

When we do not know beforehand how much
memory will be needed for the program.
When we want data structures without any
upper limit of memory space.
When you want to use your memory space
more efficiently.
Dynamically created lists insertions and
deletions can be done very easily just by the
manipulation of addresses whereas in the case
of statically allocated memory insertions and
deletions lead to more movements and wastage
of memory.
When you want to use the concept of structures
and linked lists in programming, dynamic
memory allocation is a must.
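
A minimal C sketch of dynamic allocation with malloc/free, building a small linked list; the number of nodes and their values are arbitrary:

#include <stdio.h>
#include <stdlib.h>

/* The list grows one node at a time with dynamically allocated memory,
 * so no upper limit has to be fixed in advance. */
struct node {
    int value;
    struct node *next;
};

int main(void) {
    struct node *head = NULL;

    for (int i = 1; i <= 5; i++) {                 /* insert 5 nodes at the front */
        struct node *n = malloc(sizeof *n);        /* memory requested only when needed */
        if (n == NULL) return 1;
        n->value = i;
        n->next = head;
        head = n;
    }

    for (struct node *p = head; p != NULL; ) {     /* traverse, print, and free */
        printf("%d ", p->value);
        struct node *next = p->next;
        free(p);                                   /* return memory to the allocator */
        p = next;
    }
    printf("\n");
    return 0;
}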

81. Write a difference between internal fragmentation and external fragmentation?

In internal fragmentation, fixed-sized memory
blocks are assigned to the process; in external
fragmentation, variable-sized memory blocks are
assigned to the process.
Internal fragmentation happens when the process
is smaller than the memory block allotted to it;
external fragmentation happens when processes
are removed from memory.
The solution to internal fragmentation is the
best-fit block; solutions to external fragmentation
are compaction, paging and segmentation.
Internal fragmentation occurs when memory is
divided into fixed-sized partitions; external
fragmentation occurs when memory is divided into
variable-size partitions based on the size of
processes.
The difference between the memory allocated and
the space actually required is called internal
fragmentation; the unused spaces formed between
non-contiguous memory fragments that are too
small to serve a new process are called external
fragmentation.

82. Define the Compaction?

The process of collecting fragments of available
memory space into contiguous blocks by moving
programs and data in a computer’s memory or
disk.

83. Write about the advantages and disadvantages of a hashed-page table?

Advantages

The main advantage is synchronization.
In many situations, hash tables turn out to be
more efficient than search trees or any other
table lookup structure. For this reason, they are
widely used in many kinds of computer
software, particularly for associative arrays,
database indexing, caches, and sets.

Disadvantages

Hash collisions are practically unavoidable
when hashing a random subset of a large set of
possible keys.
Hash tables become quite inefficient when there
are many collisions.
A hash table does not allow null values, unlike a
hash map.

84. Write a difference between paging and segmentation?

In paging, the program is divided into fixed-size
pages; in segmentation, the program is divided into
variable-size sections.
For paging, the operating system is accountable;
for segmentation, the compiler is accountable.
Page size is determined by the hardware, while the
section size is given by the user.
Paging is faster in comparison to segmentation,
while segmentation is slower.
Paging can result in internal fragmentation, while
segmentation can result in external fragmentation.
In paging, the logical address is split into a page
number and a page offset; in segmentation, the
logical address is split into a section number and a
section offset.
Paging comprises a page table that encloses the
base address of every page, while segmentation
comprises a segment table that encloses the
segment number and segment offset.
A page table is employed to maintain the page data,
while a section table maintains the section data.
In paging, the operating system must maintain a
free frame list; in segmentation, the operating
system maintains a list of holes in main memory.
Paging is invisible to the user, while segmentation
is visible to the user.
In paging, the processor needs the page number
and offset to calculate the absolute address; in
segmentation, the processor uses the segment
number and offset to calculate the full address.

85. Write a definition of Associative Memory and Cache Memory?

A memory unit accessed by content is called
associative memory, while a fast and small memory
is called cache memory.
Associative memory reduces the time required to
find an item stored in memory, while cache memory
reduces the average memory access time.
In associative memory, data are accessed by their
content; in cache memory, data are accessed by
their address.
Associative memory is used where the search time
must be very short; cache memory is used where a
particular group of data is accessed repeatedly.
The basic characteristic of associative memory is
its logic circuit for matching its content, while the
basic characteristic of cache memory is its fast
access.

86. What is “Locality of reference”?

The locality of reference refers to a phenomenon
in which a computer program tends to access the
same set of memory locations for a particular time
period. In other words, Locality of Reference
refers to the tendency of the computer program to
access instructions whose addresses are near
one another.

87. Write down the advantages of virtual memory?

A higher degree of multiprogramming.
Allocating memory is easy and cheap
Eliminates external fragmentation
Data (page frames) can be scattered all over
the PM
Pages are mapped appropriately anyway
Large programs can be written, as the virtual
space available is huge compared to physical
memory.
Less I/O required leads to faster and easy
swapping of processes.
More physical memory is available, as
programs are stored on virtual memory, so they
occupy very little space on actual physical
memory.
More efficient swapping

88. How to calculate performance in virtual memory?

The performance of a virtual
memory management system depends on the
total number of page faults, which depend on
“paging policies” and “frame allocation”.

Effective access time = (1 - p) x memory access
time + p x page fault time, where p is the page
fault rate.
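
A worked example with illustrative numbers (200 ns memory access time, 8 ms page-fault service time, p = 0.001), written as a small C calculation:

#include <stdio.h>

int main(void) {
    double mem_access = 200.0;         /* memory access time in nanoseconds */
    double fault_time = 8000000.0;     /* 8 ms page-fault time in nanoseconds */
    double p = 0.001;                  /* probability of a page fault */

    /* Effective access time = (1 - p) * memory access time + p * page fault time. */
    double eat = (1.0 - p) * mem_access + p * fault_time;
    printf("effective access time = %.1f ns\n", eat);  /* 8199.8 ns */
    return 0;
}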

89. Write down the basic concept of the file system?

A file is a collection of related information that is
recorded on secondary storage. Or file is a
collection of logically related entities. From the
user’s perspective, a file is the smallest allotment
of logical secondary storage.

90. Write the names of different operations on a file?

Operations on a file:

Create
Open
Read
Write
Rename
Delete
Append
Truncate
Close

91. Define the term Bit-Vector?

A Bitmap or Bit Vector is a series or collection of
bits where each bit corresponds to a disk block.
The bit can take two values: 0 and 1: 0 indicates
that the block is allocated and 1 indicates a free
block.

92. What is a File allocation table?

FAT stands for File Allocation Table and this is
called so because it allocates different files and
folders using tables. This was originally designed
to handle small file systems and disks. A file
allocation table (FAT) is a table that an operating
system maintains on a hard disk that provides a
map of the cluster (the basic units of logical
storage on a hard disk) that a file has been stored
in.

93. What is rotational latency?

Rotational Latency: Rotational latency is the time
taken by the desired sector of the disk to
rotate into a position where it can be reached by
the read/write heads. So the disk scheduling algorithm
that gives minimum rotational latency is better.

94. What is seek time?

Seek Time: Seek time is the time taken to move
the disk arm to a specified track where the data is
to be read or written. So the disk scheduling
algorithm that gives a minimum average seek time
is better.

Advanced OS Interview Questions

95. What is Belady’s Anomaly?

Bélády's anomaly is an anomaly of some page
replacement policies in which increasing the
number of page frames results in an increase in the
number of page faults. It occurs when the First In
First Out page replacement is used.
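
A minimal C sketch that simulates FIFO replacement on the classic reference string and exhibits the anomaly (9 faults with 3 frames, but 10 faults with 4 frames); the reference string is the standard textbook example:

#include <stdio.h>
#include <stdbool.h>

/* Count page faults for FIFO replacement with a given number of frames. */
static int fifo_faults(const int *refs, int n, int frames) {
    int frame[16], count = 0, next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < count; j++)
            if (frame[j] == refs[i]) { hit = true; break; }
        if (hit) continue;
        faults++;
        if (count < frames) frame[count++] = refs[i];               /* free frame */
        else { frame[next] = refs[i]; next = (next + 1) % frames; } /* evict oldest */
    }
    return faults;
}

int main(void) {
    /* The classic reference string that shows Bélády's anomaly under FIFO. */
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof refs / sizeof refs[0];

    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9 faults */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 faults */
    return 0;
}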

96. What happens if a non-recursive mutex is locked more than once?

Deadlock. If a thread that had already locked a
mutex, tries to lock the mutex again, it will enter
into the waiting list of that mutex, which results in
a deadlock. It is because no other thread can
unlock the mutex. An operating system
implementer can exercise care in identifying the
owner of the mutex and return it if it is already
locked by the same thread to prevent deadlocks.
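
A minimal C sketch: with a default non-recursive POSIX mutex the second lock would simply hang, so this example uses an error-checking mutex type so that the deadlock is reported instead of occurring; the printed message text depends on the platform. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    pthread_mutex_t m;
    pthread_mutexattr_t attr;

    /* PTHREAD_MUTEX_ERRORCHECK makes a relock by the owner return an
     * error (typically EDEADLK) instead of deadlocking. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);                 /* first lock succeeds */
    int rc = pthread_mutex_lock(&m);        /* second lock by the same thread */
    printf("second lock returned: %s\n", strerror(rc));

    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}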

97. What are the advantages of a multiprocessor system?

There are some main advantages of a
multiprocessor system:

Enhanced performance.
Multiple applications.
Multi-tasking inside an application.
High throughput and responsiveness.
Hardware sharing among CPUs.

98. What are real-time systems?

A real-time system means that the system is
subjected to real-time, i.e., the response should
be guaranteed within a specified timing constraint
or the system should meet the specified deadline.

99. How to recover from a deadlock?

We can recover from a deadlock by the following
methods:

Process termination
Abort all the deadlock processes
Abort one process at a time until the deadlock
is eliminated

Resource preemption
Rollback
Selecting a victim

100. What factors determine whether a detection algorithm must be utilized in a deadlock avoidance system?

One is that it depends on how often a deadlock is
likely to occur under the implementation of this
algorithm. The other has to do with how many
processes will be affected by deadlock when this
algorithm is applied.

101. Explain the resource allocation graph?

The resource allocation graph (RAG) shows the
state of the system in terms of processes and
resources. One of the advantages of having such a
diagram is that it is sometimes possible to spot a
deadlock directly by looking at the RAG.

Also check: Last Minute Notes – Operating Systems

We will soon be covering more Operating System
questions.

Conclusion
In conclusion, the field of operating systems is a
crucial aspect of computer science, and a
thorough understanding of its concepts is
essential for anyone looking to excel in this area.
By reviewing the top 100+ operating
system interview questions we have compiled,
you can gain a deeper understanding of the key
principles and concepts of OS and be better
prepared to tackle any interview questions that
may come your way. Remember to study and
practice regularly, and use these questions as a
starting point to delve deeper into the complex
world of operating systems. With dedication and
hard work, you can become an expert in this field
and succeed in any OS-related job or interview.
