Extra_os_questions Reference
Processes can be in one of three states: running, ready, or waiting. The running state means that the
process has all the resources it needs for execution and it has been given permission by the
operating system to use the processor. Only one process can be in the running state at any given
time. The remaining processes are either in a waiting state (i.e., waiting for some external event to
occur such as user input or disk access) or a ready state (i.e., waiting for permission to use the
processor). In a real operating system, the waiting and ready states are implemented as queues that
hold the processes in these states.
3. What is a Thread?
A thread is a single sequence stream within a process. Because threads have some of the properties
of processes, they are sometimes called lightweight processes. Threads are a popular way to improve
application performance through parallelism. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs, and so on.
A thread has its own program counter (PC), register set, and stack space. Unlike processes, threads are not
independent of one another; as a result, threads share with other threads their code
section, data section, and OS resources such as open files and signals.
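As a small illustration of this sharing (a Python sketch; the worker function and variable names are invented for the example), each thread gets its own stack for local variables, while every thread in the process sees the same shared data:

```python
import threading

shared = []              # data section: visible to every thread in the process

def worker(name):
    # Each thread has its own stack (local variables) but shares `shared`.
    local_greeting = f"hello from {name}"   # lives on this thread's stack
    shared.append(local_greeting)

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # ['hello from t0', 'hello from t1', 'hello from t2']
```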
6. What is Thrashing?
Thrashing is a situation in which the performance of a computer degrades or collapses. Thrashing
occurs when a system spends more time servicing page faults than executing transactions. While
servicing page faults is necessary to gain the benefits of virtual memory, thrashing
has a negative effect on the system. As the page fault rate increases, more transactions need
service from the paging device, so the queue at the paging device grows, resulting in increased
service time for a page fault.
Deadlock. If a thread that has already locked a mutex tries to lock the mutex again, it will enter
the waiting list of that mutex, which results in a deadlock, because no other thread can unlock
the mutex. An operating system implementer can prevent such deadlocks by identifying the owner of the mutex
and returning immediately if the mutex is already locked by the same thread.
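One common realization of this ownership check is a recursive (reentrant) mutex. A minimal Python sketch using the standard library's threading.RLock (the outer/inner functions are made up for the example):

```python
import threading

# A plain threading.Lock would deadlock if the same thread acquired it twice.
# RLock records the owning thread and a recursion count, so re-acquisition
# by the owner succeeds instead of blocking forever.
rlock = threading.RLock()

def outer():
    with rlock:          # first acquisition by this thread
        return inner()

def inner():
    with rlock:          # re-acquisition by the same thread: no deadlock
        return "ok"

print(outer())  # ok
```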
An operating system acts as an intermediary between the user of a computer and computer
hardware. The purpose of an operating system is to provide an environment in which a user can
execute programs conveniently and efficiently.
An operating system is software that manages computer hardware. The hardware must provide
appropriate mechanisms to ensure the correct operation of the computer system and to prevent
user programs from interfering with the proper operation of the system.
The process of loading the page into memory on demand (whenever page fault occurs) is known as
demand paging.
Enhanced performance.
Multiple applications.
A kernel is the central component of an operating system: it manages the operations of the computer
and its hardware, basically memory and CPU time. The kernel acts as a bridge between applications and
the data processing performed at the hardware level, using inter-process communication and system calls.
A real-time system means that the system is subjected to real-time, i.e., the response should be
guaranteed within a specified timing constraint or the system should meet the specified deadline.
Virtual memory creates an illusion that each user has one or more contiguous address spaces, each
beginning at address zero. The sizes of such virtual address spaces are generally very high. The idea
of virtual memory is to use disk space to extend the RAM. Running processes don’t need to care
whether the memory is from RAM or disk. The illusion of such a large amount of memory is created
by subdividing the virtual memory into smaller pieces, which can be loaded into physical memory
whenever they are needed by a process.
Multi-programming increases CPU utilization by organizing jobs (code and data) so that the CPU
always has one to execute. The main objective of multi-programming is to keep multiple jobs in
main memory: if one job becomes occupied with I/O, the CPU can be assigned to another job.
Time-sharing is a logical extension of multiprogramming. The CPU switches among many tasks so
frequently that users can interact with each program while it is running. A time-shared
operating system allows multiple users to share a computer simultaneously.
A thread is a path of execution within a process. A process can contain multiple threads.
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process
into multiple threads. Threads within the same process run in a shared memory space.
FCFS stands for First Come First Serve. In the FCFS scheduling algorithm, the job that arrived first in
the ready queue is allocated the CPU, then the job that came second, and so on. FCFS is a non-
preemptive scheduling algorithm: a process holds the CPU until it either terminates or
performs I/O. Thus, if a longer job has been assigned the CPU, the many shorter jobs after it will
have to wait.
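The effect of a long first job on the jobs behind it (the convoy effect) can be seen by computing FCFS waiting times. A small sketch, with illustrative burst times:

```python
def fcfs(burst_times):
    """Return per-process waiting times under First Come First Serve.

    Processes are assumed to arrive at time 0 in list order; each one
    holds the CPU until it finishes (non-preemptive).
    """
    waiting, elapsed = [], 0
    for burst in burst_times:
        waiting.append(elapsed)   # a job waits for everything before it
        elapsed += burst
    return waiting

# A long first job (24) makes the shorter jobs (3, 3) wait behind it.
print(fcfs([24, 3, 3]))  # [0, 24, 27]
```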
22. What is the RR scheduling algorithm?
A round-robin scheduling algorithm schedules processes fairly by giving each job a time slot, or
quantum, and preempting the job if it has not completed by the time the quantum expires. The
preempted job is placed at the back of the ready queue, behind jobs that arrived during its
quantum, which makes this scheduling fair.
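The quantum-and-requeue cycle can be simulated in a few lines (a Python sketch; the burst times and quantum below are invented for illustration):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling; return each process's completion time."""
    queue = deque(enumerate(burst_times))   # (pid, remaining burst), FIFO ready queue
    completion = [0] * len(burst_times)
    clock = 0
    while queue:
        pid, rem = queue.popleft()
        run = min(quantum, rem)             # run for one quantum, or less if finishing
        clock += run
        if rem > run:
            queue.append((pid, rem - run))  # unfinished: back of the ready queue
        else:
            completion[pid] = clock
    return completion

print(round_robin([24, 3, 3], quantum=4))  # [30, 7, 10]
```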
23. What are the necessary conditions which can lead to a deadlock in a system?
The four necessary (Coffman) conditions are: mutual exclusion (at least one resource is held in a
non-sharable mode), hold and wait (a process holds at least one resource while waiting for
another), no preemption (a resource can be released only voluntarily by the process holding it),
and circular wait (a circular chain of processes exists in which each process waits for a resource
held by the next).
A redundant array of independent disks is a set of several physical disk drives that the operating
system sees as a single logical unit. It played a significant role in narrowing the gap between
increasingly fast processors and slow disk drives. RAID has different levels:
Level-0
Level-1
Level-2
Level-3
Level-4
Level-5
Level-6
The banker’s algorithm is a resource allocation and deadlock avoidance algorithm. It tests for
safety by simulating the allocation of the predetermined maximum possible amounts of all resources,
then performs a safe-state check to test for possible activities, before deciding whether allocation
should be allowed to continue.
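The safe-state check can be sketched as follows (Python; the five-process, three-resource data at the bottom is illustrative, not from this document):

```python
def is_safe(available, max_need, allocation):
    """Banker's algorithm safety check.

    Simulates granting each process its maximum demand; returns a safe
    sequence of process indices if one exists, else None.
    """
    n = len(allocation)
    work = list(available)
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases everything.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None          # no runnable process remains: the state is unsafe
    return sequence

# Illustrative 5-process, 3-resource-type instance.
available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # [1, 3, 0, 2, 4]
```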
26. What factors determine whether a detection algorithm must be utilized in a deadlock avoidance
system?
One factor is how often a deadlock is likely to occur under the implementation of this
algorithm. The other is how many processes will be affected by a deadlock when this
algorithm is applied.
27. State the main difference between logical and physical address space?
Address Space: Logical address space is the set of all logical addresses generated by the CPU in
reference to a program; physical address space is the set of all physical addresses mapped to the
corresponding logical addresses.
Visibility: Users can view the logical address of a program; users can never view the physical
address of the program.
Access: The user can use the logical address to access the physical address; the physical address
can be accessed only indirectly, not directly.
28. How does dynamic loading aid in better memory space utilization?
With dynamic loading, a routine is not loaded until it is called. This method is especially useful when
large amounts of code are needed in order to handle infrequently occurring cases such as error
routines.
The concept of overlays is that a running process does not use the complete program at the same
time; it uses only some part of it. The overlay technique loads only the part that is currently
required; once that part is done, it is unloaded, and the next required part is loaded and run.
Formally, overlaying is “the process of transferring a block of program code or other data into
internal memory, replacing what is already stored”.
As processes are loaded into and removed from memory, the free memory space is broken into pieces
that are too small to be used by other processes. When a process cannot be allocated to these small
free blocks, and the blocks therefore remain perpetually unused, the condition is called
fragmentation. This issue arises in a dynamic memory allocation system when the free blocks are so
small that they cannot satisfy any request.
Paging is a technique used for non-contiguous memory allocation. It is a fixed-size partitioning
scheme: both main memory and secondary memory are divided into equal fixed-size partitions. The
partitions of secondary memory are called pages, and the partitions of main memory are called
frames.
Paging is a memory management method used to fetch processes from secondary memory into main
memory in the form of pages. Each process is split into parts, where the size of each part equals
the page size; the size of the last part may be less than the page size. The pages of a process are
stored in the frames of main memory depending on their availability.
Bounded-buffer
Readers-writers
Dining philosophers
Sleeping barber
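The bounded-buffer (producer-consumer) problem above has a ready-made solution in Python's standard library, which this sketch uses for brevity (the item values and buffer size are invented):

```python
import queue
import threading

# queue.Queue is a bounded buffer: put() blocks when the buffer is full and
# get() blocks when it is empty, so producer and consumer never overrun
# each other.
buf = queue.Queue(maxsize=2)
results = []

def producer():
    for item in range(5):
        buf.put(item)        # blocks if the buffer already holds 2 items
    buf.put(None)            # sentinel: tells the consumer to stop

def consumer():
    while True:
        item = buf.get()
        if item is None:
            break
        results.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(results)  # [0, 1, 2, 3, 4]
```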
The direct access method is based on a disk model of a file, such that the file is viewed as a
numbered sequence of blocks or records. It allows arbitrary blocks to be read or written. Direct
access is advantageous when accessing large amounts of information.
Direct memory access (DMA) is a method that allows an input/output (I/O) device to send or receive
data directly to or from main memory, bypassing the CPU to speed up memory operations. The process
is managed by a chip known as a DMA controller (DMAC).
Thrashing occurs when processes on the system frequently access pages that are not available in memory.
36. What is the best page size when designing an operating system?
The best paging size varies from system to system, so there is no single best when it comes to page
size. There are different factors to consider in order to come up with a suitable page size, such as
page table, paging time, and its effect on the overall efficiency of the operating system.
The cache is a smaller and faster memory that stores copies of the data from frequently used main
memory locations. There are various different independent caches in a CPU, which store instructions
and data. Cache memory is used to reduce the average time to access data from the Main memory.
39. What is spooling?
Spooling stands for simultaneous peripheral operations online. It refers to putting jobs in a
buffer, a special area in memory or on a disk, where a device can access them when it is ready.
Spooling is useful because devices access data at different rates.
The Assembler is used to translate the program written in Assembly language into machine code. The
source program is an input of an assembler that contains assembly language instructions. The output
generated by the assembler is the object code or machine code understandable by the computer.
An interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention. It alerts the processor to a high-priority process requiring interruption of the
current working process. For I/O devices, one of the bus control lines, called the interrupt
request line, is dedicated to this purpose; the routine that services an interrupt is called the
Interrupt Service Routine (ISR).
GUI is short for Graphical User Interface. It provides users with an interface wherein actions can be
performed by interacting with icons and graphical symbols.
Preemptive multitasking is a type of multitasking that allows computer programs to share operating
systems (OS) and underlying hardware resources. It divides the overall operating and computing time
between processes, and the switching of resources between different processes occurs through
predefined criteria.
A Pipe is a technique used for inter-process communication. A pipe is a mechanism by which the
output of one process is directed into the input of another process. Thus it provides a one-way flow
of data between two related processes.
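A minimal sketch of this one-way flow using the standard library's os.pipe (shown within a single process for brevity; in practice the descriptor pair is typically shared between related processes across a fork):

```python
import os

# os.pipe() returns a (read_fd, write_fd) pair: bytes written to write_fd
# come out of read_fd, giving a one-way channel.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through the pipe")
os.close(write_fd)                    # closing signals end-of-data to readers

message = os.read(read_fd, 1024)
os.close(read_fd)
print(message.decode())  # hello through the pipe
```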
Easy to implement.
Bootstrapping is the process of loading a set of instructions when a computer is first turned on or
booted. During the startup process, diagnostic tests are performed, such as the power-on self-test
(POST), that set or check configurations for devices and implement routine testing for the connection
of peripherals, hardware, and external memory devices. The bootloader or bootstrap program is
then loaded to initialize the OS.
Inter-process communication (IPC) is a mechanism that allows processes to communicate with each
other and synchronize their actions. The communication between these processes can be seen as a
method of co-operation between them.
Message Queuing –
This allows messages to be passed between processes using either a single queue or several message
queues. The queues are managed by the system kernel, and the messages are coordinated using an API.
Semaphores –
These are used to solve synchronization problems and to avoid race conditions. They are integer
values that are greater than or equal to 0.
Shared memory –
This allows the interchange of data through a defined area of memory. A semaphore is typically
acquired before a process accesses the shared memory.
Sockets –
This method is mostly used to communicate over a network between a client and a server. It allows
for a standard connection that is computer- and OS-independent.
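As a sketch of the semaphore mechanism listed above (Python; the worker function, the limit of 2, and the bookkeeping variables are invented for the example), a counting semaphore admits only a fixed number of threads into a guarded region at once:

```python
import threading
import time

# A counting semaphore initialized to 2 admits at most two threads into the
# guarded region at once; the rest block in acquire() until a release().
sem = threading.Semaphore(2)
active = 0          # number of threads currently inside the region
peak = 0            # highest concurrency observed
state_lock = threading.Lock()

def worker():
    global active, peak
    with sem:                        # acquire: blocks once two threads are inside
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)             # simulate some work inside the region
        with state_lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 2
```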
In preemptive scheduling, the CPU is allocated to a process for a limited time, whereas in non-
preemptive scheduling, the CPU is allocated to a process until it terminates or switches to the
waiting state.
The executing process in preemptive scheduling is interrupted in the middle of execution when a
higher-priority process arrives, whereas the executing process in non-preemptive scheduling is not
interrupted in the middle of execution and runs until its execution completes.
In preemptive scheduling, there is the overhead of switching a process between the ready and
running states and of maintaining the ready queue, whereas non-preemptive scheduling has no such
switching overhead.
In preemptive scheduling, if a high-priority process frequently arrives in the ready queue then the
process with low priority has to wait for a long, and it may have to starve. On the other hand, in the
non-preemptive scheduling, if CPU is allocated to the process having a larger burst time then the
processes with small burst time may have to starve.
Preemptive scheduling attains flexibility by allowing the critical processes to access the CPU as they
arrive into the ready queue, no matter what process is executing currently. Non-preemptive
scheduling is called rigid as even if a critical process enters the ready queue the process running CPU
is not disturbed.
Preemptive scheduling has to maintain the integrity of shared data, which is why it carries a cost
that non-preemptive scheduling does not.
A process that has finished execution but still has an entry in the process table, so that it can
report its exit status to its parent process, is known as a zombie process. A child process always
first becomes a zombie before being removed from the process table. The parent process reads the
exit status of the child process, which removes the child's entry from the process table.
A process whose parent process no longer exists, i.e., has either finished or terminated without
waiting for its child process to terminate, is called an orphan process.
Starvation: Starvation is a resource management problem where a process does not get the
resources it needs for a long time because the resources are being allocated to other processes.
Aging: Aging is a technique to avoid starvation in a scheduling system. It works by adding an aging
factor to the priority of each request. The aging factor increases the priority of a request as
time passes, ensuring that every request eventually becomes the highest-priority request.
Apart from the microkernel, the monolithic kernel is another classification of kernel. Like a
microkernel, it manages system resources between applications and hardware, but user services and
kernel services are implemented in the same address space. This increases the size of the kernel,
and thus the size of the operating system as well. A monolithic kernel provides CPU scheduling,
memory management, file management, and other operating system functions through system calls.
Because both kinds of services run in the same address space, operating system execution is faster.
Switching the CPU to another process means saving the state of the old process and loading the
saved state of the new process. In context switching, the state of the old process is stored in its
Process Control Block so that, after the new process is served, the old process can be resumed from
the point where it left off.
55. What is the difference between the Operating system and kernel?
The operating system provides the interface between the user and the hardware, whereas the kernel
provides the interface between applications and the hardware.
The operating system is the first program to load when the computer boots up, whereas the kernel is
the first program to load when the operating system loads.
Process vs. Thread
7. A process has its own Process Control Block, stack, and address space. A thread has its
parent's PCB, its own Thread Control Block and stack, and shares the address space in common with
other threads.
The process control block (PCB) is a data structure used to track a process's execution status. A
PCB contains information about the process, i.e., registers, quantum, priority, etc. The process
table is an array of PCBs; it logically contains a PCB for every current process in the system.
The set of dispatchable processes is in a safe state if there exists at least one temporal order in which
all processes can be run to completion without resulting in a deadlock.
Cycle stealing is a method of accessing computer memory (RAM) or the bus without interfering with
the CPU. It is similar to direct memory access (DMA) in allowing I/O controllers to read or write
RAM without CPU intervention.
A trap is a software interrupt, usually the result of an error condition; it is also a non-maskable
interrupt and has the highest priority. A trapdoor is a secret, undocumented entry point into a
program, used to grant access without the normal methods of access authentication.
Program vs. Process
5. A program does not have any resource requirement; it only requires memory space for storing its
instructions. A process has a high resource requirement; it needs resources like CPU, memory
address space, and I/O during its lifetime.
62.What is a dispatcher?
The dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler. This function involves the following:
Switching context
Jumping to the proper location in the user program to restart that program
A Dispatch latency can be described as the amount of time it takes for a system to respond to a
request for a process to begin operation. With a scheduler written specifically to honor application
priorities, real-time applications can be developed with a bounded dispatch latency.
Max throughput [Number of processes that complete their execution per time unit]
When more than one process accesses the same code segment, that segment is known as the critical
section. The critical section contains shared variables or resources that need to be synchronized
to maintain the consistency of data. In simple terms, a critical section is a group of
instructions/statements or a region of code that needs to be executed atomically, such as code
accessing a shared resource (file, input or output port, global data, etc.).
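A minimal sketch of guarding a critical section with a mutex (Python; the counter and thread counts are invented for the example). The read-modify-write on the shared counter is the critical section; the lock makes it atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        # The read-modify-write below is the critical section: without the
        # lock, two threads could read the same value and lose an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```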
Condition variables
Semaphores
File locks
User threads are implemented by users; kernel threads are implemented by the OS.
User-level threads are designed as dependent threads, while kernel-level threads are designed as
independent threads.
Improved throughput. Many concurrent compute operations and I/O requests within a single
process.
Simultaneous and fully symmetric use of multiple processors for computation and I/O.
Superior application responsiveness. If a request can be launched on its own thread, applications do
not freeze or show the “hourglass”. An entire application will not block or otherwise wait, pending
the completion of another request.
Improved server responsiveness. Large or complex requests or slow clients don’t block other
requests for service. The overall throughput of the server is much greater.
Minimized system resource usage. Threads impose minimal impact on system resources. Threads
require less overhead to create, maintain, and manage than a traditional process.
Program structure simplification. Threads can be used to simplify the structure of complex
applications, such as server-class and multimedia applications. Simple routines can be written for
each activity, making complex programs easier to design and code, and more adaptive to a wide
variation in user demands.
Better communication. Thread synchronization functions can be used to provide enhanced process-
to-process communication. In addition, sharing large amounts of data through separate threads of
execution within the same address space provides extremely high-bandwidth, low-latency
communication between separate tasks within an application
The programmer has to keep track of all calls to wait and to signal the semaphore.
With improper use, a process may block indefinitely. Such a situation is called Deadlock.
Software solutions
Hardware solutions
Semaphores
A state in which one process exists simultaneously with another process is called concurrency;
such processes are said to be concurrent.
Additional performance overheads and complexities in operating systems are required for switching
among applications.
Sometimes running too many applications concurrently leads to severely degraded performance.
Non-atomic –
Operations that are non-atomic but interruptible by multiple processes can cause problems.
Race conditions –
A race condition occurs when the outcome depends on which of several processes reaches a point first.
Blocking –
Processes can block waiting for resources. A process could be blocked for a long period of time
waiting for input from a terminal; if the process is required to periodically update some data,
this would be very undesirable.
Starvation –
Starvation occurs when a process does not obtain the service it needs to progress.
Deadlock –
Deadlock occurs when two processes are blocked and hence neither can proceed to execute.
A directed edge from node A to node B shows that statement A executes first and then statement B
executes.
A resource allocation graph (RAG) describes the state of the system in terms of processes
and resources. One advantage of having such a diagram is that it is sometimes possible to see a
deadlock directly by inspecting the RAG.
Process termination
Resource preemption
Rollback
Selecting a victim
Relocation
Protection
Sharing
Logical organization
Physical organization
3. Visibility: The user can view the logical address of a program but can never view the physical
address of the program.
4. Access: The user uses the logical address to access the physical address; the physical address
cannot be accessed directly.
The Association of program instruction and data to the actual physical memory locations is called the
Address Binding.
When we do not know how much amount of memory would be needed for the program beforehand.
When we want data structures without any upper limit of memory space.
With dynamically created lists, insertions and deletions can be done very easily just by
manipulating addresses, whereas with statically allocated memory, insertions and deletions lead to
more data movement and wastage of memory.
When you want to use structures and linked lists in programming, dynamic memory allocation is a
must.
Internal vs. External Fragmentation
5. The difference between the memory allocated and the space required is called internal
fragmentation. The unused spaces formed between non-contiguous memory fragments, which are too
small to serve a new process, are called external fragmentation.
Compaction is the process of collecting fragments of available memory space into contiguous blocks
by moving programs and data in a computer's memory or disk.
88. Write about the advantages and disadvantages of a hashed page table?
Advantages
In many situations, hash tables turn out to be more efficient than search trees or any other table
lookup structure. For this reason, they are widely used in many kinds of computer software,
particularly for associative arrays, database indexing, caches, and sets.
Disadvantages
Hash collisions are practically unavoidable when hashing a random subset of a large set of possible
keys.
Hash tables become quite inefficient when there are many collisions.
A hash table does not allow null values, unlike a hash map.
89. Write a difference between paging and segmentation?
3. Page size is determined by the hardware, whereas the section (segment) size is given by the user.
6. In paging, the logical address is split into a page number and a page offset; in segmentation,
the logical address is split into a section number and a section offset.
7. Paging uses a page table, which encloses the base address of every page, while segmentation uses
a segment table, which encloses the segment number and segment offset.
Associative Memory vs. Cache Memory
1. A memory unit accessed by content is called associative memory; a fast and small memory is
called cache memory.
3. In associative memory, data is accessed by its content; in cache memory, data is accessed by its
address.
The locality of reference refers to a phenomenon in which a computer program tends to access the
same set of memory locations for a particular time period. In other words, Locality of
Reference refers to the tendency of the computer program to access instructions whose addresses
are near one another.
More physical memory is effectively available: because programs are stored in virtual memory, they
occupy less space in actual physical memory.
The performance of a virtual memory management system depends on the total number of page faults,
which in turn depends on the paging policies and frame allocation.
Effective access time = (1 - p) x memory access time + p x page fault service time, where p is the
page fault rate.
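The formula above can be worked through with illustrative numbers (the 200 ns memory access and 8 ms fault service time below are assumed values, not from this document). Note how even a tiny fault rate dominates the average:

```python
def effective_access_time(p, memory_ns, fault_service_ns):
    """Effective access time = (1 - p) * memory access + p * page fault service."""
    return (1 - p) * memory_ns + p * fault_service_ns

memory_ns = 200            # assumed memory access time, in nanoseconds
fault_ns = 8_000_000       # assumed page fault service time (8 ms), in nanoseconds

print(effective_access_time(0.0, memory_ns, fault_ns))    # 200.0 (no faults)
print(effective_access_time(0.001, memory_ns, fault_ns))  # 8199.8 (1 fault per 1000)
```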
Operation on file:
Create
Open
Read
Write
Rename
Delete
Append
Truncate
Close
A Bitmap or Bit Vector is a series or collection of bits where each bit corresponds to a disk block. The
bit can take two values: 0 and 1: 0 indicates that the block is allocated and 1 indicates a free block.
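A small sketch of free-block lookup in such a bitmap, following the convention above (0 = allocated, 1 = free); the 8-block bitmap contents are invented for the example:

```python
# Free-space bitmap: one bit per disk block, 0 = allocated, 1 = free.
bitmap = [1, 0, 0, 1, 1, 0, 1, 1]   # 8 disk blocks

def first_free_block(bits):
    """Return the index of the first free block, or -1 if none is free."""
    for i, bit in enumerate(bits):
        if bit == 1:
            return i
    return -1

block = first_free_block(bitmap)
bitmap[block] = 0        # mark the chosen block as allocated
print(block, bitmap)     # 0 [0, 0, 0, 1, 1, 0, 1, 1]
```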
FAT stands for File Allocation Table, called so because it allocates different files and folders
using tables. It was originally designed to handle small file systems and disks. A file allocation
table is a table that the operating system maintains on a hard disk, providing a map of the
clusters (the basic units of logical storage on a hard disk) in which a file has been stored.
Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data
is to be read or written. The disk scheduling algorithm that gives the minimum average seek time is
better.
A buffer is a memory area that stores data being transferred between two devices or between a
device and an application.