
Operating Systems

Presentation
Group 2
Applications that benefit from
Threads
● Thread – a basic unit of CPU utilization, consisting of a
program counter, a stack, and a set of registers.
● Threads allow concurrent execution of multiple units of a
program, benefiting numerous applications across various domains.

● Applications that benefit from using threads:


Multithreaded web servers
- Web servers often handle multiple concurrent connections from
clients. By using threads, a web server can handle multiple requests
simultaneously, improving responsiveness and overall server
performance.
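A minimal Python sketch of this thread-per-request pattern, with a hypothetical handle_request function standing in for real socket handling (the function name and request IDs are illustrative, not from any real server):

```python
import threading

results = []
lock = threading.Lock()

def handle_request(request_id):
    # Hypothetical stand-in for servicing one client connection.
    response = f"response-to-{request_id}"
    with lock:  # protect the shared results list
        results.append(response)

# One thread per incoming request, so no client waits for another.
threads = [threading.Thread(target=handle_request, args=(i,))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Real servers typically use a thread pool rather than one thread per connection, to bound resource usage.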
GUI Applications
- Graphical User Interface (GUI) applications often require
responsiveness while performing tasks in the background. By using
threads, time-consuming operations like file processing or network
communication can be offloaded to separate threads, ensuring the
user interface remains responsive.
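A common sketch of this offloading pattern in Python: a worker thread does the slow work and hands the result back through a queue that the UI thread polls (the background_task function and its workload are illustrative assumptions):

```python
import queue
import threading

ui_queue = queue.Queue()

def background_task():
    # Stand-in for a slow operation such as file processing.
    result = sum(range(1000))
    ui_queue.put(result)  # hand the result back to the UI thread

worker = threading.Thread(target=background_task, daemon=True)
worker.start()
worker.join()

# A real GUI's event loop would poll the queue instead of blocking.
result = ui_queue.get()
```

Most GUI toolkits require all widget updates to happen on the main thread, which is why results are passed back via a queue rather than touched directly from the worker.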
Video Games
- Modern video games frequently utilize multithreading to achieve
parallelism and improve performance. Threads can be used to handle
tasks such as physics simulation, AI computations, rendering, and
audio processing concurrently, enhancing the gaming experience.
Data Processing and Analysis
- Applications that process and analyze large volumes of data,
such as scientific simulations or financial modeling software,
can leverage threads to divide the workload among multiple
threads. This parallelization can significantly reduce
computation time.
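A sketch of this divide-and-combine pattern using Python's standard thread pool (the chunk size and data here are arbitrary; in CPython, true CPU parallelism would also require releasing the GIL, e.g. via NumPy or multiprocessing):

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(100))
# Split the workload into chunks, one per worker.
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # Each chunk is summed by a worker thread.
    partial_sums = list(pool.map(sum, chunks))

total = sum(partial_sums)  # combine the partial results
```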
Background Tasks and Services
-Many applications require background tasks or services to run
concurrently with the main program. Threads can be used to
perform tasks like automatic updates, data synchronization, or
periodic maintenance without blocking the main program's
execution.
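A minimal sketch of such a background service in Python, using a daemon thread and an event for clean shutdown (the maintenance work itself is a placeholder):

```python
import threading
import time

stop = threading.Event()
ticks = []

def periodic_maintenance():
    # Runs alongside the main program; a real service might sync
    # data or apply updates here.
    while not stop.is_set():
        ticks.append("maintenance pass")
        stop.wait(0.01)  # sleep, but wake immediately if stopped

t = threading.Thread(target=periodic_maintenance, daemon=True)
t.start()
time.sleep(0.05)  # the main program keeps doing its own work here
stop.set()
t.join()
```

Using an Event rather than a bare sleep lets the main program stop the service promptly instead of waiting out a full sleep interval.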
Distributed Computing
-In distributed computing environments, threads are often used
to perform parallel processing across multiple machines. Each
machine can execute threads to process parts of a larger task.
Real-time Systems
-Real-time applications, such as industrial control systems or
robotics, often require precise timing and responsiveness.
Threads can be used to handle time-critical tasks, ensuring
timely execution and maintaining system responsiveness.

These are just a few examples of the diverse range of


applications that can benefit from using threads. Threads
enable parallelism, concurrency, and improved responsiveness,
making them a valuable tool in various software development
scenarios.
Applications that do not benefit
from Threads
Single-threaded applications
-These are programs that can only execute one task or
instruction at a time; they perform a single task at a time and
do not involve any concurrent processing.
- Adding threads to such applications would introduce overhead
without any performance improvement.

Examples of such applications are a calculator and a notepad.


I/O (input/output) bound applications
-Applications that primarily perform input/output (I/O)
operations, such as reading from or writing to files or network
sockets, may not see significant benefits from threading.
-In I/O-bound scenarios, the performance bottleneck is typically
the speed of input/output rather than the processing power.
-In such cases, using asynchronous I/O or event-driven
programming models may be more suitable than threading.
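A sketch of the event-driven alternative using Python's asyncio: many I/O waits overlap on a single thread, with no thread-per-request overhead (the fetch coroutine is a placeholder; a real client would await a socket read instead of sleep):

```python
import asyncio

async def fetch(name):
    # Simulated I/O wait standing in for a real network read.
    await asyncio.sleep(0.01)
    return f"{name}: done"

async def main():
    # All three waits overlap on one thread.
    return await asyncio.gather(*(fetch(f"req{i}") for i in range(3)))

results = asyncio.run(main())
```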
Applications with shared resources
-Applications that access shared resources, such as global
variables or common data structures, need to be carefully
synchronized to avoid data corruption.
-Adding threads to such applications can increase the
complexity of synchronization and make the application more
prone to bugs.
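The classic example of the synchronization this requires is a shared counter: without a lock, concurrent `counter += 1` updates can be lost. A minimal Python sketch:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # without the lock, updates could be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every extra shared structure needs the same care, which is the complexity cost the bullet above describes.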
Applications with resource constraints
-Some applications operate under resource constraints, such as
limited memory or few CPU cores. In such cases, introducing
multiple threads might lead to contention and reduce overall
performance due to frequent context switching and competition
for resources.
Resources used in Thread creation
and Process creation
● Generally, a process is a program in execution.

● When a process is created, it requires resources such as CPU
time, memory, files, and I/O devices to accomplish its tasks.
The operating system assigns a unique Process Identifier
(PID) to the process and allocates memory space for all the
elements of the process, such as program, data, and stack,
including space for its Process Control Block (PCB).

● Note: A PID is a unique number that identifies each running
process in an operating system, such as Linux, Unix,
macOS, or Microsoft Windows.
● A Process Control Block (PCB) is a data structure used by an
operating system to manage and regulate how processes
are carried out. It contains all the details of the corresponding
process, such as its current status, program counter,
memory use, open files, and CPU-scheduling details.

● With the creation of a process, a PCB is created which
controls how that process is carried out. The PCB is
created to help the OS manage the enormous number of
tasks being carried out in the system.

● The PCB helps the OS do that by actively monitoring the
process and redirecting system resources to each process
accordingly.
● The OS creates a PCB for every process that is created, and
it contains all the important information about the process.
All this information is afterward used by the OS to manage
processes and run them efficiently.
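The PCB fields listed above can be sketched as a simple data structure. This is an illustrative model only; real kernels (e.g. Linux's `task_struct`) hold far more state:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                  # unique process identifier
    state: str = "NEW"        # current status (NEW, READY, RUNNING, ...)
    program_counter: int = 0  # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    open_files: list = field(default_factory=list) # open file handles
    priority: int = 0         # used by CPU scheduling

# The OS would create one PCB per process at creation time.
pcb = PCB(pid=1234)
pcb.state = "RUNNING"
```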

● Process creation takes a lot of CPU time, so a process is
called a heavyweight process.

● On the other hand, when a thread is created, it is part of
the process and shares the memory space of the process
from which it was created.

● Therefore, little additional resource overhead is incurred when
a thread is created; a thread is instead called a lightweight
process.

● Creation of a thread is cheap, whereas process creation is
expensive.
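The shared memory space is easy to demonstrate: a list created by the main thread is directly visible to a worker thread, with no copying or message passing (a minimal Python sketch):

```python
import threading

shared = []  # lives in the process's address space, visible to all threads

def worker():
    # The thread writes directly into the same memory
    # that the main thread reads.
    shared.append("written by worker thread")

t = threading.Thread(target=worker)
t.start()
t.join()
```

Separate processes, by contrast, would each get their own copy of `shared` and need pipes, sockets, or shared-memory segments to communicate.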
Usage of CPU time when a process
is created
● When a process is created, its CPU time can be used to
quantify the overall empirical efficiency of two functionally
identical algorithms. For example, any sorting algorithm takes an
unsorted list and returns a sorted list, and will do so in a
deterministic number of steps for a given input list.
However, bubble sort and merge sort have different
running-time complexity, such that merge sort tends to
complete in fewer steps. Without any knowledge of the
workings of either algorithm, the greater CPU time of bubble
sort shows it is less efficient than merge sort for particular
input data.
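This comparison can be sketched with Python's `time.process_time`, which measures CPU time rather than wall-clock time. As an assumption, the built-in `sorted` (Timsort, a merge-sort hybrid) stands in for merge sort here:

```python
import random
import time

def bubble_sort(a):
    # O(n^2) comparison sort, shown for the CPU-time comparison.
    a = a[:]
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = random.sample(range(10_000), 1_000)

t0 = time.process_time()
b = bubble_sort(data)
bubble_cpu = time.process_time() - t0

t0 = time.process_time()
m = sorted(data)  # Timsort stands in for merge sort
merge_cpu = time.process_time() - t0
```

Both produce the same sorted list, but the bubble sort consumes far more CPU time at this input size, which is exactly the empirical signal described above.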
Usage of memory when a process
is created
When a process is created, memory is allocated to the process by the operating
system. The amount of memory allocated to a process depends on the operating system
and the process itself. The memory allocation is done in two ways: virtual
memory and physical memory.
● Virtual memory is a memory management technique that allows a computer to use more
memory than it physically has available. It does this by temporarily transferring data from
the RAM to the hard disk. When a process is created, the operating system allocates a
block of virtual memory to the process. This block of memory is divided into pages, which
are then mapped to physical memory. The operating system uses a page table to keep
track of which pages are mapped to physical memory.
● Physical memory is the actual memory chips installed in the computer. When a process is
created, the operating system allocates a portion of the physical memory to the
process. The amount of physical memory allocated to a process depends on the operating
system and the process itself. The operating system uses a memory manager to keep track
of which portions of physical memory are allocated to which processes.
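The page-table mapping described above can be sketched as a simple address translation: split the virtual address into a page number and an offset, look the page up, and rebuild the physical address. The page size and table entries here are illustrative assumptions:

```python
PAGE_SIZE = 4096  # bytes per page; 4 KiB is a common choice

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_address):
    # Split into virtual page number and offset within the page.
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[vpn]  # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

physical = translate(4100)  # page 1, offset 4 -> frame 3
```

Real hardware performs this lookup in the MMU, usually through a translation lookaside buffer (TLB) that caches recent translations.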
Action of kernel to context switch
among processes
"Kernel to context switch" here refers to the actions and
operations performed by the operating system's kernel to
manage and switch between different processes. Here's an
overview of the key actions involved:

1. Context Switching: A context switch is the process of saving


the current state of a running process and restoring the
saved state of another process, allowing multiple processes
to run concurrently on a single CPU. The kernel is
responsible for managing context switches.
Context Switch
● Interrupts cause the operating system to change a CPU from
its current task and run a kernel routine.
● When a process is being executed, the CPU is assigned to
that particular process.
● When an interrupt occurs, the CPU has to be reassigned to
whatever caused the interrupt so that it is handled first;
afterwards the CPU is reassigned to the process that was
previously being executed. This usually occurs on
general-purpose systems.
● General-purpose systems are systems designed to run a wide
variety of workloads rather than a single dedicated task.
● When an interrupt occurs, the system needs to save the
current context of the process running on the CPU so that it
can restore that context once the interrupt has been handled.
● Switching the CPU to another process requires performing a
state save of the current process and a state restore of a
different process. This task is known as a context switch.

● Context-switch times are highly dependent on hardware
support. For example, some processors provide multiple sets
of registers, so a context switch only requires changing the
pointer to the current register set.

● When a context switch occurs, the kernel saves the context
of the old process in its PCB and loads the saved context of
the new process scheduled to run. In simpler terms, when
an interrupt occurs, the system saves the current context of
the process running on the CPU so that it can restore that
context when the interrupt has been handled, essentially
suspending the process and then resuming it.
Steps performed by operating
system during context switch
In general, context switching can be broken down into
the following steps:
1. Saving the current context. This includes the current
program counter, the register values, and the state of the
device.
2. Loading the new context. This includes loading the
program counter, register values, and device state.
3. Resuming execution. This includes continuing the
execution of the new context. Context switching can also
include tasks such as updating the operating system's
internal data structures to reflect the change in context.
It may also include updating the scheduler to schedule
the new context appropriately.
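The three steps above can be sketched in miniature, with dicts standing in for the PCBs and the CPU state (all names and values here are illustrative, not from any real kernel):

```python
def context_switch(current_pcb, cpu, next_pcb):
    # 1. Save the current context into the old process's PCB.
    current_pcb["program_counter"] = cpu["pc"]
    current_pcb["registers"] = dict(cpu["regs"])
    # 2. Load the new context from the next process's PCB.
    cpu["pc"] = next_pcb["program_counter"]
    cpu["regs"] = dict(next_pcb["registers"])
    # 3. Resume execution (here, just report which process now runs).
    return next_pcb["pid"]

cpu = {"pc": 100, "regs": {"r0": 1}}
p1 = {"pid": 1, "program_counter": 0, "registers": {}}
p2 = {"pid": 2, "program_counter": 200, "registers": {"r0": 9}}

running = context_switch(p1, cpu, p2)  # switch from p1 to p2
```

After the switch, p1's PCB holds the state the CPU had (so p1 can resume later), and the CPU carries p2's saved state.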
2. Interrupt Handling: When an interrupt or exception occurs
(e.g., a hardware interrupt, system call, or timer interrupt),
the kernel takes control of the CPU. The kernel then
determines which process should run next and performs a
context switch to that process.

3. Process Scheduling: The kernel maintains a scheduler that


decides which process should be executed next based on
scheduling policies (e.g., round-robin, priority-based). It
selects the next process to run and initiates a context switch
to switch from the current process to the chosen one.

4. Saving and Restoring Process State: The kernel saves the
state of the current process, which includes its CPU
registers, program counter, and other relevant data. This
state is stored in a data structure known as the Process
Control Block (PCB). When switching to a new process, the
kernel restores that process's saved state from its PCB.
5. Memory Management: The kernel manages memory
allocation and deallocation for processes. It ensures that
each process has the necessary memory space to execute
and that memory protection mechanisms are in place to
prevent one process from interfering with another.

6. I/O Operations: The kernel handles I/O operations requested


by processes, such as reading from files, writing to devices,
or network communication. It queues and manages these
operations, ensuring they are executed efficiently.

7. Resource Management: The kernel manages various system


resources, including CPU time, memory, and I/O devices, to
ensure fair and efficient resource allocation among
processes.
8. Security and Isolation: The kernel enforces security and
isolation between processes. It prevents unauthorized
access to memory regions and system resources and
ensures that one process cannot interfere with or harm
another.

9. Error Handling: The kernel is responsible for detecting and


handling errors within processes. It can terminate or isolate
misbehaving processes to prevent them from disrupting the
entire system.

In summary, the kernel plays a crucial role in managing and
controlling processes in an operating system. It orchestrates
context switches, ensuring that each process receives its
fair share of CPU time and system resources while maintaining
isolation and security. These actions are essential for a
stable and efficient system.
End of Presentation
