
CSEN1101

OPERATING
SYSTEMS(UNIT 2)
Dr.SYLAJA VALLEE NARAYAN S R
Assistant Professor/CSE
GITAM University
Bangalore
UNIT 2 Process Management & CPU Scheduling

Process Management: Process concepts, process scheduling,
operations on processes, inter-process communication

CPU Scheduling: Multithreaded programming, multi-core
programming, multi-threading models, scheduling criteria,
scheduling algorithms, algorithm evaluation.
1.PROCESS CONCEPT
 Includes
1.1.Process
1.2.Process State
1.3.Process Control Block(PCB)
1.4.Thread

1.1.PROCESS
 A process is a program in execution.
 It contains Stack: temporary data (function parameters, return
addresses and local variables)
 Heap: memory dynamically allocated during process run time
 Data: contains global variables
 Text: the program code

Difference between program and process


A program is a passive entity, such as a file containing a list of instructions
stored on disk (an executable file).
A process is an active entity, with a program counter specifying the next
instruction to execute and a set of associated resources.
1.2. Process State
 indicates current activity of the process. As process executes, it changes state.

Fig: Process State


Each process may be in one of the following states
1. New
2. Running
3. Waiting
4. Ready
5. Terminated
New : The process is being created
Running: Instructions are being executed
Waiting :The process is waiting for some event to occur(I/O completion)
Ready: Process is waiting to be assigned to a processor
Terminated: Process has finished execution

1.3. Process Control Block(PCB) or Task Control Block


 A Process Control Block in OS (PCB) is a data structure used by the operating system to manage
information about a process.

Fig: Process Control Block


PCB includes
1.Process State: may be new, ready, running, waiting, etc.
2.Process ID: every process is assigned a unique id known as the process identifier (PID)
3.Program Counter: contains the address of the next instruction to be executed
4.Registers: include accumulators, index registers, stack pointers, general-purpose registers
5.Memory Information: contains the values of the base and limit registers, page table, segment table
6.Accounting Information: contains time limits, account number, process number
7.I/O Status Information: includes the list of I/O devices allocated to the process and the list of open files
8.Process Scheduling Information
Context switching is the process of switching the CPU from one process to another. The kernel suspends the
execution of the process that is currently in the running state and dispatches another process that is in the
ready state onto the CPU.
Fig: Context Switching (process 1 executes and its state is saved in its PCB; process 2's state is
reloaded and it executes while process 1 is idle; the roles then reverse)
1.4. Threads
 A thread is a single sequential flow of execution of tasks of a process. It is a
lightweight process.
 A process is divided into multiple threads. For example, in a browser, multiple tabs can
be different threads. MS Word uses multiple threads: one thread to format the text,
another thread to process inputs, etc.

Fig: Three threads of a process


2.PROCESS SCHEDULING

Includes
2.1.Scheduling Queues
2.2.Schedulers
2.3.Context Switch

2.1.Scheduling Queues: are


1.Job Queue
2.Ready Queue
3.Device Queue
 As processes enter the system they are put in the Job Queue.
 Processes that are residing in main memory and are ready and waiting to execute are kept in
Ready Queue. Ready Queue is stored as a linked list. A ready queue header contains
pointers to the first and final PCB in the list. Each PCB includes a pointer field that points to
the next PCB in the ready queue.

Fig: Ready Queue

 Processes waiting for a particular I/O device are put in Device Queue.
Process scheduling is represented in Queuing diagram
1.Rectangle box represents ready and device queue
2.Circle represent resources that serves the queue
3.Arrow indicate the flow of processes in the system

Fig: Queuing Diagram


2.2.Schedulers
 Scheduler selects processes from the queue in some fashion.
 3 types of Schedulers 1.Long term scheduler or job scheduler
2.Short term scheduler or CPU scheduler
3.Medium term scheduler
 Long term scheduler selects processes from disks(job pool) and loads them into
memory (ready queue)for execution. It is invoked in seconds, minutes. The
processes are either I/O bound or CPU bound. I/O-bound process – spends more
time doing I/O than computation. CPU-bound process – spends more time doing
computations
 Short term scheduler selects which process should be executed next and allocates
CPU. It is invoked in milliseconds
 Medium term scheduler is used for swapping. Swapping is a memory management
scheme in which any process can be temporarily swapped from main memory to
secondary memory so that the main memory can be made available for other
processes. It is used to improve main memory utilization.
2.3.Context Switch
Context Switching involves storing the context or state of a process, when interrupt
happens, so that it can be reloaded when required and execution can be resumed from the
same point as earlier. This is a feature of a multitasking operating system and allows a
single CPU to be shared by multiple processes.
3.OPERATIONS ON PROCESSES
includes
3.1.Process creation
3.2.Process Termination
3.1. Process Creation
 A process may be created by another process using fork().
 The exec() system call replaces the process' memory space with a new program.
Parent and child process:
 The initial process is referred to as the parent process.
 A process originating from a parent is known as a child process.
 A process is identifiable through its process ID (PID).
• fork(): The fork system call creates a new process (the child process) by duplicating the existing
process (the parent process). fork returns the process ID of the child to the parent and 0 to
the child.

Parent process create children processes, which, in turn create other processes, forming a tree
of processes. E.g. is shown below
The Secure Shell Daemon
(SSH daemon, or sshd)
provides encrypted communications
between two untrusted hosts over an
insecure network.
Bash is a command processor that
runs in a text window where the user
types commands that cause actions.
Bash can also read and execute
commands from a file, called a shell
script.
Vim is a text editor for Unix that
comes with Linux, BSD, and macOS.
Tcsh ("tee-see-shell") is an enhanced
version of csh. It includes some
additional utilities such as command-
line editing and filename/command
completion.
The ps (process status) command
allows you to view information about
the processes running on a Linux system.
3.2.Process Termination
 Parent may terminate the execution of child process using the abort() system call.
Some reasons for doing so:
1. Child has exceeded allocated resources
2. Task assigned to child is no longer required
3. The parent is exiting and the operating systems does not allow a child to continue if
its parent terminates
 The parent process may wait for termination of a child process by using the wait()
system call. The call returns status information and the pid of the terminated process
pid = wait(&status);
 Zombie: a child process that has terminated (via the exit system call) but whose exit
status has not yet been collected by its parent.
 Orphan: a child process whose parent terminated without invoking wait(); orphans are
adopted and handled by the operating system.
 cascading termination: All children, grandchildren, etc. are terminated.
4.Interprocess Communication(IPC)
 is used for exchanging information between numerous threads in one or more processes (or
programs)
 2 models of IPC
4.1.Shared Memory Systems
4.2.Message passing Systems
4.1.Shared Memory Systems
 has shared memory that can be simultaneously accessed (read and write) by multiple processes.
 All POSIX systems and Windows operating systems use shared memory.
2 types of Process
1.Independent process
2.Cooperating process
 Independent process: does
not share data with any other
process
 Cooperating process: share
data with other processes. So
it requires IPC mechanism
Producer- consumer problem uses the concept of shared memory.
Examples of producer consumer problem are
 Compiler may produce assembly code, which is consumed by an assembler. Assembler in turn,
may produce object modules which are consumed by the loader.
 In Client- Server paradigm, Server can be producer and client can be consumer. E.g. web server
produces HTML files and images, which are consumed by the client web browser requesting the
resource.
 A buffer is kept in the shared memory, where the producer can produce one item while the
consumer is consuming another item. (Communication in the client-server paradigm is instead
through Sockets, Remote Procedure Calls, Pipes, or Remote Method Invocation in Java.)
2 types of buffers are used
1.Unbounded buffer
2.Bounded buffer
 Unbounded-buffer places no limit on the size of the buffer. Consumer may wait, producer never
waits.
 Bounded-buffer assumes that there is a fixed buffer size. Consumer waits for new item,
producer waits if buffer is full.
The shared buffer is implemented as a circular array with two logical pointers: in and out.
 in points to the next free position in the buffer.
 out points to the first full position in the buffer.
 The buffer is empty when in == out.
 The buffer is full when ((in + 1) % BUFFER_SIZE) == out.
The Bounded Buffer
#define BUFFER_SIZE 10
typedef struct
{
……
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

In the code for producer and consumer


 The producer process has a local variable next_produced, in which the newly produced item is
stored.
 The consumer process has a local variable next_consumed, in which the item to be consumed is stored.
The Producer Process
item next_produced;
while (true)
{
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
The Consumer Process
item next_consumed;
while (true)
{
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;

/* consume the item in next consumed */


}
4.2.Message passing Systems
Message passing model allows multiple processes to read and write data to the message
queue without being connected to each other. Messages are stored on the queue until their
recipient retrieves them.

In the above diagram, both the processes P1 and P2 can access the message queue and
store and retrieve data.
Message Passing provides a mechanism to allow processes to communicate and to synchronize
their actions without sharing the same address space.
For example − chat programs on World Wide Web.
Message passing provides two operations
1.Send message
2.Receive message
Messages sent by a process can be either fixed or variable size.
Example: If process P and Q want to communicate they need to send a message to and receive
a message from each other through a communication link.
Communication link can be
1.Physical Implementation : shared memory, hardware bus, network
2. Logical Implementation :
includes
2.1.Naming(Direct or indirect communication)
2.2.Synchronization(Synchronous or asynchronous communication)
2.3.Buffering(Automatic or explicit buffering)
2.1.Direct or indirect communication(Naming)
In Direct communication, there are 2 addressing schemes:
i. Symmetric addressing: each process that wants to communicate must explicitly
name the sender and receiver (recipient)
 send() and receive() primitives are defined as
send(P, message) :Send a message to process P
receive(Q, message) :Receive a message from process Q
ii. Asymmetric addressing: only the sender names the receiver; the receiver is not required to
name the sender
 send() and receive() primitives are defined as
send( P, message) : Send a message to process P
receive(id, message) : receive a message from any process. Variable id is set to the name of
the process with which communication has taken place.
Communication link has following properties
 Links are established automatically
 Between each pair there exists exactly one link
 The link may be unidirectional, but is usually bi-directional
In Indirect Communication
 the messages are sent to or received from mailboxes or ports. A mailbox is viewed as an
object into which messages can be placed by processes and from which messages can be
removed. Each mailbox has a unique identification. For example, POSIX message queues use
an integer value to identify a mailbox.
 Send() and receive() primitives are defined as follows
send( A, message) : send a message to mailbox A
receive(A, message) : receive a message from mailbox A
Communication Link has following properties
 Link established only if processes share a common mailbox
 A link may be associated with many processes
 Each pair of processes may share several communication links
 Link may be unidirectional or bi-directional
2.2.Synchronous or asynchronous communication(Synchronization)
Message passing can be
1.Blocking (synchronous)
2.Non-blocking(asynchronous)
 Blocking send: Sending process is blocked until the message is received by
the receiving process or mail box.
 Blocking receive: the receiver is blocked until a message is available
 Non- blocking send: the sender sends the message and continue
 Non- blocking receive: Receiver receives either a valid message or a null.
2.3.Buffering
 Messages exchanged by communicating processes reside in a temporary
queue.
 Queues are implemented in 3 ways.
1. Zero capacity (system with no buffering)
2. Bounded capacity (system with automatic buffering)
3. Unbounded capacity (system with automatic buffering)
 In Zero capacity: no messages are queued on a link. Sender must wait for
receiver (rendezvous)
 In Bounded capacity: finite length of n messages can be in the queue.
Sender must wait if link full.
 In Unbounded capacity: infinite length of messages can be in the queue.
Sender never waits
Examples of IPC Systems - POSIX
POSIX Shared Memory
 Process first creates a shared memory segment:
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
 shm_open() is also used to open an existing segment to share it
 Set the size of the object:
ftruncate(shm_fd, 4096);
 Map the object into the address space (mmap returns a pointer to the segment):
ptr = mmap(0, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
 Now the process can write to the shared memory:
sprintf(ptr, "Writing to shared memory");
POSIX (Portable Operating System Interface) is a set of standard
operating system interfaces based on the Unix operating system.
Message passing in Windows via Local Procedure Calls
5.Multithreaded programming
 Multithreading allows an application to divide its task into individual
threads.
 With multiple threads, the same process or task can be carried out by a number of
threads concurrently.
 Kernels are generally multithreaded.

 Example of a multithreaded program is a word processor. While you are


typing, multiple threads are used to display your document, asynchronously
check the spelling and grammar of your document, generate a PDF version
of the document.
 Internet browser, where multiple threads load content, display animations,
play a video, and so much more at the same time on multiple tabs.
Multithreaded Server Architecture
Types of thread
1.User level thread
2.Kernel level thread
• User threads - management done by user-level threads library
• Three primary thread libraries:
• POSIX Pthreads
• Windows threads
• Java threads
• Kernel threads - Supported by the Kernel
• Examples – virtually all general purpose operating systems, including:
• Windows
• Solaris
• Linux
• Tru64 UNIX
• Mac OS X
6.Multicore Programming
 A multicore contains multiple cores in a single CPU.
 A multiprocessor is made up of several CPUs, i.e., it has multiple processors on the
motherboard or chip.

Fig: Multicore
Concurrency vs. Parallelism
 Concurrent execution on a single-core system: the threads are interleaved
(serialized) over time.

 Parallelism on a multi-core system: the threads run simultaneously on different cores.

Single and Multithreaded Processes
Amdahl’s Law
• Identifies performance gains from adding additional cores to an application
that has both serial and parallel components
• S is the serial portion
• N is the number of processing cores

• speedup ≤ 1 / (S + (1 − S) / N)

• That is, if an application is 75% parallel / 25% serial, moving from 1 to 2 cores
results in a speedup of 1.6 times
• As N approaches infinity, speedup approaches 1 / S
Multiprocessors vs. Multicore
Definition: A multiprocessor is a system with multiple CPUs that allows processing programs
simultaneously. A multicore processor is a single processor that contains multiple independent
processing units, known as cores, that may read and execute program instructions.
Execution: Multiprocessors run multiple programs faster than a multicore system. A multicore
system executes a single program faster.
Reliability: A multiprocessor is more reliable: if one of the processors fails, the other processors
are not affected. A multicore system is less reliable than a multiprocessor.
Traffic: A multiprocessor has higher traffic than a multicore system. A multicore system has less
traffic than a multiprocessor.
Cost: Multiprocessors are more expensive compared to a multicore system. Multicore systems are
cheaper than multiprocessor systems.
Configuration: A multiprocessor requires complex configuration. A multicore system doesn't need
to be configured.

7.Multithreading Models
4 types
7.1.One-to-one Model
7.2.Many-to-one Model
7.3.Many-to-many Model
7.4.Two-level Model
7.1.One-to-one Model
 Each user-level thread maps to a separate kernel thread.
 Creating a user-level thread creates a kernel thread.
Pros:
 More concurrency than many-to-one.
 Allows multiple threads to run in parallel on multiprocessors.
Cons:
 The number of threads per process is sometimes restricted due to overhead.
Examples
 Windows
 Linux
 Solaris 9 and later
Fig: One-to-one model

7.2.Many-to-one Model
 Many user-level threads mapped to single kernel thread.
Disadv:
 One thread blocking causes all to block.
Adv:
 allow the application to create any number of threads that can execute concurrently.
 In a many-to-one (user-level threads) implementation, all thread activity is restricted to
user space.
Examples:
 Solaris Green Threads
 GNU Portable Threads

Fig: Many-to-one model


7.3.Many-to-many Model
 Allows many user level threads to be mapped to many kernel threads.
 Allows the operating system to create a sufficient number of kernel threads
Examples
 Solaris prior to version 9
 Windows with the Thread Fiber package

Fig: Many-to-many model


7.4.Two-level Model
 Similar to M:M, except that it also allows a user thread to be bound to a kernel thread.
Examples
 IRIX
 HP-UX
 Tru64 UNIX
 Solaris 8 and earlier
8.CPU Scheduling Criteria
 CPU scheduling is the basis of multiprogrammed operating systems.
 The objective of multiprogramming is to have some process running at all times, in order to maximize CPU
utilization.
The aim of the scheduling algorithm is to maximize and minimize the following:
Maximize:
 CPU utilization : is defined as the percentage of the time a CPU spends running tasks.
 Throughput : number of processes that are completed per unit time.
Minimize:
 Waiting time: WT is sum of periods spent waiting in the ready queue, before it reaches the
CPU. The difference between turnaround time and burst time is called the waiting time of a
process.
WT=TAT-BT
 Turnaround time :TAT is interval from time of submission of a process to the time of
completion.
 Response time: is the time it takes to start responding, not the time to output the
response. It is the duration between the arrival of a process and the first time it runs.
 Burst Time (execution time): BT is the amount of CPU time the process requires to
complete its execution.
CPU-I/O Burst Cycle:
 Process execution begins with a CPU burst.
 That is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on.
 Eventually, the last CPU burst will end with a system request to terminate execution, rather
than with another I/O burst.
9.Scheduling algorithms

are
1.Pre-emptive
2.Non-preemptive

 Preemptive scheduling allows a running


process to be removed from CPU before
completion, when a high priority process
comes.
 In non-preemptive scheduling, any new
process has to wait until the running process
finishes its CPU cycle.
 Examples of Pre-emptive:
Round Robin, SRTF
 Examples of Non-Preemptive:
FCFS, Shortest Job First
TYPES
1.FCFS (First Come First Serve) SCHEDULING
NON-PREEMPTIVE

2. SJF(Shortest Job First) SCHEDULING or SJN(Shortest Job Next)


NON-PREEMPTIVE

3.SRTF(Shortest Remaining Time First) SCHEDULING or SRTN(Shortest Remaining Time


Next)
PRE-EMPTIVE

4.ROUND ROBIN SCHEDULING
PRE-EMPTIVE

5.PRIORITY SCHEDULING
NON-PREEMPTIVE
PRE-EMPTIVE
Arrival time (AT) : Arrival time is the time at which the process arrives in ready queue.
Burst time (BT) or CPU time of the process: Burst time is the amount of CPU time a particular process
requires to complete its execution.
Completion time (CT) :Completion time is the time at which the process has been terminated.
Turn-around time (TAT) : The total time from arrival time to completion time is known as turn-around time.
TAT can be written as,
Turn-around time (TAT) = Completion time (CT) – Arrival time (AT) or, TAT = Burst time (BT) + Waiting time
(WT)
Waiting time (WT): Waiting time is the total time a process spends in the ready queue waiting for the CPU
while other processes execute. WT is written as,
Waiting time (WT) = Turn-around time (TAT) – Burst time (BT)
Response time (RT): Response time is the time at which the CPU is allocated to a particular process for the
first time.
In case of non-preemptive scheduling, waiting time and response time are generally the same.
Gantt chart:
A Gantt chart is a horizontal bar chart used to represent CPU scheduling graphically. It
helps to plan, coordinate, and track metrics such as throughput, waiting time, and
turnaround time.
First-Come First-Serve Scheduling:
 The process that arrives first is given the CPU first, i.e., the process with the smallest arrival time is served first.
 is a non-preemptive scheduling algorithm.
 The FIFO (First In First Out) principle is used.
