Operating Systems Unit Two

Process Concept: Process scheduling, Operations on processes, Inter-process communication, Communication in client server systems.

Multithreaded Programming: Multithreading models, Thread libraries, Threading issues.

Process Scheduling: Basic concepts, Scheduling criteria, Scheduling algorithms, Multiple processor scheduling, Thread scheduling.

Inter-process Communication: Race conditions, Critical Regions, Mutual exclusion with busy waiting, Sleep and wakeup, Semaphores, Mutexes, Monitors, Message passing, Barriers, Classical IPC Problems – Dining philosophers problem, Readers and writers problem.

 Process scheduling
Definition

Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis of a
particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory
at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the resource cannot be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates or moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of
time. During resource allocation, the process switches from the running state to the ready
state or from the waiting state to the ready state. This switching occurs because the CPU
may give priority to other processes and replace the currently running process with a
higher-priority one.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process is
changed, its PCB is unlinked from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.).
The OS scheduler determines how to move processes between the ready and run queues;
the run queue can have only one entry per processor core on the system.

Two-State Process Model

Two-state process model refers to running and non-running states which are described
below −

S.N. State & Description

1. Running
When a new process is created, it enters the system in the running state.

2. Not Running
Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process, and the queue
is typically implemented as a linked list. The dispatcher works as follows: when a
process is interrupted, it is transferred to the waiting queue; if the process has
completed or aborted, it is discarded. In either case, the dispatcher then selects a
process from the queue to execute.

Schedulers: Schedulers are special system software that handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the system and to
decide which process to run. Schedulers are of three types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads them
into memory for execution. Process loads into the memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the
degree of multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems have no long-term scheduler. The long-term scheduler is used when a
process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It carries out the change of a process from the
ready state to the running state: the CPU scheduler selects a process from among the
processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, decide which process to
execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from memory and
thereby reduces the degree of multiprogramming. The medium-term scheduler is in
charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to remove the
process from memory and make space for other processes, the suspended process is
moved to secondary storage. This procedure is called swapping, and the process is said to
be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Schedulers

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Speed is lesser than the short-term scheduler. | Speed is the fastest among the three. | Speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce the process into memory, and execution can be continued.

Context Switching

Context switching is the mechanism of storing and restoring the state or context of a CPU in
the Process Control Block so that a process execution can be resumed from the same point at
a later time. Using this technique, a context switcher enables multiple processes to share
a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the
state of the currently running process is stored into its process control block. After this,
the state of the process to run next is loaded from its own PCB and used to set the PC,
registers, and so on. At that point, the second process can start executing.
Context switches are computationally intensive since register and memory state must be
saved and restored. To reduce context-switching time, some hardware
systems employ two or more sets of processor registers. When the process is switched,
the following information is stored for later use.

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information

 Operations on processes
There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination. These
are given in detail as follows −

Process Creation

Processes need to be created in the system for different operations. This can be done by
the following events −

 User request for process creation


 System initialization
 Execution of a process creation system call by a running process
 Batch job initialization
A process may be created by another process using fork(). The creating process is called
the parent process and the created process is the child process. A child process can have
only one parent but a parent process may have many children. Both the parent and child
processes have the same memory image, open files, and environment strings. However,
they have distinct address spaces.
A diagram that demonstrates process creation using fork() is as follows −
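As a brief illustrative sketch (standard POSIX calls; the printed messages are only for demonstration), the following program creates a child with fork() and has the parent wait for it:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* create a child process */

    if (pid < 0) {               /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {       /* child: same memory image, distinct address space */
        printf("child  pid=%d, parent=%d\n", getpid(), getppid());
        exit(EXIT_SUCCESS);
    } else {                     /* parent: wait for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
        printf("parent pid=%d reaped child %d\n", getpid(), pid);
    }
    return 0;
}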
Process Preemption

Preemption uses an interrupt mechanism that suspends the currently executing process;
the short-term scheduler then determines the next process to execute. Preemption makes
sure that all processes get some CPU time for execution.
A diagram that demonstrates process preemption is as follows

Process Blocking

A process is blocked if it is waiting for some event to occur, such as the completion of an
I/O operation, which is handled by the I/O devices and does not require the processor.
After the event completes, the process goes back to the ready state.
A diagram that demonstrates process blocking is as follows −

Process Termination

After the process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer relevant.
The child process sends its status information to the parent process before it terminates.
Also, when a parent process is terminated, its child processes are terminated as well,
since the child processes cannot run if the parent process has terminated.
 Inter-process communication
Interprocess communication is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or the transferring of
data from one process to another.
A diagram that illustrates interprocess communication is as follows −

Synchronization in Interprocess Communication

Synchronization is a necessary part of interprocess communication. It is either provided by
the interprocess control mechanism or handled by the communicating processes. Some of
the methods to provide synchronization are as follows −

 Semaphore
A semaphore is a variable that controls the access to a common resource by multiple
processes. The two types of semaphores are binary semaphores and counting
semaphores.
 Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section
at a time. This is useful for synchronization and also prevents race conditions.
 Barrier
A barrier does not allow individual processes to proceed until all the processes reach
it. Many parallel languages and collective routines impose barriers.
 Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while
checking if the lock is available or not. This is known as busy waiting because the
process is not doing any useful operation even though it is active.

Approaches to Interprocess Communication

The different approaches to implement interprocess communication are given as follows −

 Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-
way data channel between two processes. This uses standard input and output
methods. Pipes are used in all POSIX systems as well as Windows operating systems.
 Socket
The socket is the endpoint for sending or receiving data in a network. This is true for
data sent between processes on the same computer or data sent between different
computers on the same network. Most of the operating systems use sockets for
interprocess communication.
 File
A file is a data record that may be stored on a disk or acquired on demand by a file
server. Multiple processes can access a file as required. All operating systems use
files for data storage.
 Signal
Signals are useful in interprocess communication in a limited way. They are system
messages that are sent from one process to another. Normally, signals are not used
to transfer data but are used for remote commands between processes.
 Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple
processes. This is done so that the processes can communicate with each other. All
POSIX systems, as well as Windows operating systems use shared memory.
 Message Queue
Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored in the queue until their recipient
retrieves them. Message queues are quite useful for interprocess communication
and are used by most operating systems .
A diagram that demonstrates message queue and shared memory methods of
interprocess communication is as follows −

 Communication in client server systems
Client/Server communication involves two components, namely a client and a server.
There are usually multiple clients in communication with a single server. The clients send
requests to the server and the server responds to the client requests.
There are three main methods to client/server communication. These are given as follows

Sockets

Sockets facilitate communication between two processes on the same machine or on
different machines. They are used in a client/server framework and consist of the IP
address and port number. Many application protocols use sockets for data connection
and data transfer between a client and a server.
Socket communication is quite low-level as sockets only transfer an unstructured byte
stream across processes. The structure on the byte stream is imposed by the client and
server applications.
A diagram that illustrates sockets is as follows −
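As a minimal sketch of the server side (the port number 5000 is an arbitrary assumption and error checking is omitted for brevity), a server can create a socket, bind it to an address and port, and accept a client connection:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);   /* TCP socket */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);    /* any local IP address */
    addr.sin_port = htons(5000);                 /* arbitrary example port */

    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 5);                              /* allow up to 5 pending connections */

    int client = accept(srv, NULL, NULL);        /* blocks until a client connects */
    char buf[128];
    ssize_t n = read(client, buf, sizeof(buf));  /* unstructured byte stream */
    if (n > 0)
        write(client, buf, n);                   /* echo the bytes back */

    close(client);
    close(srv);
    return 0;
}

A client would create its own socket and connect() to this address; as noted above, whatever structure the bytes carry is imposed by the client and server applications themselves.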

Remote Procedure Calls

These are interprocess communication techniques that are used for client-server based
applications. A remote procedure call is also known as a subroutine call or a function call.
A client has a request that the RPC translates and sends to the server. This request may
be a procedure or a function call to a remote server. When the server receives the
request, it sends the required response back to the client.
A diagram that illustrates remote procedure calls is given as follows −
Pipes

These are interprocess communication methods that contain two end points. Data is
entered from one end of the pipe by a process and consumed from the other end by the
other process.
The two different types of pipes are ordinary pipes and named pipes. Ordinary pipes only
allow one way communication. For two way communication, two pipes are required.
Ordinary pipes have a parent child relationship between the processes as the pipes can
only be accessed by processes that created or inherited them.
Named pipes are more powerful than ordinary pipes and allow two way communication.
These pipes exist even after the processes using them have terminated. They need to be
explicitly deleted when not required anymore.
A diagram that demonstrates pipes is given as follows
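A minimal sketch of an ordinary pipe between a parent and its child (standard POSIX pipe() and fork(); the message text is arbitrary):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    pipe(fd);                        /* one-way data channel */

    if (fork() == 0) {               /* child: consumes from the read end */
        close(fd[1]);
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        close(fd[0]);
    } else {                         /* parent: produces on the write end */
        close(fd[0]);
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                  /* reap the child */
    }
    return 0;
}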

 Multithreading models
Multithreading allows the execution of multiple parts of a program at the same time.
These parts are known as threads and are lightweight processes available within the
process. Therefore, multithreading leads to maximum utilization of the CPU by
multitasking.
The main models for multithreading are one to one model, many to one model and many
to many model. Details about these are given as follows −

One to One Model

The one to one model maps each of the user threads to a kernel thread. This means that
many threads can run in parallel on multiprocessors and other threads can run when one
thread makes a blocking system call.
A disadvantage of the one to one model is that the creation of a user thread requires a
corresponding kernel thread. Since a large number of kernel threads can burden the system,
there is a restriction on the number of threads in the system.
A diagram that demonstrates the one to one model is given as follows −
Many to One Model

The many to one model maps many of the user threads to a single kernel thread. This
model is quite efficient because thread management is done in user space.
A disadvantage of the many to one model is that a thread blocking system call blocks the
entire process. Also, multiple threads cannot run in parallel as only one thread can access
the kernel at a time.
A diagram that demonstrates the many to one model is given as follows −

Many to Many Model

The many to many model maps many user threads to an equal or smaller number of
kernel threads. The number of kernel threads depends on the application or machine.
The many to many model does not have the disadvantages of the one to one model or the many
to one model. There can be as many user threads as required and their corresponding
kernel threads can run in parallel on a multiprocessor.
A diagram that demonstrates the many to many model is given as follows −
 Thread libraries
A thread is a lightweight process and a basic unit of CPU utilization; it consists of
a program counter, a stack, and a set of registers.
Given below is the structure of threads in a process −

A process with a single thread of control has one program counter and one
sequence of instructions carried out at any given time. By dividing an application or a
program into multiple sequential threads that run in quasi-parallel, the programming
model becomes simpler.
Threads share an address space and all of its data among themselves.
This ability is essential for some specific applications.
Threads are lighter weight than processes and are faster to create and destroy than
processes.

Thread Library

A thread library provides the programmer with an Application program interface for
creating and managing thread.
Ways of implementing thread library
There are two primary ways of implementing thread library, which are as follows −
 The first approach is to provide a library entirely in user space with no kernel support.
All code and data structures for the library exist in user space, so invoking a function in
the library results in a local function call in user space rather than a system call.
 The second approach is to implement a kernel-level library supported directly by
the operating system. In this case the code and data structures for the library exist
in kernel space, and invoking a function in the API for the library typically results in
a system call to the kernel.
The main thread libraries which are used are given below −
 POSIX threads − Pthreads, the threads extension of the POSIX standard, may be
provided as either a user-level or a kernel-level library.
 Win32 threads − The Windows thread library is a kernel-level library available on
Windows systems.
 Java threads − The Java thread API allows threads to be created and managed
directly in Java programs.
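As a small illustrative sketch of the Pthreads API (compile with -pthread; the worker function and its argument are arbitrary examples):

#include <pthread.h>
#include <stdio.h>

/* function executed by the new thread */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int id = 1;

    pthread_create(&tid, NULL, worker, &id);  /* create and start the thread */
    pthread_join(tid, NULL);                  /* wait for it to finish */
    return 0;
}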

 Threading issues
We can discuss some of the issues to consider in designing multithreaded programs.
These issues are as follows −
The fork() and exec() system calls
The fork() is used to create a duplicate process. The meaning of the fork() and exec()
system calls change in a multithreaded program.
If one thread in a program calls fork(), does the new process duplicate all threads,
or is the new process single-threaded? Some UNIX systems have chosen to
have two versions of fork(): one that duplicates all threads and another that duplicates
only the thread that invoked the fork() system call.
If a thread calls the exec() system call, the program specified in the parameter to exec()
will replace the entire process which includes all threads.
Signal Handling
Generally, a signal is used in UNIX systems to notify a process that a particular event has
occurred. A signal may be received either synchronously or asynchronously, depending on
the source of and the reason for the event being signalled.
All signals, whether synchronous or asynchronous, follow the same pattern as given
below −
 A signal is generated by the occurrence of a particular event.
 The signal is delivered to a process.
 Once delivered, the signal must be handled.
Cancellation
Thread cancellation is the task of terminating a thread before it has completed.
For example − if multiple database threads are concurrently searching through a database
and one thread returns the result, the remaining threads might be cancelled.
A target thread is a thread that is to be cancelled. Cancellation of a target thread may occur
in two different scenarios −
 Asynchronous cancellation − One thread immediately terminates the target thread.
 Deferred cancellation − The target thread periodically checks whether it should
terminate, allowing it an opportunity to terminate itself in an ordinary fashion.
Thread pools
Consider multithreading in a web server: whenever the server receives a request, it creates a
separate thread to service the request.
Some of the problems that arise in creating a thread are as follows −
 The amount of time required to create the thread prior to serving the request
together with the fact that this thread will be discarded once it has completed its
work.
 If all concurrent requests are allowed to be serviced in a new thread, there is no
bound on the number of threads concurrently active in the system.
 Unlimited threads could exhaust system resources like CPU time or memory.
The idea of a thread pool is to create a number of threads at process start-up and place them
into a pool, where they sit and wait for work.

 Scheduling criteria
Different CPU Scheduling Algorithms have different properties and the choice of a
particular algorithm depends on the various factors. Many criteria have been suggested
for comparing CPU scheduling algorithms.
The criteria include the following:
1. CPU utilisation: The main objective of any CPU scheduling algorithm is to keep the
CPU as busy as possible. Theoretically, CPU utilisation can range from 0 to 100 percent,
but in a real system it varies from 40 to 90 percent depending on the load upon the
system.
2. Throughput: A measure of the work done by the CPU is the number of processes
being executed and completed per unit of time. This is called throughput. The
throughput may vary depending upon the length or duration of the processes.

3. Turnaround time: For a particular process, an important criterion is how long it takes
to execute that process. The time elapsed from the time of submission of a process
to the time of completion is known as the turnaround time. Turnaround time is the
sum of the times spent waiting to get into memory, waiting in the ready queue,
executing on the CPU, and waiting for I/O. The formula to calculate Turnaround Time =
Completion Time – Arrival Time (a worked sketch follows this list).
4. Waiting time: A scheduling algorithm does not affect the time required to complete
the process once it starts execution. It only affects the waiting time of a process i.e.
time spent by a process waiting in the ready queue. The formula for calculating
Waiting Time = Turnaround Time – Burst Time.
5. Response time: In an interactive system, turnaround time is not the best criterion. A
process may produce some output fairly early and continue computing new results
while previous results are being output to the user. Thus another criterion is the time
taken from the submission of a request until the first response is produced.
This measure is called response time. The formula to calculate Response Time = CPU
Allocation Time (when the CPU was allocated for the first time) – Arrival Time.
6. Completion time: This is the time when the process completes its execution.
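As a tiny worked sketch of these formulas, using hypothetical figures for a single process (arrival 0, burst 5, completion 9, first CPU allocation at time 2):

#include <stdio.h>

int main(void)
{
    /* hypothetical figures for a single process */
    int arrival = 0, burst = 5, completion = 9, first_cpu = 2;

    int turnaround = completion - arrival;   /* 9 - 0 = 9 */
    int waiting    = turnaround - burst;     /* 9 - 5 = 4 */
    int response   = first_cpu - arrival;    /* 2 - 0 = 2 */

    printf("TAT=%d WT=%d RT=%d\n", turnaround, waiting, response);
    return 0;
}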

 Scheduling algorithms
A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms. There are six popular process scheduling algorithms
which we are going to discuss in this chapter −

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms
are designed so that once a process enters the running state, it cannot be preempted until
it completes its allotted time, whereas the preemptive scheduling is based on priority
where a scheduler may preempt a low priority running process anytime when a high
priority process enters into a ready state.

First Come First Serve (FCFS)

 Jobs are executed on a first come, first served basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average wait time is high.

Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 0-0=0

P1 5-1=4

P2 8-2=6

P3 16 - 3 = 13

Average Wait Time: (0+4+6+13) / 4 = 5.75
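The figures above are consistent with the process set used in the SJN example below (arrival times 0, 1, 2, 3 and execution times 5, 3, 8, 6). A minimal sketch that recomputes the FCFS service and waiting times under that assumption:

#include <stdio.h>

/* process set assumed from the SJN table below: arrivals 0-3, bursts 5,3,8,6 */
int main(void)
{
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {5, 3, 8, 6};
    int n = 4, service = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {            /* processes run in arrival order */
        int wait = service - arrival[i];     /* Wait Time = Service Time - Arrival Time */
        printf("P%d wait = %d\n", i, wait);
        total_wait += wait;
        service += burst[i];                 /* next process starts when this one ends */
    }
    printf("Average wait = %.2f\n", (double)total_wait / n);  /* 5.75 */
    return 0;
}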

Shortest Job Next (SJN)

 This is also known as shortest job first, or SJF.
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known in advance.
 Impossible to implement in interactive systems where the required CPU time is not
known.
 The processor should know in advance how much time the process will take.
Given: Table of processes, and their Arrival time, Execution time

Process Arrival Time Execution Time Service Time

P0 0 5 0
P1 1 3 5

P2 2 8 14

P3 3 6 8

Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 5-1=4

P2 14 - 2 = 12

P3 8-3=5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25

Priority Based Scheduling

 Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
 Each process is assigned a priority. The process with the highest priority is executed
first, and so on.
 Processes with the same priority are executed on a first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any
other resource requirement.
Given: Table of processes, and their Arrival time, Execution time, and Priority. Here we are
considering 1 to be the lowest priority.
Process Arrival Time Execution Time Priority Service Time

P0 0 5 1 0

P1 1 3 2 11

P2 2 8 1 14

P3 3 6 3 5

Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 11 - 1 = 10

P2 14 - 2 = 12

P3 5-3=2

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6

Shortest Remaining Time

 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can be preempted
by a newer ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is not
known.
 It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted and another
process executes for its time period.
 Context switching is used to save the states of preempted processes.

Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 (0 - 0) + (12 - 3) = 9

P1 (3 - 1) = 2

P2 (6 - 2) + (14 - 9) + (20 - 17) = 12

P3 (9 - 3) + (17 - 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
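The wait times above are consistent with the same four processes (arrivals 0-3, bursts 5, 3, 8, 6) and a time quantum of 3; the quantum is an assumption inferred from the numbers. A small simulation sketch that reproduces them (it assumes the ready queue never goes empty, which holds for this data):

#include <stdio.h>

#define N 4
#define QUANTUM 3   /* assumed time quantum, consistent with the figures above */

int main(void)
{
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {5, 3, 8, 6};
    int remaining[N], wait[N] = {0}, last_ready[N];
    int queue[64], head = 0, tail = 0;
    int time = 0, done = 0, next_arrival = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    queue[tail++] = 0; last_ready[0] = 0; next_arrival = 1;   /* P0 arrives at time 0 */

    while (done < N) {
        int p = queue[head++];                    /* take the next ready process */
        wait[p] += time - last_ready[p];          /* time spent waiting in the queue */

        int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        time += slice;
        remaining[p] -= slice;

        /* processes that arrived during this slice join the queue first */
        while (next_arrival < N && arrival[next_arrival] <= time) {
            queue[tail++] = next_arrival;
            last_ready[next_arrival] = arrival[next_arrival];
            next_arrival++;
        }

        if (remaining[p] > 0) {                   /* preempted: back to the tail */
            queue[tail++] = p;
            last_ready[p] = time;
        } else {
            done++;
        }
    }

    double total = 0;
    for (int i = 0; i < N; i++) { printf("P%d wait = %d\n", i, wait[i]); total += wait[i]; }
    printf("Average wait = %.2f\n", total / N);   /* 8.50 */
    return 0;
}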

Multiple-Level Queues Scheduling

Multiple-level queues are not an independent scheduling algorithm. They make use of
other existing algorithms to group and schedule jobs with common characteristics.

 Multiple queues are maintained for processes with common characteristics.


 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to the queue.

 Multiple processor scheduling


Multiple processor scheduling, or multiprocessor scheduling, focuses on designing the
scheduling function for a system that consists of more than one processor. Multiple CPUs
share the load (load sharing) in multiprocessor scheduling so that various processes run
simultaneously. In general, multiprocessor scheduling is more complex than single
processor scheduling. When the processors are identical, any process can be run on any
processor at any time.

The multiple CPUs in the system are in close communication, which shares a common bus,
memory, and other peripheral devices. So we can say that the system is tightly coupled.
These systems are used when we want to process a bulk amount of data, and these
systems are mainly used in satellite, weather forecasting, etc.

There are cases when the processors are identical, i.e., homogenous, in terms of their
functionality in multiple-processor scheduling. We can use any processor available to run
any process in the queue.

Multiprocessor systems may be heterogeneous (different kinds of CPUs)
or homogeneous (the same kind of CPU). There may be special scheduling constraints,
such as devices connected via a private bus to only one CPU.

There is no policy or rule which can be declared as the best scheduling solution to a
system with a single processor. Similarly, there is no best scheduling solution for a system
with multiple processors as well.

Approaches to Multiple Processor Scheduling

There are two approaches to multiple processor scheduling in the operating system:
Symmetric Multiprocessing and Asymmetric Multiprocessing.

1. Symmetric Multiprocessing: It is used where each processor is self-scheduling. All
processes may be in a common ready queue, or each processor may have its
private queue for ready processes. The scheduling proceeds further by having the
scheduler for each processor examine the ready queue and select a process to
execute.
2. Asymmetric Multiprocessing: It is used when all the scheduling decisions and I/O
processing are handled by a single processor called the Master Server. The other
processors execute only the user code. This is simple and reduces the need for data
sharing, and this entire scenario is called Asymmetric Multiprocessing.

Processor Affinity

Processor Affinity means a process has an affinity for the processor on which it is currently
running. When a process runs on a specific processor, there are certain effects on the
cache memory. The data most recently accessed by the process populate the cache for the
processor. As a result, successive memory access by the process is often satisfied in the
cache memory.

Now, suppose the process migrates to another processor. In that case, the contents of the
cache memory must be invalidated for the first processor, and the cache for the second
processor must be repopulated. Because of the high cost of invalidating and repopulating
caches, most SMP(symmetric multiprocessing) systems try to avoid migrating processes
from one processor to another and keep a process running on the same processor. This is
known as processor affinity. There are two types of processor affinity, such as:

1. Soft Affinity: When an operating system has a policy of keeping a process running
on the same processor but not guaranteeing it will do so, this situation is called
soft affinity.
2. Hard Affinity: Hard Affinity allows a process to specify a subset of processors on
which it may run. Some Linux systems implement soft affinity and provide system
calls like sched_setaffinity() that also support hard affinity.
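As an illustrative sketch of hard affinity on Linux (sched_setaffinity() is Linux-specific; pinning to core 0 is an arbitrary choice):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                 /* restrict this process to CPU core 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d is now pinned to CPU 0\n", getpid());
    return 0;
}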

Load Balancing

Load Balancing is the phenomenon that keeps the workload evenly distributed across all
processors in an SMP system. Load balancing is necessary only on systems where each
processor has its own private queue of a process that is eligible to execute.

Load balancing is unnecessary on systems with a common run queue, because an idle
processor immediately extracts a runnable process from the common queue. On SMP
(symmetric multiprocessing) systems, it is important to keep the workload balanced among
all processors to fully utilize the benefits of having more than one processor. Otherwise,
one or more processors will sit idle while other processors have high workloads, along with
lists of processes awaiting the CPU. There are two general approaches to load balancing:

1. Push Migration: In push migration, a task routinely checks the load on each
processor. If it finds an imbalance, it evenly distributes the load on each processor
by moving the processes from overloaded to idle or less busy processors.
2. Pull Migration:Pull Migration occurs when an idle processor pulls a waiting task
from a busy processor for its execution.

Multi-core Processors

In multi-core processors, multiple processor cores are placed on the same physical chip.
Each core has a register set to maintain its architectural state and thus appears to the
operating system as a separate physical processor. SMP systems that use multi-core
processors are faster and consume less power than systems in which each processor has
its own physical chip.

However, multi-core processors may complicate the scheduling problems. When the
processor accesses memory, it spends a significant amount of time waiting for the data to
become available. This situation is called a Memory stall. It occurs for various reasons,
such as cache miss, which is accessing the data that is not in the cache memory.

In such cases, the processor can spend up to 50% of its time waiting for data to become
available from memory. To solve this problem, recent hardware designs have
implemented multithreaded processor cores in which two or more hardware threads are
assigned to each core. Therefore if one thread stalls while waiting for the memory, the
core can switch to another thread. There are two ways to multithread a processor:

1. Coarse-Grained Multithreading: In coarse-grained multithreading, a thread executes
on a processor until a long-latency event such as a memory stall occurs.
Because of the delay caused by the long-latency event, the processor switches
to another thread to begin execution. The cost of switching between threads is
high, as the instruction pipeline must be flushed before the other thread can
begin execution on the processor core. Once this new thread begins execution, it
begins filling the pipeline with its instructions.
2. Fine-Grained Multithreading: This multithreading switches between threads at a
much finer level, mainly at the boundary of an instruction cycle. The architectural
design of fine-grained systems includes logic for thread switching, and as a result,
the cost of switching between threads is small.

Symmetric Multiprocessor

In the symmetric multiprocessor (SMP) model, there is one copy of the OS in memory, but
any central processing unit can run it. When a system call is made, the central processing
unit on which the system call was made traps to the kernel and processes that system call.
This model balances processes and memory dynamically. This approach uses symmetric
multiprocessing, where each processor is self-scheduling.

The scheduling proceeds by having the scheduler for each processor examine the
ready queue and select a process to execute. In this system, it is possible that all the
processes are in a common ready queue or that each processor has its own private queue
of ready processes. There are mainly three sources of contention that can be found in a
multiprocessor operating system.

o Locking system: As we know that the resources are shared in the multiprocessor
system, there is a need to protect these resources for safe access among the
multiple processors. The main purpose of the locking scheme is to serialize access
of the resources by the multiple processors.
o Shared data: When the multiple processors access the same data at the same time,
then there may be a chance of inconsistency of data, so to protect this, we have to
use some protocols or locking schemes.
o Cache coherence: Shared data may be stored in multiple local caches. Suppose two
clients have a cached copy of a memory block and one client changes it. The other
client could be left with an invalid cache without notification of the change, so this
conflict is resolved by maintaining a coherent view of the data.

Master-Slave Multiprocessor

In this multiprocessor model, there is a single data structure that keeps track of the ready
processes. In this model, one central processing unit works as the master and the others
work as slaves. All the processors are handled by a single processor, which is called the
master server.

The master server runs the operating system process, and the slave server runs the user
processes. The memory and input-output devices are shared among all the processors,
and all the processors are connected to a common bus. This system is simple and reduces
data sharing, so this system is called Asymmetric multiprocessing.

Virtualization and Threading

In this type of multiple processor scheduling, even a single CPU system acts as a multiple
processor system. In a system with virtualization, the virtualization presents one or more
virtual CPUs to each of the virtual machines running on the system. It then schedules the
use of physical CPUs among the virtual machines.

o Most virtualized environments have one host operating system and many guest
operating systems, and the host operating system creates and manages the virtual
machines.
o Each virtual machine has a guest operating system installed, and applications run
within that guest.
o Each guest operating system may be assigned for specific use cases, applications,
or users, including time-sharing or real-time operation.
o Any guest operating-system scheduling algorithm that assumes a certain amount
of progress in a given amount of time will be negatively impacted by the
virtualization.
o A time-sharing operating system tries to allot 100 milliseconds to each time slice to
give users a reasonable response time. A given 100 millisecond time slice may take
much more than 100 milliseconds of virtual CPU time. Depending on how busy the
system is, the time slice may take a second or more, which results in a very poor
response time for users logged into that virtual machine.
o The net effect of such scheduling layering is that individual virtualized operating
systems receive only a portion of the available CPU cycles, even though they
believe they are receiving all cycles and scheduling all of those cycles. The time-of-day
clocks in virtual machines are often incorrect because timers take longer to trigger than
they would on dedicated CPUs.
o Virtualizations can thus undo the good scheduling algorithm efforts of the
operating systems within virtual machines.

 Thread scheduling
The scheduling of threads involves two levels of (boundary) scheduling:

1. Scheduling of Kernel-Level Threads (KLTs) by the system scheduler.
2. Scheduling of User-Level Threads (ULTs) onto Kernel-Level Threads (KLTs) by using a
Lightweight Process (LWP).
Lightweight process (LWP):
A lightweight process is a thread-like construct that acts as an interface for the
User-Level Threads to access the physical CPU resources.
The number of lightweight processes depends on the type of application: for an I/O-bound
application the number of LWPs depends on the number of user-level threads, while for a
CPU-bound application each thread is connected to a separate kernel-level thread.

In real time, the first level of thread scheduling goes beyond specifying the
scheduling policy and the priority; it requires two controls to be specified
for the user-level threads (a Pthreads sketch of selecting these appears at the end of this section):
1. Contention scope – Contention scope defines the extent to which contention takes place.
Contention refers to the competition among the ULTs to access the KLTs.
Contention scope can be further classified into Process Contention Scope
(PCS) and System Contention Scope (SCS).

 Process Contention Scope: Process Contention Scope is when the contention takes place in
the same process.
 System contention scope (SCS): System Contention Scope refers to the contention that takes
place among all the threads in the system.
2. Allocation domain – The allocation domain is a set of one or more resources for
which a thread is competing.
Advantages of PCS over SCS:
The advantages of PCS over SCS are as follows:
1. It is cheaper.
2. It helps reduce system calls and achieve better performance.
3. If the SCS thread is a part of more than one allocation domain, the system will have to
handle multiple interfaces.
4. A PCS thread can share one or more available LWPs, while every SCS thread needs a
separate LWP. Therefore, for every SCS thread, a separate KLT will be created.
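A small Pthreads sketch of selecting the contention scope, as mentioned above (PTHREAD_SCOPE_PROCESS corresponds to PCS and PTHREAD_SCOPE_SYSTEM to SCS; some systems, such as Linux, support only system scope, so the call may report an error):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) { (void)arg; return NULL; }

int main(void)
{
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);
    pthread_attr_getscope(&attr, &scope);   /* query the default contention scope */
    printf("default scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "SCS (system)" : "PCS (process)");

    /* request system contention scope: the thread competes with all threads in the system */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "PTHREAD_SCOPE_SYSTEM not supported here\n");

    pthread_t tid;
    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}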

 Race conditions
Race conditions, critical sections and semaphores are a key part of operating systems.
Details about these are given as follows −

Race Condition

A race condition is a situation that may occur inside a critical section. It happens when
the result of executing multiple threads in the critical section differs according to the order
in which the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an
atomic instruction. Also, proper thread synchronization using locks or atomic variables
can prevent race conditions.
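A short sketch that usually exhibits the race (two threads incrementing a shared counter without synchronization; the iteration count is arbitrary; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000

long counter = 0;                       /* shared variable */

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        counter++;                      /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* expected 2000000, but lost updates usually make it smaller */
    printf("counter = %ld\n", counter);
    return 0;
}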

Critical Section

The critical section is a code segment where shared variables can be accessed. Atomic
action is required in a critical section, i.e. only one process can execute in its critical
section at a time. All the other processes have to wait to execute in their critical sections.
The general structure of the critical section is given as follows:
do {
    Entry Section
    Critical Section
    Exit Section
    Remainder Section
} while (TRUE);
In the structure above, the entry section handles the entry into the critical section. It
acquires the resources needed for execution by the process. The exit section handles the
exit from the critical section. It releases the resources and also informs the other
processes that the critical section is free.
The critical section problem needs a solution to synchronise the different processes. The
solution to the critical section problem must satisfy the following conditions −

 Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at
any time. If any other processes require the critical section, they must wait until it
is free.
 Progress
Progress means that if a process is not using the critical section, then it should not
stop any other process from accessing it. In other words, any process can enter a
critical section if it is free.
 Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It
should not wait endlessly to access the critical section.
Semaphore

A semaphore is a signalling mechanism, and a thread that is waiting on a semaphore can
be signalled by another thread. This is different from a mutex, as the mutex can be
signalled only by the thread that called the wait function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
The wait operation decrements the value of its argument S, if it is positive. If S is negative
or zero, then no operation is performed.
wait(S){
while (S<=0);
S--;
}
The signal operation increments the value of its argument S.
signal(S){
S++;
}

 Critical Regions
The critical section is a code segment where the shared variables can be accessed. An
atomic action is required in a critical section i.e. only one process can execute in its critical
section at a time. All the other processes have to wait to execute in their critical sections.
A diagram that demonstrates the critical section is as follows −
In the above diagram, the entry section handles the entry into the critical section. It
acquires the resources needed for execution by the process. The exit section handles the
exit from the critical section. It releases the resources and also informs the other
processes that the critical section is free.

Solution to the Critical Section Problem

The critical section problem needs a solution to synchronize the different processes. The
solution to the critical section problem must satisfy the following conditions −

 Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at
any time. If any other processes require the critical section, they must wait until it
is free.
 Progress
Progress means that if a process is not using the critical section, then it should not
stop any other process from accessing it. In other words, any process can enter a
critical section if it is free.
 Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It
should not wait endlessly to access the critical section.

Sleep and Wakeup

The concept of sleep and wakeup is very simple. If the critical section is occupied, the
process goes to sleep. It is woken up by the process that is currently executing inside the
critical section when that process leaves, so that the sleeping process can then enter the
critical section.
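A minimal sketch of the idea using POSIX condition variables, which is one common way to realize sleep and wakeup (the ready flag and the messages are illustrative assumptions; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
int ready = 0;                               /* condition the sleeper waits for */

static void *sleeper(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                           /* "sleep" until woken up */
        pthread_cond_wait(&cond, &lock);
    printf("sleeper: woken up, entering critical section\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, sleeper, NULL);

    pthread_mutex_lock(&lock);
    ready = 1;                               /* leaving the critical section... */
    pthread_cond_signal(&cond);              /* ...wake the sleeping thread */
    pthread_mutex_unlock(&lock);

    pthread_join(t, NULL);
    return 0;
}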

 Semaphores
Semaphores are integer variables that are used to solve the critical section problem by
using two atomic operations, wait and signal that are used for process synchronization.
The definitions of wait and signal are as follows −

 Wait
The wait operation decrements the value of its argument S when S is positive. If S is
zero or negative, the process keeps waiting until S becomes positive.
wait(S)
{
    while (S <= 0);   /* busy-wait until S becomes positive */
    S--;              /* take one unit of the resource */
}

 Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;              /* release one unit of the resource */
}

 Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and binary
semaphores. Details about these are given as follows −

 Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These
semaphores are used to coordinate the resource access, where the semaphore
count is the number of available resources. If resources are added, the semaphore
count is automatically incremented, and if resources are removed, the count is
decremented.
 Binary Semaphores
The binary semaphores are like counting semaphores, but their value is restricted to
0 and 1. The wait operation only works when the semaphore is 1, and the signal
operation succeeds when the semaphore is 0. It is sometimes easier to implement
binary semaphores than counting semaphores.
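As an illustrative (not authoritative) sketch, the counting-semaphore idea can be tried with
POSIX unnamed semaphores: here three identical resources are guarded by a semaphore whose
count starts at 3, so at most three of the five threads hold a resource at any time. The thread
and resource counts are arbitrary choices for the example, and error handling is omitted.
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <semaphore.h>

#define NUM_RESOURCES 3
#define NUM_THREADS   5

sem_t resources;                       /* count of currently available resources */

void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&resources);              /* wait: take one resource, block if none left */
    printf("thread %ld acquired a resource\n", id);
    sleep(1);                          /* use the resource for a while */
    printf("thread %ld released a resource\n", id);
    sem_post(&resources);              /* signal: return the resource */
    return NULL;
}

int main(void) {
    pthread_t t[NUM_THREADS];
    sem_init(&resources, 0, NUM_RESOURCES);        /* initial count = resources available */
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&resources);
    return 0;
}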

Advantages of Semaphores

Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods
of synchronization.
 When semaphores are implemented without busy waiting (processes block instead of
spinning), there is no resource wastage: processor time is not wasted unnecessarily
checking whether a condition is fulfilled before a process may access the critical section.
 Semaphores are implemented in the machine independent code of the
microkernel. So they are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, so the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
 Semaphores are impractical for large scale use as their use leads to loss of
modularity. This happens because the wait and signal operations prevent the
creation of a structured layout for the system.
 Semaphores may lead to a priority inversion where low priority processes may
access the critical section first and high priority processes later.

 MUTEX
Mutex and Semaphore both provide synchronization services but they are not the same.
Details about both Mutex and Semaphore are given below −
Mutex
Mutex is a mutual exclusion object that synchronizes access to a resource. It is created
with a unique name at the start of a program. The Mutex is a locking mechanism that
makes sure only one thread can acquire the Mutex at a time and enter the critical section.
This thread only releases the Mutex when it exits the critical section.
This is shown with the help of the following example −
wait (mutex);
…..
Critical Section
…..
signal (mutex);
A Mutex is different from a semaphore as it is a locking mechanism while a semaphore is
a signalling mechanism. A binary semaphore can be used as a Mutex, but a Mutex can
never be used as a semaphore.
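A minimal sketch of the same locking pattern using the POSIX threads mutex API (the shared
counter and the loop count are made up for the example; error handling omitted):
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;                     /* the shared resource */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);           /* wait(mutex): only the lock holder proceeds */
        counter++;                           /* critical section */
        pthread_mutex_unlock(&lock);         /* signal(mutex): release the lock */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);      /* 200000, since updates never interleave */
    return 0;
}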

 MONITORS
Monitors and semaphores are used for process synchronization and allow processes to
access the shared resources using mutual exclusion. However, monitors and semaphores
contain many differences. Details about both of these are given as follows −
Monitors
Monitors are synchronization constructs created to overcome the problems caused by
semaphores, such as timing errors.
Monitors are abstract data types and contain shared data variables and procedures. The
shared data variables cannot be directly accessed by a process and procedures are
required to allow a single process to access the shared data variables at a time.
Only one process can be active in a monitor at a time. Other processes that need to
access the shared variables in a monitor have to line up in a queue and are only given
access when the previous process releases the shared variables.
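C has no built-in monitor construct, but the idea can be approximated: every "monitor
procedure" acquires one common lock on entry and releases it on exit, so only one thread is
ever active inside the monitor at a time. The sketch below (monitor_lock, shared_balance,
deposit and read_balance are names invented for the example) shows this convention with a
pthread mutex.
#include <pthread.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_balance = 0;               /* shared data variable of the "monitor" */

void deposit(int amount) {                   /* monitor procedure */
    pthread_mutex_lock(&monitor_lock);       /* enter the monitor */
    shared_balance += amount;
    pthread_mutex_unlock(&monitor_lock);     /* leave the monitor */
}

int read_balance(void) {                     /* monitor procedure */
    pthread_mutex_lock(&monitor_lock);
    int value = shared_balance;
    pthread_mutex_unlock(&monitor_lock);
    return value;
}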
 MESSAGE PASSING
Message Passing provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space.
For example − chat programs on the World Wide Web.
Now let us discuss the message passing step by step.
Step 1 − Message passing provides two operations which are as follows −
 Send message
 Receive message
Messages sent by a process can be of either fixed or variable size.
Step 2 − For fixed-size messages the system-level implementation is straightforward, but it
makes the task of programming more difficult.
Step 3 − Variable-sized messages require a more complex system-level implementation, but
the programming task becomes simpler.
Step 4 − If processes P1 and P2 want to communicate, they need to send messages to and
receive messages from each other; this means a communication link exists between
them.
Step 5 − There are several methods for logically implementing a link and the send() and
receive() operations, for example direct or indirect communication, blocking or non-blocking
(synchronous or asynchronous) communication, and automatic or explicit buffering.

Characteristics

The characteristics of the message passing model are as follows −


 Mainly the message passing is used for communication.
 It is used in distributed environments where the communicating processes are
present on remote machines which are connected with the help of a network.
 Here no shared address space is required because the message passing facility provides
a mechanism for communication and synchronization of the actions performed by the
communicating processes.
 Message passing is a time consuming process because it is implemented through
kernel (system calls).
 It is useful for sharing small amounts of data so that conflicts need not occur.
 In message passing the communication is slower when compared to shared memory
technique.
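As a small, hedged illustration of message passing without shared memory, a POSIX pipe can
serve as the communication link between a parent process (the sender) and a child process
(the receiver). The message text is arbitrary and error handling is omitted.
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[64];

    pipe(fd);                                      /* create the communication link */
    if (fork() == 0) {                             /* child: the receiver */
        close(fd[1]);                              /* child only reads */
        int n = read(fd[0], buf, sizeof(buf) - 1); /* receive message (blocks until sent) */
        if (n < 0) n = 0;
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
    } else {                                       /* parent: the sender */
        close(fd[0]);                              /* parent only writes */
        const char *msg = "hello from the parent";
        write(fd[1], msg, strlen(msg));            /* send message */
        close(fd[1]);
        wait(NULL);                                /* wait for the child to finish */
    }
    return 0;
}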

 The dining philosophers


The Dining Philosopher Problem – The Dining Philosopher Problem states that K
philosophers are seated around a circular table with one chopstick between each pair of
philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to
him. Each chopstick may be picked up by either of its two adjacent philosophers, but not
by both at the same time.
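A common semaphore-based sketch of the problem, written with the wait()/signal()
pseudocode used earlier in this unit (N is the number of philosophers and chopstick[i] is a
binary semaphore initialized to 1). If every philosopher grabbed the left chopstick first, the
system could deadlock; in this version the last philosopher picks up the chopsticks in the
opposite order, which breaks the circular wait.
#define N 5
semaphore chopstick[N];                      /* each initialized to 1 (chopstick free) */

void philosopher(int i) {
    while (TRUE) {
        /* think */
        if (i == N - 1) {                    /* last philosopher reverses the order */
            wait(chopstick[0]);
            wait(chopstick[N - 1]);
        } else {
            wait(chopstick[i]);              /* pick up left chopstick */
            wait(chopstick[i + 1]);          /* pick up right chopstick */
        }
        /* eat */
        signal(chopstick[i]);                /* put down left chopstick */
        signal(chopstick[(i + 1) % N]);      /* put down right chopstick */
    }
}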

 Reader-Writer problem
The Reader-Writer problem is a classical Operating System problem used for process
synchronization. It builds on the ideas of process synchronization, semaphores and
critical sections discussed in the sections above.

What's the problem? In an Operating System, we deal with various processes and these
processes may use files that are present in the system. Basically, we perform two
operations on a file i.e. read and write. All these processes can perform these two
operations. But the problem that arises here is that:

 If a process is writing something on a file and another process also starts writing
on the same file at the same time, then the system will go into an inconsistent
state. Only one process should be allowed to change the value of the data
present in the file at a particular instant of time.
 Another problem is that if a process is reading the file and another process is
writing on the same file at the same time, then this may lead to dirty-read
because the process writing on the file will change the value of the file, but the
process reading that file will read the old value present in the file. So, this should
be avoided.
The solution
We can solve the above two problems by using semaphore variables. The following is the
proposed solution:

 If a process is performing some write operation, then no other process should be
allowed to perform the read or the write operation, i.e. no other process should
be allowed to enter the critical section.
 If a process is performing only a read operation, then another process that is
demanding a read operation should be allowed to read the file and get into
the critical section, because the read operation doesn't change anything in the
file. So, more than one reader is allowed at a time. But if a process is reading a file
and another process is demanding the write operation, then it should not be
allowed.
So, we will use the above two concepts and solve the reader-writer problem with the
help of semaphore variables. The following semaphore variables will be used in our
solution:

 Semaphore "writer": This semaphore is used to achieve the mutual exclusion


property. It is used by the process that is writing in the file and it ensures that no
other process should enter the critical section at that instant of time. Initially, it
will be set to "1".
 Semaphore "mutex": This semaphore is used to achieve mutual exclusion during
changing the variable that is storing the count of the processes that are reading a
particular file. Initially, it will be set to "1".
Apart from these two semaphore variables, we have one variable readerCount that will
hold the count of the processes that are reading a particular file.
The readerCount variable is initialized to 0.

We will also use two functions, wait() and signal(). The wait() function is used to reduce
the value of a semaphore variable by one and the signal() function is used to increase
the value of a semaphore variable by one.
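Putting these pieces together, a minimal sketch of the reader and writer code in the
wait()/signal() pseudocode used earlier in this unit (this is the readers-preference version
described above):
semaphore writer = 1;          /* guards the file / critical section */
semaphore mutex  = 1;          /* guards readerCount */
int readerCount  = 0;

void writer_process() {
    wait(writer);              /* exclusive access for the writer */
    /* ... write to the file ... */
    signal(writer);
}

void reader_process() {
    wait(mutex);
    readerCount++;
    if (readerCount == 1)      /* first reader locks out the writers */
        wait(writer);
    signal(mutex);

    /* ... read the file ... */

    wait(mutex);
    readerCount--;
    if (readerCount == 0)      /* last reader lets the writers back in */
        signal(writer);
    signal(mutex);
}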
