BCS303 - Module 2
OPERATING SYSTEMS
(BCS303)
MODULE-2
SESSION 15
Process Concept
∙ A process is a program under execution.
∙ Its current activity is indicated by PC (Program Counter) and the contents of the processor's
registers.
The Process:
Process memory is divided into four sections, as shown in Figure 3.1:
∙ The stack is used to store temporary data such as local variables, function parameters, function
return values, return address etc.
∙ The heap is memory that is dynamically allocated during process run time.
∙ The data section stores global variables.
∙ The text section comprises the compiled program code.
∙ Note that there is free space between the stack and the heap: the stack grows downwards into this space and the heap grows upwards into it.
Process State:
A process has five states, as shown in Figure 3.2. Each process may be in one of the following states: new, ready, running, waiting, or terminated.
Figure 3.2: Diagram of process state
The PCB (Process Control Block), shown in Figure 3.3, simply serves as the repository for any information that may vary from process to process.
Context switching: The task of switching the CPU from one process to another is called context switching. Context-switch times are highly dependent on hardware support (e.g., the number of CPU registers that must be saved and restored).
∙ Whenever an interrupt occurs (hardware or software interrupt), the state of the currently
running process is saved into the PCB and the state of another process is restored from the PCB
to the CPU.
∙ Context switch time is an overhead, as the system does not do useful work while switching.
Figure 3.4: Diagram showing CPU switch from process to process
Process Scheduling
Scheduling Queues:
∙ As processes enter the system, they are put into a job queue, which consists of all processes in the
system.
∙ The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list.
∙ A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue, as sketched below.
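A minimal sketch of this linked-list arrangement in C. This is an illustration only, not any real kernel's layout; all field names are hypothetical:

struct pcb {
    int pid;                 /* process ID */
    int state;               /* new, ready, running, waiting, terminated */
    unsigned long pc;        /* saved program counter */
    struct pcb *next;        /* pointer to the next PCB in the ready queue */
};

struct ready_queue {
    struct pcb *head;        /* first PCB in the list */
    struct pcb *tail;        /* final PCB in the list */
};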
The ready queue and various I/O device queues are shown in Figure 3.5.
Figure 3.5: The ready queue and various I/O device queues
A common representation of process scheduling is a queueing diagram as depicted in Figure 3.6. Each
rectangular box in the diagram represents a queue. Two types of queues are present: the ready queue and
a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate
the flow of processes in the system.
Figure 3.6: Queueing-diagram representation of process scheduling.
A new process is initially put in the ready queue. It waits in the ready queue until it is selected for
execution and is given the CPU. Once the process is allocated the CPU and is executing, one of several
events could occur:
∙ The process could issue an I/O request, and then be placed in an I/O queue.
∙ The process could create a new subprocess and wait for its termination.
∙ The process could be removed forcibly from the CPU, as a result of an interrupt,
and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state, and is
then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
An efficient scheduling system will select a good mix of CPU-bound processes and I/O-bound processes.
processes.
∙ If the scheduler selects more I/O-bound processes, then the I/O queue will be full and the ready queue will be empty.
∙ If the scheduler selects more CPU-bound processes, then the ready queue will be full and the I/O queue will be empty.
Time-sharing systems employ a medium-term scheduler. It swaps processes out of the ready queue and later swaps them back in. When system load gets high, this scheduler will swap one or more processes out of the ready queue for a few seconds, in order to allow smaller, faster jobs to finish quickly and clear the system. Figure 3.8 shows the employment of the medium-term scheduler.
Review Questions:
1. What is a process?
2. How is a process different from a thread?
3. List the components of a process.
4. What is a Process Control Block (PCB)?
5. What do you mean by context switching?
6. What is the sequence of steps involved in context switching?
7. Differentiate between short-term, medium-term and long-term schedulers.
8. What is the difference between CPU-bound and I/O-bound processes?
SESSION 16
Operations on Processes
1. Process creation
A process may create several new processes. The creating process is called a parent process,
and the new processes are called the children of that process. Each of these new processes may
in turn create other processes. Every process has a unique process ID.
∙ Figure 3.9 shows the tree of processes in a typical Solaris system. The process at the top of the tree is the 'sched' process, with a PID of 0. The 'sched' process creates several child processes: init, pageout and fsflush. Pageout and fsflush are responsible for managing memory and file systems, respectively. The init process, with a PID of 1, serves as the parent process for all user processes.
A process will need certain resources (CPU time, memory, files, I/O devices) to accomplish its task. When a process creates a subprocess, the subprocess may be able to obtain its resources in two ways:
∙ directly from the operating system, or
∙ from the resources of the parent process. The resource can be taken from the parent in two ways:
▪ The parent may partition its resources among its children.
▪ The parent may share the resources among several children.
In UNIX, a child process is created by the fork() system call. The fork() system call, if successful, returns the PID of the child process to the parent and returns zero to the child process. On failure, it returns -1 to the parent. The process IDs of the current process and of its direct parent can be obtained using the getpid() and getppid() system calls respectively.
The parent waits for the child process to complete with the wait() system call. When the child process completes, the parent process resumes and completes its execution.
There are two options for the parent process after creating the child:
∙ Wait for the child process to terminate and then continue execution. The parent makes a
wait()system call.
∙ Run concurrently with the child, continuing to execute without waiting.
Two possibilities for the address space of the child relative to the parent:
∙ The child may be an exact duplicate of the parent, sharing the same program and data segments in memory. Each will have its own PCB, including program counter, registers, and PID. This is the behaviour of the fork() system call in UNIX.
∙ The child process may have a new program loaded into its address space, with all new
code and data segments. This is the behaviour of the spawn system calls in
Windows.
In Windows, the child process is created using the CreateProcess() function. CreateProcess() returns a nonzero value if the child is created and returns zero if it is not.
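A minimal sketch of the UNIX pattern described above, assuming a POSIX system: the parent calls fork() and then wait(); the child loads a new program with execlp():

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>     /* wait() */
#include <unistd.h>       /* fork(), execlp(), getpid(), getppid() */

int main(void) {
    pid_t pid = fork();                  /* create a child process */
    if (pid < 0) {                       /* fork failed: -1 returned to parent */
        perror("fork");
        exit(1);
    } else if (pid == 0) {               /* child: fork() returned 0 */
        printf("child pid=%d ppid=%d\n", getpid(), getppid());
        execlp("/bin/ls", "ls", NULL);   /* load a new program into the child */
    } else {                             /* parent: fork() returned child's PID */
        wait(NULL);                      /* wait for the child to complete */
        printf("child %d complete\n", pid);
    }
    return 0;
}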
2. Process Termination
A process terminates when it finishes executing its last statement and asks the operating system to delete it, by using the exit() system call. All of the resources assigned to the process, such as memory, open files, and I/O buffers, are deallocated by the operating system.
A process can cause the termination of another process by using an appropriate system call. The parent process can terminate its child processes by knowing the PIDs of the children.
A parent may terminate the execution of its children for a variety of reasons, such as:
∙ The child has exceeded its usage of the resources it has been allocated.
∙ The task assigned to the child is no longer required.
∙ The parent is exiting, and the operating system terminates all the children. This
is called cascading termination.
Inter-process Communication (IPC)
Processes executing may be either co-operative or independent processes.
∙ Independent processes – processes that cannot affect other processes or be affected by other processes executing in the system.
∙ Cooperating processes – processes that can affect other processes or be affected by other processes executing in the system.
There are two fundamental models of IPC: shared memory and message passing. Shared memory is useful for sending large blocks of data, whereas message passing is useful for sending small amounts of data.
Shared Memory
∙ A region of shared memory is created within the address space of a process that needs to communicate. Other processes that need to communicate attach to this shared-memory region.
∙ The form of the data and the location of the shared-memory area are decided by the processes. Generally, a few messages must be passed back and forth between the cooperating processes first, in order to set up and coordinate the shared-memory access.
∙ The processes must take care that they do not write to the same shared-memory location at the same time.
Producer-Consumer Problem
This is a classic example, in which one process produces data and another process consumes the data. The data is passed via an intermediary buffer (in shared memory). The producer puts data into the buffer and the consumer takes data out of the buffer. A producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced. In this situation, the consumer must wait until an item is produced.
There are two types of buffers into which information can be put:
∙ Unbounded buffer: there is no limit on the size of the buffer, and hence no limit on how much the producer can produce; but the consumer may have to wait for new items.
∙ Bounded buffer: the buffer size is fixed. The producer has to wait if the buffer is full, and the consumer has to wait if the buffer is empty.
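A sketch of the bounded buffer described above, using a circular array with in/out indices. In a real program the buffer would live in a shared-memory region (for example one created with shmget() or mmap()) and the busy-waiting loops would be replaced by proper synchronization; the names here are illustrative:

#define BUFFER_SIZE 10

typedef struct { int value; } item;

item buffer[BUFFER_SIZE];
int in = 0;    /* next free position */
int out = 0;   /* next full position */

/* producer: waits (here by spinning) while the buffer is full */
void produce(item next) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                              /* buffer full: do nothing */
    buffer[in] = next;
    in = (in + 1) % BUFFER_SIZE;
}

/* consumer: waits while the buffer is empty */
item consume(void) {
    while (in == out)
        ;                              /* buffer empty: do nothing */
    item next = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return next;
}

Note that with this in/out scheme the buffer can hold at most BUFFER_SIZE - 1 items, the classic trade-off of the circular-buffer representation.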
Message-Passing Systems:
A mechanism that allows processes to communicate without sharing an address space. It is used in distributed systems.
∙ Message-passing systems use system calls such as "send message" and "receive message".
∙ A communication link must be established between the cooperating processes before messages can be sent.
∙ There are three methods of creating the link between the sender and the receiver:
o Direct or indirect communication (naming)
o Synchronous or asynchronous communication (synchronization)
o Automatic or explicit buffering.
a) Direct communication: the sender and receiver must explicitly know each other's name. The syntax for the send() and receive() functions is as follows:
∙ send (P, message) – send a message to process P
∙ receive (Q, message) – receive a message from process Q
Disadvantage of direct communication – any change in the identifier of a process may require changes throughout the system (sender and receiver), wherever messages are sent and received.
b) Indirect communication: a mailbox or port is used to send and receive messages. A mailbox is an object into which messages can be sent and received. It has a unique ID, and messages are sent and received using this identifier. Two processes can communicate only if they have a shared mailbox. The send and receive functions are:
∙ send (A, message) – send a message to mailbox A
∙ receive (A, message) – receive a message from mailbox A
Blocking (synchronous) send – the sending process is blocked (waits) until the message is received by the receiving process or the mailbox.
Non-blocking (asynchronous) send – the sender sends the message and continues (does not wait).
Blocking receive – the receiver blocks until a message is available.
Non-blocking receive – the receiver retrieves either a valid message or a null.
Buffering: messages exchanged by communicating processes reside in a temporary queue. Such a queue can be implemented with three capacities:
Zero capacity – the buffer size is zero (the buffer does not exist). Messages are not stored in the queue; the sender must block until the receiver accepts the message.
Bounded capacity – the queue has a fixed size (n). The sender must block if the queue is full; after sending 'n' messages without any being received, the sender is blocked.
Unbounded capacity – the queue has infinite capacity. The sender never blocks.
Review Questions:
SESSION 17
Multithreaded Programming
∙ A thread is a basic unit of CPU utilization.
∙ It consists of
▪ thread ID
▪ PC
▪ register-set and
▪ stack.
∙ It shares with other threads belonging to the same process its code section & data section.
∙ A traditional (or heavyweight) process has a single thread of control.
∙ If a process has multiple threads of control, it can perform more than one task at a time. Such a process is called a multithreaded process.
In some situations, a single application may be required to perform several similar tasks. For example, a web server may create a separate thread for each client request. This allows the server to service several concurrent requests.
Multithreading Models
Many-to-One Model
∙ Many user-level threads are mapped to one kernel thread.
Advantages:
▪ Thread management is done by the thread library in user space, so it is efficient.
Disadvantages:
▪ The entire process will block if a thread makes a blocking system-call.
▪ Multiple threads are unable to run in parallel on multiprocessors.
∙ For example:
▪ Solaris green threads
▪ GNU portable threads.
Fig: Many-to-one model
One-to-One Model
∙ Each user thread is mapped to a kernel thread.
Advantages:
▪ It provides more concurrency by allowing another thread to run when a thread
makes a blocking system-call.
▪ Multiple threads can run in parallel on multiprocessors.
Disadvantage:
▪ Creating a user thread requires creating the corresponding kernel thread.
∙ For example:
▪ Windows NT/XP/2000, Linux
Many-to-Many Model
∙ Many user-level threads are multiplexed onto a smaller or equal number of kernel threads.
∙ For example:
▪ Tru64 UNIX
Review Questions:
1. What is a thread?
2. What does a thread comprise?
3. What is a multithreaded process?
4. Explain multithreaded processes in detail.
5. Explain single-threaded and multithreaded processes.
6. Describe the motivation for multithreaded processes.
7. Explain the benefits of multithreaded processes.
8. Write a note on multithreading models.
SESSION 18
Thread Libraries
∙ A thread library provides the programmer with an API for the creation and management of threads.
Pthreads
∙ This is a POSIX standard API for thread creation and synchronization.
∙ This is a specification for thread-behavior, not an implementation.
∙ OS designers may implement the specification in any way they wish.
∙ Commonly used in: UNIX and Solaris.
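A small Pthreads sketch showing the API in use, summing the integers 1..N in a separate thread (this mirrors the standard textbook example; error checking is omitted for brevity):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                      /* data shared with the thread */

void *runner(void *param) {   /* the thread's start routine */
    int upper = atoi(param);
    sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(int argc, char *argv[]) {
    pthread_t tid;            /* thread identifier */
    pthread_attr_t attr;      /* thread attributes */

    if (argc != 2) {
        fprintf(stderr, "usage: a.out <integer>\n");
        return 1;
    }
    pthread_attr_init(&attr);                     /* default attributes */
    pthread_create(&tid, &attr, runner, argv[1]); /* create the thread */
    pthread_join(tid, NULL);                      /* wait for it to finish */
    printf("sum = %d\n", sum);
    return 0;
}

Compile with cc -pthread on most UNIX-like systems.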
Win32 threads
∙ Implements the one-to-one mapping
∙ Each thread contains
▪ A thread id
▪ Register set
▪ Separate user and kernel stacks
▪ Private data storage area
∙ The register set, stacks, and private storage area are known as the context of the thread.
∙ The primary data structures of a thread include:
▪ ETHREAD (executive thread block)
▪ KTHREAD (kernel thread block)
▪ TEB (thread environment block)
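A comparable Win32 sketch using CreateThread() and WaitForSingleObject(). This is a minimal illustration of the one-to-one model in use, compiled on Windows against <windows.h>:

#include <windows.h>
#include <stdio.h>

DWORD WINAPI worker(LPVOID param) {   /* the thread's start routine */
    int upper = *(int *)param;
    int sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    printf("sum = %d\n", sum);
    return 0;
}

int main(void) {
    int upper = 10;
    DWORD tid;
    HANDLE h = CreateThread(NULL, 0, worker, &upper, 0, &tid);
    if (h != NULL) {
        WaitForSingleObject(h, INFINITE);  /* wait for the thread */
        CloseHandle(h);                    /* release the handle */
    }
    return 0;
}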
Java Threads
∙ Threads are the basic model of program execution in a Java program, and the Java language and its API provide a rich set of features for the creation and management of threads.
∙ All Java programs comprise at least a single thread of control.
∙ Two techniques for creating threads:
1. Create a new class that is derived from the Thread class and override its run() method.
2. Define a class that implements the Runnable interface. The Runnable interface is defined as follows:
public interface Runnable { public abstract void run(); }
Threading Issues
Thread Pools
∙ The basic idea is to
▪ create a no. of threads at process-startup and
▪ place the threads into a pool (where they sit and wait for work).
∙ Procedure:
1. When a server receives a request, it awakens a thread from the pool.
2. If a thread is available, the request is passed to it for service (see the sketch after this list).
3. Once the service is completed, the thread returns to the pool.
∙ Advantages:
▪ Servicing a request with an existing thread is usually faster than waiting to create a thread.
▪ The pool limits the no. of threads that exist at any one point.
∙ The no. of threads in the pool can be based on factors such as
▪ no. of CPUs
▪ amount of memory and
▪ expected no. of concurrent client-requests.
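A bare-bones sketch of the procedure above using Pthreads: a fixed number of worker threads is created at startup, and they wait on a small queue of requests. All names and sizes are illustrative assumptions; a production pool would also need shutdown handling:

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4
#define QUEUE_MAX 16

typedef void (*task_fn)(int);

/* a tiny request queue protected by a mutex and condition variable */
static struct { task_fn fn; int arg; } queue[QUEUE_MAX];
static int head = 0, tail = 0, len = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

static void *worker(void *unused) {
    (void)unused;
    for (;;) {                                    /* the thread sits in the pool... */
        pthread_mutex_lock(&lock);
        while (len == 0)
            pthread_cond_wait(&nonempty, &lock);  /* ...waiting for work */
        task_fn fn = queue[head].fn;
        int a = queue[head].arg;
        head = (head + 1) % QUEUE_MAX;
        len--;
        pthread_mutex_unlock(&lock);
        fn(a);                                    /* service the request, */
    }                                             /* then return to the pool */
    return NULL;
}

static int submit(task_fn fn, int arg) {          /* awaken a pool thread */
    pthread_mutex_lock(&lock);
    if (len == QUEUE_MAX) { pthread_mutex_unlock(&lock); return -1; }
    queue[tail].fn = fn;
    queue[tail].arg = arg;
    tail = (tail + 1) % QUEUE_MAX;
    len++;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
    return 0;
}

static void handle(int id) { printf("servicing request %d\n", id); }

int main(void) {
    pthread_t tids[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)           /* create threads at startup */
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        submit(handle, i);
    pthread_exit(NULL);   /* exit main thread only; workers keep running */
}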
Scheduler Activations
∙ Both the M:M and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application.
∙ One scheme for communication between the user-thread library and the kernel is known as scheduler activation.
∙ Scheduler activations provide upcalls – a communication mechanism from the kernel to the thread library.
∙ This communication allows an application to maintain the correct number of kernel threads.
Review Questions:
SESSION 19
PROCESS SCHEDULING
Basic Concepts: In a single-processor system, only one process may run at a time; other processes must wait until the CPU is free and can be rescheduled. The main objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization.
CPU-I/O Burst Cycle: Process execution consists of a cycle of CPU execution and I/O wait, as shown in the figure below. Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution. An I/O-bound program typically has many short CPU bursts. A CPU-bound program might have a few long CPU bursts.
CPU SCHEDULER
The CPU scheduler selects a process from the ready queue and allocates the CPU to it. The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or an unordered linked list. The records in the queues are generally the process control blocks (PCBs) of the processes.
CPU SCHEDULING
Non-preemptive Scheduling
Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
Preemptive Scheduling
This is driven by the idea of prioritized computation. Processes that are runnable may be temporarily suspended so that a higher-priority process can run.
Disadvantages:
1) Incurs a cost associated with access to shared-data.
2) Affects the design of the OS kernel.
Dispatcher
It gives control of the CPU to the process selected by the short-term scheduler. The function involves:
1) Switching context
2) Switching to user mode &
3) Jumping to the proper location in the user program to restart that program.
It should be as fast as possible, since it is invoked during every process switch.
Dispatch latency is the time taken by the dispatcher to stop one process and start another running.
SCHEDULING CRITERIA USED IN OS
The various scheduling criteria used in OS are:
1. CPU Utilization
2. Throughput
3. Turnaround time
4. Waiting time
5. Response time
CPU Utilization: We must keep the CPU as busy as possible. In a real system, CPU utilization ranges from 40% (lightly loaded) to 90% (heavily loaded).
Throughput: The number of processes completed per time unit. For long processes, throughput may
be 1 process per hour; For short transactions, throughput might be 10 processes per second.
Turnaround Time: The interval from the time of submission of a process to the time of completion.
Turnaround time is the sum of the periods spent in waiting to get into memory, waiting in the ready
queue, executing on the CPU and doing I/O.
Waiting Time: The amount of time that a process spends waiting in the ready-queue.
Response Time: The time from the submission of a request until the first response is
produced.
SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be
allocated the CPU. Following are some scheduling algorithms:
1) FCFS scheduling (First Come First Served)
2) Round Robin scheduling
3) SJF scheduling (Shortest Job First)
4) SRT scheduling(Shortest Remaining Time First)
5) Priority scheduling with and without preemption.
6) Multilevel Queue scheduling and
7) Multilevel Feedback Queue scheduling
FCFS SCHEDULING
The process that requests the CPU first is allocated the CPU first. That is, the process that arrives in the ready queue first gets scheduled first, if the CPU is free. This is a non-preemptive scheduling algorithm. The implementation is easily managed with a FIFO queue.
Procedure:
1) When a process enters the ready-queue, its PCB is linked onto the tail of the queue.
2) When the CPU is free, the CPU is allocated to the process at the queue's head.
3) The running process is then removed from the queue.
Disadvantages:
1) Convoy effect: all other processes wait for one big process to get off the CPU.
2) Non-preemptive (a process keeps the CPU until it releases it).
3) Not good for time-sharing systems.
4) The average waiting time is generally not minimal.
Example: Suppose that the processes arrive in the order P1, P2, P3.
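The burst-time table for this example did not survive extraction; assuming the common textbook values of 24 ms, 3 ms and 3 ms for P1, P2 and P3, the computation runs as follows:
Gantt chart: P1 [0-24], P2 [24-27], P3 [27-30]
Waiting times: P1 = 0, P2 = 24, P3 = 27
Average waiting time = (0 + 24 + 27)/3 = 17 ms.
If the same processes arrived in the order P2, P3, P1 instead, the average waiting time would drop to (6 + 0 + 3)/3 = 3 ms, which illustrates the convoy effect.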
Review Questions:
3. Explain the advantages and disadvantages of FCFS algorithm
4. Consider the following set of processes with CPU burst time (in ms)
Process Arrival time Burst Time
P0 0 6
P1 1 3
P2 2 1
P3 3 4
Compute the waiting time and average turnaround time for the above process using FCFS scheduling
algorithm.
5. Write the scheduling criteria used for CPU scheduling.
SESSION 20
SJF SCHEDULING (SHORTEST JOB FIRST)
The CPU is assigned to the process that has the smallest next CPU burst. If two processes have the same length of next CPU burst, FCFS scheduling is used to break the tie. For long-term scheduling in a batch system, we can use the process time limit specified by the user as the 'length'. SJF can't be implemented at the level of short-term scheduling, because there is no way to know the length of the next CPU burst.
Advantage: The SJF is optimal, i.e. it gives the minimum average waiting time for a given set of
processes.
Example (non-preemptive SJF): Consider the following set of processes, with the length of the CPU-burst time given in milliseconds (processes are scheduled according to their burst times).
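The process table for this example was lost in extraction; assuming the common textbook values (all arriving at time 0) of P1 = 6 ms, P2 = 8 ms, P3 = 7 ms and P4 = 3 ms:
Gantt chart: P4 [0-3], P1 [3-9], P3 [9-16], P2 [16-24]
Waiting times: P1 = 3, P2 = 16, P3 = 9, P4 = 0
Average waiting time = (3 + 16 + 9 + 0)/4 = 7 ms.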
Example (preemptive SJF): Consider the following set of processes, with the length of the CPU
burst time given in milliseconds.
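The table for this example was also lost, but the waiting-time arithmetic below implies the standard textbook data: P1 (arrival 0, burst 8), P2 (1, 4), P3 (2, 9), P4 (3, 5). Under preemptive SJF (shortest remaining time first) the schedule is:
Gantt chart: P1 [0-1], P2 [1-5], P4 [5-10], P1 [10-17], P3 [17-26]
P1 is preempted at time 1 because P2's burst (4 ms) is shorter than P1's remaining time (7 ms).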
The average waiting time is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 ms.
Review Questions:
1. Consider the following set of processes:
Process Arrival time Burst Time
P1 0 10
P2 2 5
P3 3 2
P4 5 20
Draw Gantt charts and calculate average waiting time, average turnaround time using following CPU
scheduling algorithm
i. Preemptive SJF
ii. Non-preemptive SJF
2. For the following example, calculate the average waiting time and average turnaround time using the preemptive SJF CPU scheduling algorithm.
Process Arrival time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
SESSION 21
PRIORITY SCHEDULING
A priority is associated with each process. The CPU is allocated to the process with the highest
priority. Equal-priority processes are scheduled in FCFS order. Priorities can be defined either
internally or externally. Internally-defined priorities use some measurable quantity to compute the
priority of a process.
For example: time limits, memory requirements, no. of open files.
Disadvantage: Indefinite blocking (starvation), where low-priority processes may be left waiting indefinitely for the CPU.
Solution: Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
Example: Consider the following set of processes, assumed to have arrived at time 0, in the order P1, P2, ..., P5, with the length of the CPU-burst time given in milliseconds.
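The table for this example was lost in extraction; assuming the standard textbook data of (burst time, priority) P1 = (10, 3), P2 = (1, 1), P3 = (2, 4), P4 = (1, 5), P5 = (5, 2), with a smaller number meaning higher priority:
Gantt chart: P2 [0-1], P5 [1-6], P1 [6-16], P3 [16-18], P4 [18-19]
Waiting times: P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 ms.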
Review Questions:
1.Consider the following set of processes given in the table
Process Arrival time Burst Time Priority
P1 0 10 4
P2 3 5 2
P3 3 6 6
P4 5 4 3
Consider a larger number as higher priority. Calculate the average waiting time and turnaround time, and draw the Gantt chart for preemptive priority scheduling.
2. Given below is the snapshot of processes. Draw Gantt charts using preemptive and non-preemptive
priority scheduling algorithm. (A smaller number has a higher priority) Also, calculate the average
waiting time and turnaround time for both.
Process Arrival time Burst Time Priority
P1 0 6 4
P2 3 5 2
P3 3 3 6
P4 5 5 3
SESSION 22
ROUND ROBIN (RR) SCHEDULING
It is designed especially for time-sharing systems. It is similar to FCFS scheduling, but with preemption. A small unit of time, called a time quantum (or time slice), is defined, which generally ranges from 10 to 100 ms.
The ready-queue is treated as a circular queue. The CPU scheduler goes around the ready-queue and
allocates the CPU to each process for a time interval of up to one time quantum. To implement this
algorithm, the ready-queue is kept as a FIFO queue of processes
The CPU scheduler:
1. Picks the first process from the ready queue,
2. Sets a timer to interrupt after one time quantum, and
3. Dispatches the process.
One of two things will then happen:
1. The process may have a CPU burst of less than one time quantum. In this case, the process itself will release the CPU voluntarily.
2. If the CPU burst of the currently running process is longer than one time quantum, the timer will go off and cause an interrupt to the OS. The process will then be put at the tail of the ready queue.
Example: Consider the following set of processes that arrive at time 0, with the length of the CPU-burst time given in milliseconds (time quantum = 4 ms).
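The table for this example was lost in extraction; assuming the usual textbook values of P1 = 24 ms, P2 = 3 ms and P3 = 3 ms with a 4 ms quantum:
Gantt chart: P1 [0-4], P2 [4-7], P3 [7-10], P1 [10-14], P1 [14-18], P1 [18-22], P1 [22-26], P1 [26-30]
Waiting times: P1 = 10 - 4 = 6, P2 = 4, P3 = 7
Average waiting time = (6 + 4 + 7)/3 = 17/3 ≈ 5.66 ms.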
NOTE:
The RR scheduling algorithm is preemptive. No process is allocated the CPU for more than one time
quantum in a row. If a process' CPU burst exceeds the time quantum, that process is preempted and is
put back in the ready-queue.
The performance of the algorithm depends heavily on the size of the time quantum:
∙ If the time quantum is very large, the RR policy is the same as the FCFS policy.
∙ If the time quantum is very small, the RR approach appears to the users as though each of the n processes has its own processor running at 1/n the speed of the real processor.
In practice, we also need to consider the effect of context switching on the performance of RR scheduling:
1) The larger the time quantum, the less time is spent on context switching.
2) The smaller the time quantum, the more overhead is added for context switching.
MULTILEVEL QUEUE SCHEDULING
The ready queue is partitioned into several separate queues; for example, separate queues might be used for foreground and background processes.
MULTILEVEL FEEDBACK QUEUE SCHEDULING
A process may move between queues. The basic idea is to separate processes according to the features of their CPU bursts.
For example: If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. If a process waits too long in a lower-priority queue, it may be moved to a higher-priority queue. This form of aging prevents starvation.
MULTIPLE-PROCESSOR SCHEDULING
If multiple CPUs are available, the scheduling problem becomes more complex. The two approaches are:
Asymmetric Multiprocessing
The basic idea: a single processor (the master server) is responsible for all scheduling decisions, I/O processing and other system activities. The other processors execute only user code.
Advantage: This is simple because only one processor accesses the system data structures, reducing
the need for data sharing.
Symmetric Multiprocessing
The basic idea is: Each processor is self-scheduling. To do scheduling, the scheduler for each
processor examines the ready-queue and selects a process to execute.
Restriction: We must ensure that two processors do not choose the same process and that
processes are not lost from the queue.
Processor Affinity: In an SMP system, migration of processes from one processor to another is avoided; instead, a process is kept running on the same processor. This is known as processor affinity. The two forms are:
Soft affinity: the OS tries to keep a process on one processor as a matter of policy, but cannot guarantee it; it is possible for a process to migrate between processors.
Hard affinity: the OS provides the ability for a process to specify that it is not to migrate to other processors. E.g.: Solaris OS.
Load Balancing
This concept attempts to keep the workload evenly distributed across all processors in an SMP
system. The two approaches:
1) Push Migration
⮚ A specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving processes from overloaded processors to idle or less-busy processors.
2) Pull Migration
⮚ An idle processor pulls a waiting task from a busy processor.
THREAD SCHEDULING
On most operating systems, it is kernel-level threads, not processes, that are scheduled by the OS. User-level threads are managed by a thread library, and the kernel is unaware of them. To run on a CPU, user-level threads must be mapped to an associated kernel-level thread.
Contention Scope
Two approaches:
1) Process-Contention scope
⮚ On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP (lightweight process).
⮚ Competition for the CPU takes place among threads belonging to the same process.
2) System-Contention scope
⮚ The process of deciding which kernel thread to schedule on the CPU.
⮚ Competition for the CPU takes place among all threads in the system.
⮚ Systems using the one-to-one model schedule threads using only SCS.
Pthread Scheduling
The Pthread API allows specifying either PCS or SCS during thread creation. Pthreads provides the following two functions for getting and setting the contention-scope policy:
1) pthread_attr_setscope(pthread_attr_t *attr, int scope)
2) pthread_attr_getscope(pthread_attr_t *attr, int *scope)
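A short sketch of these two calls in use, mirroring the standard textbook program (error checking omitted). PTHREAD_SCOPE_PROCESS requests PCS scheduling and PTHREAD_SCOPE_SYSTEM requests SCS:

#include <pthread.h>
#include <stdio.h>

void *runner(void *param) {       /* thread work would go here */
    (void)param;
    pthread_exit(0);
}

int main(void) {
    pthread_t tid;
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);
    /* query the default contention scope */
    if (pthread_attr_getscope(&attr, &scope) == 0) {
        if (scope == PTHREAD_SCOPE_PROCESS)
            printf("default: PTHREAD_SCOPE_PROCESS (PCS)\n");
        else if (scope == PTHREAD_SCOPE_SYSTEM)
            printf("default: PTHREAD_SCOPE_SYSTEM (SCS)\n");
    }
    /* request system contention scope, then create the thread */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    pthread_create(&tid, &attr, runner, NULL);
    pthread_join(tid, NULL);
    return 0;
}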
Review Questions:
1. Consider the following set of processes:
Process Arrival time Burst Time
P1 0 6
P2 2 3
P3 4 3
P4 5 5
Draw Gantt charts and calculate the average waiting time and average turnaround time using RR scheduling (quantum = 1 ms).