
6910 OS Revision Important Topics

The document discusses standard descriptors in Unix/Linux, including standard input, output, and error, along with redirection techniques for managing these streams. It also covers inter-process communication methods such as pipes and threads, detailing their advantages and scheduling algorithms like FCFS, SJF, and priority scheduling. Additionally, it addresses the critical section problem and solutions like Peterson's algorithm for ensuring mutual exclusion among processes.

Standard Descriptors in Unix/Linux

• Three files are automatically opened by the kernel for every process:
– one to read its input from
– one to send its output to
– one to send its error messages to
• These are known as standard file descriptors. The standard files, their descriptors, and their default attachments are:
– Standard input: 0 (keyboard)
– Standard output: 1 (display screen)
– Standard error: 2 (display screen)
Redirection in UNIX/Linux
Input, output and error redirection in
UNIX/Linux
• Redirection in Linux is a technique that allows users to redirect
– input,
– output, and
– errors to files
• Redirection can be accomplished using special commands or characters
Input, output and error redirection in
UNIX/Linux
• Linux redirection features can be used
– to detach the default files from stdin, stdout, and stderr, and
– to attach other files to them for a single execution of a command.
• The act of detaching the default files from stdin, stdout, and stderr and attaching other files to them is known as input, output, and error redirection.
Input Redirection
• Here is the syntax for input redirection:
command < input-file
or
command 0< input-file
• The keyboard is detached from the stdin of ‘command’ and ‘input-file’ is attached to it, i.e., ‘command’ reads input from ‘input-file’ and not the keyboard.
Output Redirection
• Here is the syntax for output redirection:
command > output-file
or
command 1> output-file
• The display screen is detached from the stdout of ‘command’ and ‘output-file’ is attached to it, i.e., ‘command’ sends output to ‘output-file’ and not the display screen.
Error Redirection
• Here is the syntax for error redirection:
command 2> error-file
• The display screen is detached from the stderr of ‘command’ and ‘error-file’ is attached to it, i.e., error messages are sent to ‘error-file’ and not the display screen.
Pipes
• A pipe acts as a conduit allowing two related
processes to communicate.
• Pipes were one of the first IPC mechanisms in
early UNIX systems.
• They typically provide one of the simpler ways
for processes to communicate with one
another
Ordinary Pipes
• Ordinary pipes allow two processes to communicate in
standard producer–consumer fashion:
– the producer writes to one end of the pipe
• (the write-end)
– and the consumer reads from the other end
• (the read-end).
• As a result, ordinary pipes are unidirectional, allowing
only one-way communication.
• If two-way communication is required, two pipes must
be used, with each pipe sending data in a different
direction.
Ordinary Pipes
• On UNIX systems, ordinary pipes are constructed using the
function
pipe(int fd[2])
• pipe() takes one argument: the address of an integer array of two elements.
• This function creates a pipe that is accessed through the int fd[]
file descriptors:
– fd[0] is the read-end of the pipe, and
– fd[1] is the write-end.
• UNIX treats a pipe as a special type of file.
• Thus, pipes can be accessed using ordinary read() and write()
system calls.
Named Pipe (FIFO)
• Used for communication between related or
unrelated processes on the same UNIX/Linux system
• Communication can be bidirectional
• no parent–child relationship is required
• On both UNIX and Windows systems, once the processes have finished communicating and have terminated, an ordinary pipe ceases to exist.
• Named pipes, in contrast, continue to exist after the communicating processes have finished.
Named Pipe (FIFO)
• Named pipes are referred to as FIFOs in UNIX systems
• A FIFO is created with the mkfifo() system call and
manipulated with the ordinary
– open(),
– read(),
– write(), and
– close() system calls.
• In Windows named pipes are
– created with the CreateNamedPipe() function, and
– A client can connect to a named pipe using ConnectNamedPipe().
• Communication over the named pipe can be accomplished using
the ReadFile() and WriteFile() functions.
BSD sockets
• BSD sockets are used for communication between related or unrelated processes on the same system, or between processes on different systems.
What output will be at Line A?
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>
int value = 5;
int main()
{
pid_t pid;
pid = fork();
if (pid == 0) { /* child process */
value += 15;
return 0;
}
else if (pid > 0) { /* parent process */
wait(NULL);
printf("PARENT: value = %d",value); /* LINE A */
return 0;
}
}
Answer
• The result is still 5 as the child updates its copy
of value. When control returns to the parent,
its value remains at 5.
Thread
• A thread is a “lightweight” process which executes within the address
space of a process
• A thread is a basic unit of CPU utilization; it comprises
– a thread ID,
– a program counter,
– a register set, and
– a stack
• A traditional (or heavyweight) process has a single thread of control.
• If a process has multiple threads of control, it can perform more than one
task at a time.
• Most software applications that run on modern computers are
multithreaded
• Allowing multiple threads to run simultaneously within a single process is
called multithreading.
Advantages of Multithreading
1. Responsiveness: Multithreading an interactive
application may allow a program to continue running
even if part of it is blocked.
• For Example
– when a user clicks a button it results in the performance of
a time-consuming operation.
– A single-threaded application would be unresponsive to
the user until the operation had completed.
– In contrast, if the time-consuming operation is performed
in a separate thread, the application remains responsive to
the user.
Advantages of Multithreading
2. Resource sharing: Threads share the memory
and the resources of the process to which
they belong by default.
• The benefit of sharing code and data is that
– it allows an application to have several different
threads of activity within the same address space.
Advantages of Multithreading
3. Economy: Allocating memory and resources for process creation is costly.
• For example, in Solaris,
– creating a process is about thirty times slower than creating a thread, and
– context switching is about five times slower.
Advantages of Multithreading
4. Scalability: The benefits of multithreading can be even greater in a multiprocessor architecture, where
– threads may be running in parallel on different processing cores.
– A single-threaded process can run on only one processor, regardless of how many are available.
User and Kernel Threads
• User Threads: user threads are supported above the
kernel and are managed without kernel support
– implemented by a thread library at the user level.
– the kernel is unaware of user-level threads,
– all thread creation and scheduling are done in the user space
– without the need for kernel intervention,
– and therefore are fast to create and manage.
• Kernel Threads: kernel threads are supported and
managed directly by the operating system.
• All operating systems—including Windows, Linux, Mac OS
X, and Solaris— support kernel threads.
CPU-scheduler
• CPU-scheduling decisions may take place under the
following four circumstances:
1. When a process switches from the running state to the waiting
state
• (for example, as the result of an I/O request or an invocation of wait()
for the termination of a child process)
2. When a process switches from the running state to the ready
state
• (for example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready
state
• (for example, at completion of I/O)
4. When a process terminates
Scheduling Criteria
• CPU utilization:
– We want to keep the CPU as busy as possible.
• Throughput: the number of processes that are completed per time unit.
– If the CPU is busy executing processes, then work is being done.
• Turnaround time: The interval from the time
of submission of a process to the time of
completion is the turnaround time.
Scheduling Criteria
• Waiting time: Waiting time is the sum of the
periods spent waiting in the ready queue.
• Response time: It is the time from the
submission of a request until the first
response is produced.
Optimization Criteria
It is desirable to
• Maximize CPU utilization
• Maximize throughput
• Minimize turnaround time
• Minimize waiting time
• Minimize response time.
FCFS: First-Come, First-Served Scheduling
• The FCFS scheduling algorithm is nonpreemptive.
• The process that requests the CPU first is allocated the CPU first, regardless of its CPU burst size.
• Example:
Process   Burst Time
P1        24
P2        3
P3        3
• If the processes arrive in the order P1, P2, P3, and are served in FCFS order:
FCFS: First-Come, First-Served Scheduling
• The Gantt chart illustrates this schedule, including the start and finish times of each of the participating processes:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
• Waiting times: P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17 milliseconds.
FCFS: First-Come, First-Served Scheduling
• If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
• Waiting times: P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3 milliseconds.
Main Disadvantage of FCFS Algorithm
• There is a convoy effect as all the other
processes wait for the one big process to get
off the CPU.
• If one process is CPU-bound and the rest are I/O-bound, we see the convoy effect.
• This effect results in lower CPU and device
utilization than might be possible if the
shorter processes were allowed to go first.
Shortest-Job-First (SJF) scheduling
• When the CPU is available, it is assigned to the
process that has the smallest next CPU burst.
• A more appropriate term for this scheduling method would be the shortest-next-CPU-burst algorithm.
• The SJF algorithm is non-preemptive.
• But the SJF algorithm also has a preemptive version.
• Preemptive SJF scheduling is sometimes called
shortest-remaining- time-first scheduling.
Non-preemptive (SJF) scheduling
• As an example of SJF scheduling, consider the following set of processes, with the length of the CPU burst given in milliseconds:
Process   Burst Time
P1        6
P2        8
P3        7
P4        3
| P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |
• The average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
Prediction of the length of the next CPU
burst
• For short-term CPU scheduling, there is no way to know the length of the next CPU burst in advance.
• One approach is to approximate SJF scheduling by assuming that the next CPU burst will be similar in length to the previous ones.
• The next CPU burst is generally predicted as an exponential
average of the measured lengths of previous CPU bursts. Let tn
be the length of the nth CPU burst and let τn+1 be our predicted
value for the next CPU burst.

τn+1 = α tn + (1 − α) τn
• where 0 ≤ α ≤ 1.
Preemptive (SJF) scheduling
• Preemptive SJF scheduling is sometimes called shortest-
remaining-time-first scheduling

• Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 milliseconds.
• Non-preemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.
Priority Scheduling
• The SJF algorithm is a special case of the
general priority-scheduling algorithm.
• A priority is associated with each process, and
the CPU is allocated to the process with the
highest priority.
• Equal-priority processes are scheduled in FCFS
order.
• Here, low numbers represent high priority.
Priority Scheduling
• The average waiting time is 8.2 milliseconds.
Drawback of Priority Scheduling
• A major problem with priority scheduling
algorithms is indefinite blocking, or starvation.
• higher-priority processes can prevent a low-
priority process from ever getting the CPU
• A Solution to the problem of indefinite blockage
of low-priority processes is aging.
• Aging involves gradually increasing the priority of
processes that wait in the system for a long time.
Round-Robin Scheduling
• The round-robin (RR) scheduling algorithm is
designed especially for timesharing systems.
• It is similar to FCFS scheduling, but preemption
is added to enable the system to switch
between processes.
• A small unit of time, called a time quantum or
time slice, is defined.
• A time quantum is generally from 10 to 100
milliseconds in length.
Round-Robin Scheduling
• To implement RR scheduling, we again treat
the ready queue as a FIFO queue of processes.
• New processes are added to the tail of the
ready queue.
• The CPU scheduler picks the first process from
the ready queue,
• sets a timer to interrupt after 1 time quantum,
and dispatches the process.
Round-Robin Scheduling
• The following example uses the processes from the FCFS example (burst times 24, 3, and 3) with a time quantum of 4 milliseconds:
| P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |
• P1 waits for 6 milliseconds (10 − 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds.
• Thus, the average waiting time is 17/3 = 5.66 milliseconds.
Critical Section Problem
• The critical section problem refers to the
problem of how to ensure that at most one
process is executing its critical section at a
given time.
• Critical Section: a piece of code in a cooperating process in which the process may update shared data (a variable, file, database, etc.).
Requirements for Solution to the Critical
Section Problem
• A solution to the critical section problem must satisfy the
following three requirements:
1. Mutual Exclusion: If a process is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those
processes that are not executing in their remainder section can
participate in the decision on which will enter its critical section
next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times
that other processes are allowed to enter their critical sections
after a process has made a request to enter its critical section and
before that request is granted.
Peterson’s Solution
(a software based solution)
• Peterson’s solution is restricted to two processes that
alternate execution between their critical sections
and remainder sections.
• The processes are numbered P0 and P1.
• For convenience, when presenting Pi , we use Pj to
denote the other process; that is, j equals 1 − i.
• Peterson’s solution requires the two processes to
share two data items:
int turn;
boolean flag[2];
Peterson’s Solution
• The variable turn indicates whose turn it is to
enter its critical section.
– That is, if turn == i, then process Pi is allowed to
execute in its critical section.
• The flag array is used to indicate if a process is
ready to enter its critical section.
– For example, if flag[i] is true, this value indicates
that Pi is ready to enter its critical section.
Peterson’s Solution
• To prove property 1, we note that each Pi enters its
critical section only
– if either flag[j] == false or turn == i
• Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this is the only loop in which it can be delayed.
• If Pj is not ready to enter the critical section, then
flag[j] == false, and Pi can enter its critical section.
Peterson’s Solution
• If Pj has set flag[j] to true and is also executing in its
while statement, then either turn == i or turn == j.
• If turn == i, then Pi will enter the critical section. If
turn == j, then Pj will enter the critical section.
• once Pj exits its critical section, it will reset flag[j] to
false, allowing Pi to enter its critical section.
• If Pj resets flag[j] to true, it must also set turn to i.
Peterson’s Solution
• Thus, since Pi does not change the value of the
variable turn while executing the while
statement, Pi will enter the critical section
(progress) after at most one entry by Pj
(bounded waiting).
Peterson’s Solution
• In this solution all the conditions of the critical
section problem have been satisfied:
1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
Deadlocks
• A deadlocked state occurs when two or more
processes are waiting indefinitely for an event
that can be caused only by one of the waiting
processes.
Necessary Conditions
• A deadlock can occur only if four necessary
conditions hold simultaneously in the system
1. Mutual exclusion: At least one resource must be
held in a nonsharable mode; that is, only one process
at a time can use the resource. If another process
requests that resource, the requesting process must
be delayed until the resource has been released.
2. Hold and wait: A process must be holding at least
one resource and waiting to acquire additional
resources that are currently being held by other
processes.
Necessary Conditions
• 3. No preemption: Resources cannot be
preempted; that is, a resource can be released
only voluntarily by the process holding it, after
that process has completed its task.
• 4. Circular wait: A set {P0, P1, ..., Pn} of waiting
processes must exist such that P0 is waiting for a
resource held by P1, P1 is waiting for a resource
held by P2, ..., Pn−1 is waiting for a resource held
by Pn, and Pn is waiting for a resource held by P0.
Methods for Handling Deadlocks
• The deadlock problem can be dealt with in one of three ways:
1. We can use a protocol to prevent or avoid deadlocks,
ensuring that the system will never enter a deadlocked state.
2. We can allow the system to enter a deadlocked state, detect
it, and recover.
3. We can ignore the problem altogether and pretend that
deadlocks never occur in the system.
• The third solution is the one used by most operating
systems, including Linux and Windows.
• It is then up to the application developer to write
programs that handle deadlocks.
