
Unit - 2

 System calls and Types of System calls


 System Programs
 OS Structures(Refer UNIT-1)
 OS Generations
 System boot
 Process concept
 Scheduling
 Operations on process
 Inter Process Communications
 Multithreading Models
 Thread libraries
 Threading issues

S.V.V.D.Venu Gopal, Dept of CSE, SASI INSTITUTE OF ENGINEERING & TECHNOLOGY


System Calls:
A system call is a mechanism that provides the interface between a process and the
operating system. It is a programmatic method in which a computer program requests a
service from the kernel of the OS.

A system call offers the services of the operating system to user programs
via an API (Application Programming Interface). System calls are the only entry points into the
kernel.
In an operating system, a program can execute in two modes:
1. User Mode
2. Kernel Mode.

Architecture of System calls

Steps for System Call


 Step 1) A process executes in user mode until a system call interrupts it.
 Step 2) The system call is then executed in kernel mode on a priority basis.
 Step 3) Once the system call finishes, control returns to user mode.
 Step 4) Execution of the user process resumes in user mode.
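The mode transition above can be observed from any user program. As a minimal sketch (assuming a POSIX system), Python's os module exposes thin wrappers over real system calls such as getpid() and write():

```python
import os

# os.getpid() and os.write() are thin wrappers over the getpid() and
# write() system calls: each call traps from user mode into kernel mode,
# the kernel services the request, and control returns to user mode.
pid = os.getpid()                              # system call: getpid()
os.write(1, f"user process {pid}\n".encode())  # system call: write() to stdout (fd 1)
```

Each os.write() here crosses the user/kernel boundary once; the process resumes in user mode with the call's return value.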

Types of System Calls


There are five types of system calls in an OS:
 Process Control
 File Management
 Device Management
 Information Maintenance
 Communications



Process Control
These system calls perform the tasks of process creation, process termination, etc.

Functions:

 End and Abort


 Load and Execute
 Create Process and Terminate Process
 Wait and Signal Event
 Allocate and free memory
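The process-control functions above can be sketched with the POSIX fork/wait/exit family (here via Python's os wrappers; the exit status 7 is an arbitrary example):

```python
import os, sys

pid = os.fork()                      # Create Process: child is a copy of the parent
if pid == 0:
    sys.exit(7)                      # Terminate Process: child ends with status 7
else:
    _, status = os.waitpid(pid, 0)   # Wait: parent blocks until the child ends
    print("child exited with", os.WEXITSTATUS(status))
```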
File Management
File-management system calls handle file-manipulation jobs such as creating a file, reading and
writing, etc.

Functions:

 Create a file
 Delete file
 Open and close file
 Read, write, and reposition
 Get and set file attributes
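These file-management calls map directly onto the POSIX open/read/write/lseek/close family. A minimal sketch via Python's os wrappers (the file name demo.txt is arbitrary):

```python
import os

fd = os.open("demo.txt", os.O_CREAT | os.O_RDWR | os.O_TRUNC, 0o644)  # create/open
os.write(fd, b"hello syscalls")                                       # write
os.lseek(fd, 0, os.SEEK_SET)                                          # reposition to start
data = os.read(fd, 5)                                                 # read back b"hello"
os.close(fd)                                                          # close
os.unlink("demo.txt")                                                 # delete the file
```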
Device Management
Device management does the job of device manipulation like reading from device buffers,
writing into device buffers, etc.

Functions:

 Request and release device


 Logically attach/ detach devices
 Get and Set device attributes
Information Maintenance
It handles information and its transfer between the OS and the user program.

Functions:

 Get or set time and date


 Get process and device attributes

Communication:
These types of system calls are specially used for interprocess communications.

Functions:

 Create, delete communications connections


 Send, receive message
 Help OS to transfer status information
 Attach or detach remote devices



System Programs:
At the lowest level of a modern system is the hardware; above it sit the operating system,
then the system programs, and finally the application programs. System programs, also
known as system utilities, provide a convenient environment for program development and
execution. Some of them are simply user interfaces to system calls; others are considerably
more complex. They can be divided into these categories:

• File management. These programs create, delete, copy, rename, print, dump, list, and
generally manipulate files and directories.
• Status information. Some programs simply ask the system for the date, time, amount of
available memory or disk space, number of users, or similar status information. Others are
more complex, providing detailed performance, logging, and debugging information.
Typically, these programs format and print the output to the terminal or other output
devices or files or display it in a window of the GUI. Some systems also support a registry,
which is used to store and retrieve configuration information.
• File modification. Several text editors may be available to create and modify the content
of files stored on disk or other storage devices. There may also be special commands to
search contents of files or perform transformations of the text.
• Programming-language support. Compilers, assemblers, debuggers, and interpreters
for common programming languages (such as C, C++, Java, and PERL) are often provided
with the operating system or available as a separate download.
• Program loading and execution. Once a program is assembled or compiled, it must be
loaded into memory to be executed. The system may provide absolute loaders, relocatable
loaders, linkage editors, and overlay loaders. Debugging systems for either higher-level
languages or machine language are needed as well.
• Communications. These programs provide the mechanism for creating virtual
connections among processes, users, and computer systems. They allow users to send
messages to one another’s screens, to browse Web pages, to send e-mail messages, to log in
remotely, or to transfer files from one machine to another.
• Background services. All general-purpose systems have methods for launching certain
system-program processes at boot time. Some of these processes terminate after
completing their tasks, while others continue to run until the system is halted. Constantly
running system-program processes are known as services, subsystems, or daemons.

OS GENERATION:
It is possible to design, code, and implement an operating system specifically for one
machine at one site. More commonly, however, operating systems are designed to run on
any of a class of machines at a variety of sites with a variety of peripheral configurations.
The system must then be configured or generated for each specific computer site, a process
sometimes known as system generation (SYSGEN).
The operating system is normally distributed on disk, on CD-ROM or DVD-ROM, or as an
“ISO” image, which is a file in the format of a CD-ROM or DVD-ROM. To generate a system,
we use a special program. This SYSGEN program reads from a given file, or asks the
operator of the system for information concerning the specific configuration of the
hardware system, or probes the hardware directly to determine what components are
there. The following kinds of information must be determined.



1. What CPU is to be used? What options (extended instruction sets, floating-point
arithmetic, and so on) are installed? For multiple-CPU systems, each CPU may be
described.
2. How will the boot disk be formatted? How many sections, or “partitions,” will it be
separated into, and what will go into each partition?
3. How much memory is available? Some systems will determine this value themselves
by referencing memory location after memory location until an “illegal address”
fault is generated. This procedure defines the final legal address and hence the
amount of available memory.
4. What devices are available? The system will need to know how to address each
device (the device number), the device interrupt number, the device’s type and
model, and any special device characteristics.
5. What operating-system options are desired, or what parameter values are to be
used? These options or values might include how many buffers of which sizes
should be used, what type of CPU-scheduling algorithm is desired, what the
maximum number of processes to be supported is, and so on.

Once this information is determined, it can be used in several ways. At one extreme, a
system administrator can use it to modify a copy of the source code of the operating
system. The operating system then is completely compiled. Data declarations,
initializations, and constants, along with conditional compilation, produce an output-object
version of the operating system that is tailored to the system described.

SYSTEM BOOT:
The procedure of starting a computer by loading the kernel is known as booting the system.
On most computer systems, a small piece of code known as the bootstrap program or
bootstrap loader locates the kernel, loads it into main memory, and starts its execution.
Some computer systems, such as PCs, use a two-step process in which a simple bootstrap
loader fetches a more complex boot program from disk, which in turn loads the kernel.

When a CPU receives a reset event—for instance, when it is powered up or rebooted—the


instruction register is loaded with a predefined memory location, and execution starts
there. At that location is the initial bootstrap program. This program is in the form of read-
only memory (ROM), because the RAM is in an unknown state at system startup. ROM is
convenient because it needs no initialization and cannot easily be infected by a computer
virus. The bootstrap program can perform a variety of tasks. Usually, one task is to run
diagnostics to determine the state of the machine. If the diagnostics pass, the program can
continue with the booting steps. It can also initialize all aspects of the system, from CPU
registers to device controllers and the contents of main memory. Sooner or later, it starts
the operating system.

Some systems—such as cellular phones, tablets, and game consoles—store the entire
operating system in ROM. Storing the operating system in ROM is suitable for small
operating systems, simple supporting hardware, and rugged operation. A problem with this
approach is that changing the bootstrap code requires changing the ROM hardware chips.
Some systems resolve this problem by using erasable programmable read-only memory
(EPROM), which is read-only except when explicitly given a command to become writable.
All forms of ROM are also known as firmware, since their characteristics fall somewhere
between those of hardware and those of software. A problem with firmware in general is
that executing code there is slower than executing code in RAM.

For large operating systems, or for systems that change frequently, the bootstrap loader is
stored in firmware, and the operating system is on disk. In this case, the bootstrap runs
diagnostics and has a bit of code that can read a single block at a fixed location (say block
zero) from disk into memory and execute the code from that boot block. The program
stored in the boot block may be sophisticated enough to load the entire operating system
into memory and begin its execution. More typically, it is simple code (as it fits in a single
disk block) and knows only the address on disk and length of the remainder of the
bootstrap program. GRUB is an example of an open-source bootstrap program for Linux
systems. All of the disk-bound bootstrap, and the operating system itself, can be easily
changed by writing new versions to disk. A disk that has a boot partition is called a boot
disk or system disk.
Process Concept
A question that arises in discussing operating systems involves what to call all the
CPU activities. A batch system executes jobs, whereas a timeshared system has user
programs or tasks. Even on a single user system such as Microsoft Windows, a user may be
able to run several programs at one time: a word processor, a web browser and an e-mail
package. And even if the user can execute only one program at a time, the operating system
may need to support its own internal programmed activities, such as memory
management. In many respects, all these activities are similar, so we call all of
them processes.

The terms job and process are used almost interchangeably in this text. Although we
personally prefer the term process, much of operating-system theory and terminology was
developed during a time when the major activity of operating systems was job processing.
It would be misleading to avoid the use of commonly accepted terms that include the
word job (such as job scheduling) simply because process has superseded job.

The Process
Informally, as mentioned earlier, a process is a program in execution. A process is
more than the program code, which is sometimes known as the text section. It also includes
the current activity, as represented by the value of the program counter and the contents of
the processor's registers. A process generally also includes the process stack, which
contains temporary data (such as function parameters, return addresses, and local
variables), and a data section, which contains global variables. A process may also include a
heap, which is memory that is dynamically allocated during process run time.

We emphasize that a program by itself is not a process; a program is


a passive entity, such as a file containing a list of instructions stored on disk (often called an
executable file), whereas a process is an active entity, with a program counter specifying
the next instruction to execute and a set of associated resources. A program becomes a
process when an executable file is loaded into memory. Two common techniques for
loading executable files are double-clicking an icon representing the executable file and
entering the name of the executable file on the command line.



Process State

As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. Each process may be in one of the following states:

New: The process is being created.

Running: Instructions are being executed.

Waiting: The process is waiting for some event to occur (such as an I/O Completion or
reception of a signal).

Ready: The process is waiting to be assigned to a processor.

Terminated: The process has finished execution.

These names are arbitrary, and they vary across operating systems. The states that
they represent are found on all systems, however. Certain operating systems also more
finely delineate process states. It is important to realize that only one process can
be running on any processor at any instant. Many processes may
be ready and waiting, however.



Process Control Block

Each process is represented in the operating system by a process control block


(PCB) - also called a task control block. It contains many pieces of information associated
with a specific process, including these:

 Process state: The state may be new, ready, running, waiting, halted, and so on.
 Program counter: The counter indicates the address of the next instruction to be
executed for this process.
 CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code information. Along with the
program counter, this state information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterward.
 CPU-scheduling information: This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
 Memory-management information: This information may include such
information as the value of the base and limit registers, the page tables, or the
segment tables, depending on the memory system used by the operating system.
 Accounting information: This information includes the amount of CPU and real
time used, time limits, account numbers, job or process numbers, and so on.
 I/O status information: This information includes the list of I/O devices allocated
to the process, a list of open files, and so on.

In brief, the PCB simply serves as the repository for any information that may vary from
process to process.
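A toy sketch of the fields listed above (the field names are illustrative only; a real kernel uses a C structure, e.g. Linux's task_struct, with many more fields):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                        # process identifier
    state: str = "new"                              # new/ready/running/waiting/terminated
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
print(pcb.state)    # "new"
```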

Process Scheduling:

Process Scheduling is an OS task that schedules processes of different states like ready,
waiting, and running.

Process scheduling allows the OS to allocate a time interval of CPU execution to each process.
Another important reason for using a process scheduling system is that it keeps the CPU
busy all the time, which helps achieve the minimum response time for programs.

Process Scheduling Queues

The OS maintains a separate queue for each process state, holding the PCBs of all processes
in that state. Whenever the state of a process changes, its PCB is unlinked from its current
queue and moved to the queue for the new state.

Three types of operating system queues are:

Job queue – holds all the processes in the system.

Ready queue – holds the processes that reside in main memory and are ready and waiting
to execute.

Device queues – hold the processes that are blocked waiting for a particular I/O device;
each device has its own queue.



Queuing Diagram of Process Scheduling

In the above-given Diagram,

 A rectangle represents a queue.
 A circle denotes a resource.
 An arrow indicates the flow of a process.

1. Every new process is first put in the ready queue, where it waits until it is selected
for execution (dispatched).
2. One of the processes is allocated the CPU and executes.
3. The executing process may issue an I/O request
4. and is then placed in an I/O queue.
5. The process may create a new subprocess
6. and wait for the subprocess's termination.
7. The process may be removed forcibly from the CPU as a result of an interrupt; once
the interrupt is handled, it is put back in the ready queue.

Two State Process Model

Two-state process models are:

1. Running State
2. Not Running State



Running

Whenever a new process is created, it enters the system; the process currently being
executed by the CPU is in the running state.

Not Running

Processes that are not running are kept in a queue, waiting for their turn to execute. Each
entry in the queue is a pointer to a specific process.

State Transition diagram

Queuing Diagram for the two-state process model

Scheduling Objectives

Here, are important objectives of Process scheduling

 Maximize the number of interactive users within acceptable response times.


 Achieve a balance between response and utilization.
 Avoid indefinite postponement and enforce priorities.
 Give preference to processes holding key resources.

Type of Process Schedulers

A scheduler is a type of system software that allows you to handle process scheduling.

There are mainly three types of Process Schedulers:

1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler

Long Term Scheduler

The long-term scheduler is also known as the job scheduler. It selects processes from the
job queue and loads them into memory for execution. It also controls the degree of
multiprogramming.

The main goal of this type of scheduler is to offer a balanced mix of jobs, such as
processor-bound and I/O-bound jobs, which keeps multiprogramming manageable.

Medium Term Scheduler

Medium-term scheduling is an important part of swapping. It handles the swapped-out
processes.

A running process can become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In order to remove the process
from memory and make space for other processes, the suspended process is moved to
secondary storage.

Short Term Scheduler

The short-term scheduler is also known as the CPU scheduler. The main goal of this scheduler
is to boost system performance according to set criteria. It selects one process from the
group of processes that are ready to execute and allocates the CPU to it. The
dispatcher gives control of the CPU to the process selected by the short term scheduler.

Difference Between Long-term vs Short-term vs Medium-term Schedulers



Context Switch:
Context Switching involves storing the context or state of a process so that it can be
reloaded when required and execution can be resumed from the same point as earlier. This
is a feature of a multitasking operating system and allows a single CPU to be shared by
multiple processes.
A diagram that demonstrates context switching is as follows:
In the above diagram, initially Process 1 is running. Process 1 is switched out and Process 2
is switched in because of an interrupt or a system call. Context switching involves saving
the state of Process 1 into PCB1 and loading the state of process 2 from PCB2. After some
time again a context switch occurs and Process 2 is switched out and Process 1 is switched
in again. This involves saving the state of Process 2 into PCB2 and loading the state of
process 1 from PCB1.
Context Switching Triggers
There are three major triggers for context switching. These are given as follows:
 Multitasking: In a multitasking environment, a process is switched out of the CPU
so another process can be run. The state of the old process is saved and the state of
the new process is loaded. On a pre-emptive system, processes may be switched out
by the scheduler.
 Interrupt Handling: The hardware switches a part of the context when an
interrupt occurs. This happens automatically. Only some of the context is changed to
minimize the time required to handle the interrupt.
 User and Kernel Mode Switching: A context switch may take place when a
transition between the user mode and kernel mode is required in the operating
system.
Context Switching Steps
The steps involved in context switching are as follows:

 Save the context of the process that is currently running on the CPU. Update the
process control block and other important fields.
 Move the process control block of the above process into the relevant queue such as
the ready queue, I/O queue etc.
 Select a new process for execution.
 Update the process control block of the selected process. This includes updating the
process state to running.
 Update the memory management data structures as required.
 Restore the context of the process that was previously running when it is loaded
again on the processor. This is done by loading the previous values of the process
control block and registers.
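The steps above can be simulated in a few lines (a toy model with dictionaries standing in for the CPU and PCBs; no real kernel state is involved):

```python
def context_switch(cpu, old_pcb, new_pcb, ready_queue):
    old_pcb["registers"] = dict(cpu)   # save the context of the running process
    old_pcb["state"] = "ready"
    ready_queue.append(old_pcb)        # move its PCB to the relevant queue
    new_pcb["state"] = "running"       # the selected process becomes running
    cpu.clear()
    cpu.update(new_pcb["registers"])   # restore the new process's context

cpu = {"pc": 100}
p1 = {"pid": 1, "state": "running", "registers": {}}
p2 = {"pid": 2, "state": "ready", "registers": {"pc": 200}}
ready = []
context_switch(cpu, p1, p2, ready)
print(cpu["pc"])    # 200: execution resumes where process 2 left off
```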



OPERATIONS ON PROCESS

Three operations are performed on processes:

1. Modes of execution
2. Process creation
3. Terminate the process

Modes of execution

Most processors support at least two modes of execution.

1. User mode: less privileged mode/safe mode


2. Kernel mode: privileged mode.
 Most of the user programs execute in user mode.
 Kernel mode is also called as system mode/ control mode.



 There is a bit in the program status word (PSW) that indicates the mode
of execution.

Creation of Process:
The operating system creates a process with specified or default attributes and identifiers. A
process may create several subprocesses.

Syntax: Create(ProcessID, attributes)

Two names are used in the process. They are parent process and child process.

 Processes creating other processes form a tree-like structure. Each process can be
identified by a unique process identifier, usually written pid, which is typically an
integer.
 Every process needs resources such as CPU time, memory, files, and I/O devices to
accomplish its task.
 Whenever a process creates a subprocess, the subprocess may obtain its resources
directly from the operating system or from the resources of the parent process.
 The parent process may need to partition its resources among its children, or it may
be able to share some resources among several children.



Whenever a process creates a new process, there are two possibilities in terms of
execution, which are as follows −

 The parent continues to execute concurrently with its children.

 The parent waits till some or all its children have terminated.

There are two more possibilities in terms of address space of the new process, which are as
follows −

 The child process is a duplicate of the parent process.

 The child process has a new program loaded into it.
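Both address-space possibilities appear in the classic POSIX pattern: fork() produces a duplicate of the parent, and exec() then loads a new program into the child. A sketch via Python's os wrappers ("echo" is just an example program):

```python
import os

pid = os.fork()                    # the child starts as a duplicate of the parent
if pid == 0:
    # child: replace its memory image with a new program
    os.execvp("echo", ["echo", "hello from the child"])
else:
    os.waitpid(pid, 0)             # parent waits until its child has terminated
```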

Termination of a process

Process termination occurs when a process finishes execution. The exit() system call is used
by most operating systems for process termination.

Some of the causes of process termination are as follows −

 A process may be terminated after its execution is naturally completed. This process
leaves the processor and releases all its resources.

 A child process may be terminated if its parent process requests for its termination.

 A process can be terminated if it tries to use a resource that it is not allowed to. For
example - A process can be terminated for trying to write into a read only file.

 If an I/O failure occurs for a process, it can be terminated. For example - If a process
requires the printer and it is not working, then the process will be terminated.

 In most cases, if a parent process is terminated then its child processes are also
terminated. This is done because the child process cannot exist without the parent
process.

 If a process requires more memory than is currently available in the system, then it
is terminated because of memory scarcity.
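The "parent requests termination" case above can be sketched with signals (POSIX assumed; SIGTERM is the conventional termination request):

```python
import os, signal, time

pid = os.fork()
if pid == 0:
    while True:
        time.sleep(1)                  # child would run forever on its own
else:
    os.kill(pid, signal.SIGTERM)       # parent requests the child's termination
    _, status = os.waitpid(pid, 0)
    print(os.WIFSIGNALED(status))      # True: the child was ended by a signal
```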

Inter Process Communication:

Inter-process communication (IPC) is a mechanism that allows processes to communicate
with each other and synchronize their actions.

A process can be of two types:

1. Independent process.
2. Co-operating process.



An independent process is not affected by the execution of other processes, while a co-
operating process can be affected by other executing processes. Although one might think
that processes running independently execute very efficiently, in reality there are many
situations in which the co-operative approach can be utilized to increase computational
speed, convenience, and modularity. The communication between co-operating processes
can be seen as a method of co-operation between them. Processes can communicate with
each other through both:

1. Shared Memory
2. Message passing

Shared Memory:

Communication between processes using shared memory requires the processes to share
some variable, and it depends entirely on how the programmer implements it. One way of
communication using shared memory can be imagined like this: suppose Process 1 and
Process 2 are executing simultaneously and share some resources or use some information
from another process.

Process 1 generates information about certain computations or resources being used and
keeps it as a record in shared memory. When Process 2 needs to use the shared
information, it checks the record stored in shared memory, takes note of the information
generated by Process 1, and acts accordingly. Processes can use shared memory both for
extracting information recorded by another process and for delivering specific information
to other processes.
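A minimal sketch of this record-in-shared-memory pattern, using Python's multiprocessing module (the value 42 is arbitrary):

```python
from multiprocessing import Process, Value

def producer(shared):
    shared.value = 42          # Process 1 records information in shared memory

if __name__ == "__main__":
    shared = Value("i", 0)     # an integer living in memory shared by both processes
    p = Process(target=producer, args=(shared,))
    p.start()
    p.join()
    print(shared.value)        # 42: Process 2 reads the record Process 1 left
```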



Message passing system:

In this method, processes communicate with each other without using any kind of shared
memory. If two processes p1 and p2 want to communicate with each other, they proceed as
follows:

Establish a communication link (if a link already exists, no need to establish it again.)

Start exchanging messages using basic primitives.

We need at least two primitives:


– send(message, destination) or send(message)
– receive(message, host) or receive(message)
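With a POSIX pipe, send() amounts to a write on one end of the link and receive() to a blocking read on the other. A sketch via Python's os wrappers:

```python
import os

r, w = os.pipe()              # establish the communication link
pid = os.fork()
if pid == 0:                  # child: the sender
    os.close(r)
    os.write(w, b"ping")      # send(message)
    os._exit(0)
else:                         # parent: the receiver
    os.close(w)
    msg = os.read(r, 4)       # receive(message): blocks until data arrives
    os.waitpid(pid, 0)
    print(msg)                # b'ping'
```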

Design characteristics of the Message passing system in IPC

1. Synchronization between the process.

2. Addressing.

3. Buffering.

4. Format of the Message.

Synchronization:

The communication of a message between two processes implies synchronization
between the processes. The sender and receiver can be blocking or non-blocking.

 Blocking send and blocking receive: Both the sender and the receiver are blocked
until the message is delivered. This is called a rendezvous. This combination allows
tight synchronization.

 Non-blocking send and blocking receive: The sender may continue on; the receiver
is blocked until the requested message arrives. A process that must receive a
message before it can do useful work needs to be blocked until the message arrives.

 Non-blocking send and non-blocking receive: The sending process sends the
message and resumes operation. The receiver retrieves either a valid message or a
null; neither party is required to wait.

Addressing (naming):

Processes that want to communicate must have a way to refer to each other. The
various schemes for specifying processes in send and receive primitives are of two types:



1. Direct Communication

2. Indirect Communication

Direct Communication:

In this scheme, the send and receive primitives are defined as:

 send(P, message): Send a message to process P.

 receive(Q, message): Receive a message from process Q.

A communication link in this scheme has the following properties:

1. A link is established automatically between every pair of processes that want to
communicate.

2. The processes need to know only each other's identity to communicate.

3. A link is associated with exactly two processes; exactly one link exists between each
pair of processes.

Indirect Communication:

 With indirect communication, the messages are sent to and received from
mailboxes, or ports.

 A mailbox can be viewed abstractly as an object into which messages can be placed
by processes and from which messages can be removed. Each mailbox has a unique
identification. In this scheme, a process can communicate with some other process
via a number of different mailboxes.

• Two processes can communicate only if they share a mailbox. The send and receive
primitives are defined as follows:

1. send(A, message): Send a message to mailbox A.

2. receive(A, message): Receive a message from mailbox A.



In this scheme, a communication link has the following properties:

1. A link is established between a pair of processes only if both members of the pair have a
shared mailbox.

2. A link may be associated with more than two processes.

3. A number of different links may exist between each pair of communicating processes,
with each link corresponding to one mailbox.
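A multiprocessing.Queue can play the role of mailbox A shared by the two processes (a sketch; the message text is arbitrary):

```python
from multiprocessing import Process, Queue

def sender(mailbox):
    mailbox.put("hello via mailbox A")   # send(A, message)

if __name__ == "__main__":
    mailbox = Queue()                    # mailbox A, known to both processes
    p = Process(target=sender, args=(mailbox,))
    p.start()
    msg = mailbox.get()                  # receive(A, message)
    p.join()
    print(msg)
```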

Buffering :

Whether the communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue. Such a queue can be implemented
in three ways:

 Zero capacity: The queue has maximum length 0; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient
receives the message. The zero-capacity case is sometimes referred to as a message
system with no buffering.

 Bounded capacity: The queue has finite length n; thus, at most n messages can
reside in it. If the queue is not full when a new message is sent, the latter is placed in
the queue (either the message is copied or a pointer to the message is kept), and the
sender can continue execution without waiting. The link has a finite capacity,
however. If the link is full, the sender must block until space is available in the
queue. It is also known as automatic buffering.

 Unbounded capacity: The queue has potentially infinite length; thus, any number
of messages can wait in it. The sender never blocks. It is also known as automatic
buffering.
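The bounded and unbounded cases can be sketched with Python's thread-safe `queue.Queue` (a thread-level illustration, not real OS message buffering; the zero-capacity rendezvous has no direct equivalent here):

```python
import queue

# Bounded capacity: at most 2 messages may wait in the link.
bounded = queue.Queue(maxsize=2)
bounded.put("m1")
bounded.put("m2")
try:
    bounded.put_nowait("m3")             # link is full; a real sender would block here
except queue.Full:
    print("link full, sender must wait")

# Unbounded capacity: put() never blocks, so the sender never waits.
unbounded = queue.Queue()
for i in range(1000):
    unbounded.put(i)
print("messages waiting:", unbounded.qsize())
```

A blocking `put()` on the bounded queue would model the sender blocking until space is available.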

Message Format:

A typical message format must serve operating systems that support fixed-length
messages as well as those that support variable-length messages. Fixed-length messages
minimize processing and storage overhead.

The Message format is divided into two parts.

1. Header

2. Body

[Figure: Message format — a fixed-length header followed by the message body]
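A header-plus-body message can be sketched with Python's `struct` module (a hypothetical two-field header — message type and body length — chosen for illustration):

```python
import struct

def pack_message(msg_type: int, body: bytes) -> bytes:
    # Fixed-length header: 4-byte message type + 4-byte body length (network byte order).
    header = struct.pack("!II", msg_type, len(body))
    return header + body                 # variable-length body follows the header

def unpack_message(data: bytes):
    # Read the fixed header first, then slice out exactly `length` body bytes.
    msg_type, length = struct.unpack("!II", data[:8])
    return msg_type, data[8:8 + length]

raw = pack_message(1, b"hello")
print(unpack_message(raw))               # (1, b'hello')
```

The fixed-length header is what lets the receiver parse a variable-length body: it always knows where the body starts and how long it is.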

THREADS
A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.
A thread shares with its peer threads information such as the code segment, data segment,
and open files. When one thread alters a memory item in a shared segment, all other
threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. Threads represent a software approach to
improving operating system performance by reducing the overhead of process creation
and switching; in other respects, a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each
thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web server. They also provide a suitable foundation
for parallel execution of applications on shared memory multiprocessors. The following
figure shows the working of a single-threaded and a multithreaded process.
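To make the shared-memory point concrete, here is a minimal Python sketch (an illustration, not part of the original notes) in which several threads of one process update a variable in their shared data segment:

```python
import threading

counter = {"value": 0}                   # shared data: visible to every thread
lock = threading.Lock()

def work():
    for _ in range(1000):
        with lock:                       # threads share memory, so updates need a lock
            counter["value"] += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["value"])                  # 4000
```

Separate processes would each get a private copy of `counter`; threads see one copy, which is both the benefit and the hazard of multithreading.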

Difference between Process and Thread

1. Process is heavy weight or resource intensive.
   Thread is light weight, taking fewer resources than a process.

2. Process switching needs interaction with the operating system.
   Thread switching does not need to interact with the operating system.

3. In multiple processing environments, each process executes the same code but has its
   own memory and file resources.
   All threads can share the same set of open files and child processes.

4. If one process is blocked, then no other process can execute until the first process is
   unblocked.
   While one thread is blocked and waiting, a second thread in the same task can run.

5. Multiple processes without using threads use more resources.
   Multiple-threaded processes use fewer resources.

6. In multiple processes, each process operates independently of the others.
   One thread can read, write, or change another thread's data.

Advantages of Thread

 Threads minimize the context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.

Types of Thread

Threads are implemented in the following two ways −

 User Level Threads − User managed threads.
 Kernel Level Threads − Operating System managed threads acting on the kernel, an
operating system core.

User Level Threads

In this case, the thread management kernel is not aware of the existence of threads. The
thread library contains code for creating and destroying threads, for passing messages and
data between threads, for scheduling thread execution, and for saving and restoring thread
contexts. The application starts with a single thread.

Advantages

 Thread switching does not require Kernel mode privileges.
 User level threads can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.

Disadvantages

 In a typical operating system, most system calls are blocking.
 Multithreaded applications cannot take advantage of multiprocessing.

Kernel Level Threads

In this case, thread management is done by the Kernel. There is no thread management
code in the application area. Kernel threads are supported directly by the operating
system. Any application can be programmed to be multithreaded. All of the threads within
an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel
performs thread creation, scheduling and management in Kernel space. Kernel threads are
generally slower to create and manage than the user threads.

Advantages

 Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
 Kernel routines themselves can be multithreaded.

Disadvantages

 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.

Multithreading Models

Some operating systems provide a combined user level thread and Kernel level thread
facility. Solaris is a good example of this combined approach. In a combined system,
multiple threads within the same application can run in parallel on multiple processors,
and a blocking system call need not block the entire process. There are three
multithreading models:

 Many to many relationship.
 Many to one relationship.
 One to one relationship.

Many to Many Model

The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads.
The following diagram shows the many-to-many threading model, where 6 user level
threads are multiplexed onto 6 kernel level threads. In this model, developers can create
as many user threads as necessary, and the corresponding kernel threads can run in
parallel on a multiprocessor machine. This model provides the best level of concurrency:
when a thread performs a blocking system call, the kernel can schedule another thread
for execution.

Many to One Model

The many-to-one model maps many user level threads to one Kernel level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking
system call, the entire process will be blocked. Only one thread can access the Kernel at a
time, so multiple threads are unable to run in parallel on multiprocessors.

If user-level thread libraries are implemented on an operating system whose kernel does
not support kernel threads, the many-to-one model is used.

One to One Model

The one-to-one model maps each user-level thread to a kernel-level thread. This model
provides more concurrency than the many-to-one model. It also allows another thread to
run when a thread makes a blocking system call. It supports multiple threads executing in
parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one
model.

Difference between User-Level & Kernel-Level Thread

1. User-level threads are faster to create and manage.
   Kernel-level threads are slower to create and manage.

2. Implementation is by a thread library at the user level.
   The operating system supports creation of kernel threads.

3. User-level thread is generic and can run on any operating system.
   Kernel-level thread is specific to the operating system.

4. Multi-threaded applications cannot take advantage of multiprocessing.
   Kernel routines themselves can be multithreaded.

Thread Library

A thread library provides the programmer with an Application program interface for
creating and managing thread.

Ways of implementing thread library

There are two primary ways of implementing a thread library, which are as follows −

The first approach is to provide a library entirely in user space with no kernel support. All
code and data structures for the library exist in user space; invoking a function in the
library results in a local function call in user space, not a system call.

The second approach is to implement a kernel-level library supported directly by the
operating system. In this case the code and data structures for the library exist in kernel
space. Invoking a function in the application program interface for the library typically
results in a system call to the kernel.

The main thread libraries which are used are given below −

POSIX threads − Pthreads, the threads extension of the POSIX standard, may be provided
as either a user level or a kernel level library.

Win32 threads − The Windows thread library is a kernel level library available on
Windows systems.

Java threads − The Java thread API allows threads to be created and managed directly in
Java programs.

Thread Issues:

Some of the issues to consider in designing multithreaded programs are as follows −

The fork() and exec() system calls

The fork() system call is used to create a duplicate process. The meaning of the fork() and
exec() system calls changes in a multithreaded program.

If one thread in a program calls fork(), does the new process duplicate all threads, or is
the new process single-threaded? Some UNIX systems have chosen to have two versions of
fork(): one that duplicates all threads and another that duplicates only the thread that
invoked the fork() system call.

If a thread calls the exec() system call, the program specified in the parameter to exec() will
replace the entire process which includes all threads.
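The fork-then-exec pattern can be sketched in Python on a Unix-like system (an illustration; `os.fork()` and `os.execvp()` are thin wrappers over the corresponding system calls and are not available on Windows):

```python
import os
import sys

pid = os.fork()                          # duplicate the calling process
if pid == 0:
    # In the child: exec() replaces the entire process image (all of its
    # threads, if it had any) with a new program.
    os.execvp(sys.executable,
              [sys.executable, "-c", "print('child: new program running')"])
else:
    os.waitpid(pid, 0)                   # parent waits for the child to finish
    print("parent: continues unaffected")
```

If exec() is called immediately after fork(), duplicating only the calling thread is sufficient, since the new program replaces everything anyway.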

Signal Handling

Generally, a signal is used in UNIX systems to notify a process that a particular event has
occurred. A signal may be received either synchronously or asynchronously, depending on
the source of and the reason for the event being signalled.

All signals, whether synchronous or asynchronous, follow the same pattern:

1. A signal is generated by the occurrence of a particular event.

2. The signal is delivered to a process.

3. Once delivered, the signal must be handled.
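The generate/deliver/handle pattern can be sketched in Python on a Unix-like system (an illustration; `SIGUSR1` is a Unix-only signal):

```python
import os
import signal

handled = []

def handler(signum, frame):
    # Step 3: the delivered signal is handled by this signal handler.
    handled.append(signal.Signals(signum).name)

signal.signal(signal.SIGUSR1, handler)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)     # step 1: generate the signal; step 2: deliver it
print(handled)
```

In a multithreaded process, a further design question (discussed in the textbooks) is *which* thread the signal should be delivered to; CPython always delivers signals to the main thread.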

Cancellation

Thread cancellation is the task of terminating a thread before it has completed.

For example − If multiple database threads are concurrently searching through a database
and one thread returns the result the remaining threads might be cancelled.

A target thread is a thread that is to be cancelled. Cancellation of a target thread may
occur in two different scenarios −

 Asynchronous cancellation − One thread immediately terminates the target
thread.
 Deferred cancellation − The target thread periodically checks whether it should
terminate, allowing it an opportunity to terminate itself in an orderly fashion.
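Deferred cancellation can be sketched with a Python `threading.Event` used as a cancellation flag (an illustration; Python's threading API has no asynchronous cancellation at all, which is itself an argument for the deferred style):

```python
import threading
import time

cancel_requested = threading.Event()     # the cancellation flag

def target():
    while not cancel_requested.is_set(): # deferred cancellation: check at safe points
        time.sleep(0.01)                 # stand-in for one unit of real work

t = threading.Thread(target=target)
t.start()
cancel_requested.set()                   # another thread requests cancellation
t.join()                                 # target notices the flag and exits cleanly
print("target cancelled:", not t.is_alive())
```

Because the target checks the flag only between units of work, it can release locks and free resources before terminating, which asynchronous cancellation cannot guarantee.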

Thread Pools

In a multithreaded web server, whenever the server receives a request it creates a
separate thread to service the request. Some of the problems that arise in creating a
thread per request are as follows −

The amount of time required to create the thread prior to serving the request, together
with the fact that this thread will be discarded once it has completed its work.

If all concurrent requests are allowed to be serviced in a new thread, there is no bound on
the number of threads concurrently active in the system. Unlimited threads could exhaust
system resources like CPU time or memory.

The idea of a thread pool is to create a number of threads at process start-up and place
them into a pool, where they sit and wait for work.
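A thread pool can be sketched with Python's `concurrent.futures.ThreadPoolExecutor` (an illustration; `handle_request` is a hypothetical stand-in for servicing one client request):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    # Hypothetical stand-in for servicing one client request.
    return n * n

# Create a fixed pool of 4 worker threads once, at start-up; incoming
# requests are handed to waiting workers instead of spawning a new
# thread per request, bounding the number of active threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(5)))

print(results)                           # [0, 1, 4, 9, 16]
```

The `max_workers` bound is exactly what prevents the unbounded-thread problem described above.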

