OS Unit 2
A system call offers the services of the operating system to user programs
via an API (Application Programming Interface). System calls are the only entry points into
the kernel.
In an operating system, a program can execute in two modes:
1. User Mode
2. Kernel Mode.
File Management
Functions:
Create a file
Delete file
Open and close file
Read, write, and reposition
Get and set file attributes
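These file-management operations map directly onto the POSIX system-call interface. Below is a minimal sketch; the function name and file path are illustrative, not part of any standard:

```c
#include <fcntl.h>
#include <unistd.h>

/* Create a file, write to it, reposition, read the data back, then close
   and delete the file. Returns the number of bytes read, or -1 on error. */
ssize_t file_demo(const char *path, char *buf, size_t len) {
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644); /* create + open */
    if (fd < 0)
        return -1;
    write(fd, "hello", 5);           /* write */
    lseek(fd, 0, SEEK_SET);          /* reposition to the start */
    ssize_t n = read(fd, buf, len);  /* read */
    close(fd);                       /* close */
    unlink(path);                    /* delete */
    return n;
}
```

Each library call here (open, write, lseek, read, close, unlink) is a thin wrapper over the corresponding kernel system call.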
Device Management
Device management does the job of device manipulation like reading from device buffers,
writing into device buffers, etc.
Functions:
Communication:
These system calls are used specifically for interprocess communication.
Functions:
SYSTEM PROGRAMS:
• File management. These programs create, delete, copy, rename, print, dump, list, and
generally manipulate files and directories.
• Status information. Some programs simply ask the system for the date, time, amount of
available memory or disk space, number of users, or similar status information. Others are
more complex, providing detailed performance, logging, and debugging information.
Typically, these programs format and print the output to the terminal or other output
devices or files or display it in a window of the GUI. Some systems also support a registry,
which is used to store and retrieve configuration information.
• File modification. Several text editors may be available to create and modify the content
of files stored on disk or other storage devices. There may also be special commands to
search contents of files or perform transformations of the text.
• Programming-language support. Compilers, assemblers, debuggers, and interpreters
for common programming languages (such as C, C++, Java, and PERL) are often provided
with the operating system or available as a separate download.
• Program loading and execution. Once a program is assembled or compiled, it must be
loaded into memory to be executed. The system may provide absolute loaders, relocatable
loaders, linkage editors, and overlay loaders. Debugging systems for either higher-level
languages or machine language are needed as well.
• Communications. These programs provide the mechanism for creating virtual
connections among processes, users, and computer systems. They allow users to send
messages to one another’s screens, to browse Web pages, to send e-mail messages, to log in
remotely, or to transfer files from one machine to another.
• Background services. All general-purpose systems have methods for launching certain
system-program processes at boot time. Some of these processes terminate after
completing their tasks, while others continue to run until the system is halted. Constantly
running system-program processes are known as services, subsystems, or daemons.
OS GENERATION:
It is possible to design, code, and implement an operating system specifically for one
machine at one site. More commonly, however, operating systems are designed to run on
any of a class of machines at a variety of sites with a variety of peripheral configurations.
The system must then be configured or generated for each specific computer site, a process
sometimes known as system generation, or SYSGEN.
The operating system is normally distributed on disk, on CD-ROM or DVD-ROM, or as an
“ISO” image, which is a file in the format of a CD-ROM or DVD-ROM. To generate a system,
we use a special program. This SYSGEN program reads from a given file, or asks the
operator of the system for information concerning the specific configuration of the
hardware system, or probes the hardware directly to determine what components are
there. Among the kinds of information that must be determined are the CPU type, the amount of available memory, the devices attached, and the operating-system options desired.
Once this information is determined, it can be used in several ways. At one extreme, a
system administrator can use it to modify a copy of the source code of the operating
system. The operating system then is completely compiled. Data declarations,
initializations, and constants, along with conditional compilation, produce an output-object
version of the operating system that is tailored to the system described.
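A toy illustration of how conditional compilation and build-time constants can tailor an output-object version to one configuration. The CONFIG_* macro names are invented for this sketch, not taken from any real kernel:

```c
/* Values like these would be supplied by a SYSGEN-style configuration tool. */
#define CONFIG_MAX_DEVICES 8
#define CONFIG_HAS_FPU 1

/* A data declaration whose size is fixed at build time by the configuration. */
int device_table[CONFIG_MAX_DEVICES];

int max_devices(void) { return CONFIG_MAX_DEVICES; }

const char *fpu_mode(void) {
#if CONFIG_HAS_FPU
    return "hardware floating point";   /* compiled in only for FPU machines */
#else
    return "software floating-point emulation";
#endif
}
```

Changing the macros and recompiling produces a different object version of the "system", which is the essence of source-level system generation.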
SYSTEM BOOT:
The procedure of starting a computer by loading the kernel is known as booting the system.
On most computer systems, a small piece of code known as the bootstrap program or
bootstrap loader locates the kernel, loads it into main memory, and starts its execution.
Some computer systems, such as PCs, use a two-step process in which a simple bootstrap
loader fetches a more complex boot program from disk, which in turn loads the kernel.
Some systems—such as cellular phones, tablets, and game consoles—store the entire
operating system in ROM. Storing the operating system in ROM is suitable for small
operating systems, simple supporting hardware, and rugged operation. A problem with this
approach is that changing the bootstrap code requires changing the ROM hardware chips.
Some systems resolve this problem by using erasable programmable read-only memory
(EPROM), which is read-only except when explicitly given a command to become writable.
For large operating systems, or for systems that change frequently, the bootstrap loader is
stored in firmware, and the operating system is on disk. In this case, the bootstrap runs
diagnostics and has a bit of code that can read a single block at a fixed location (say block
zero) from disk into memory and execute the code from that boot block. The program
stored in the boot block may be sophisticated enough to load the entire operating system
into memory and begin its execution. More typically, it is simple code (as it fits in a single
disk block) and knows only the address on disk and length of the remainder of the
bootstrap program. GRUB is an example of an open-source bootstrap program for Linux
systems. All of the disk-bound bootstrap, and the operating system itself, can be easily
changed by writing new versions to disk. A disk that has a boot partition is called a boot
disk or system disk.
Process Concept
A question that arises in discussing operating systems involves what to call all the
CPU activities. A batch system executes jobs, whereas a timeshared system has user
programs or tasks. Even on a single user system such as Microsoft Windows, a user may be
able to run several programs at one time: a word processor, a web browser and an e-mail
package. And even if the user can execute only one program at a time, the operating system
may need to support its own internal programmed activities, such as memory
management. In many respects, all these activities are similar, so we call all of
them processes.
The terms job and process are used almost interchangeably in this text. Although we
personally prefer the term process, much of operating-system theory and terminology was
developed during a time when the major activity of operating systems was job processing.
It would be misleading to avoid the use of commonly accepted terms that include the
word job (such as job scheduling) simply because process has superseded job.
The Process
Informally, as mentioned earlier, a process is a program in execution. A process is
more than the program code, which is sometimes known as the text section. It also includes
the current activity, as represented by the value of the program counter and the contents of
the processor's registers. A process generally also includes the process stack, which
contains temporary data (such as function parameters, return addresses, and local
variables), and a data section, which contains global variables. A process may also include
a heap, which is memory that is dynamically allocated during process run time.
As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. Each process may be in one of the following states:
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
These names are arbitrary, and they vary across operating systems. The states that
they represent are found on all systems, however. Certain operating systems also more
finely delineate process states. It is important to realize that only one process can
be running on any processor at any instant. Many processes may
be ready and waiting, however.
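The states above and their legal transitions can be sketched as a small C enum; the check function below is a teaching aid following the five-state model, not operating-system code:

```c
#include <stdbool.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Returns true if the transition is allowed by the five-state model. */
bool can_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;        /* admitted to the ready queue */
    case READY:   return to == RUNNING;      /* dispatched by the scheduler */
    case RUNNING: return to == READY         /* interrupted / time slice over */
                      || to == WAITING       /* waits for I/O or an event */
                      || to == TERMINATED;   /* exits */
    case WAITING: return to == READY;        /* the awaited event occurs */
    default:      return false;              /* terminated: no way out */
    }
}
```

Note that a waiting process never goes straight to running; it must pass through the ready queue and be dispatched again.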
Process Control Block (PCB):
Each process is represented in the operating system by a process control block (PCB),
which contains many pieces of information associated with the process, including:
Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to be
executed for this process.
CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code information. Along with the
program counter, this state information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterward.
In brief, the PCB simply serves as the repository for any information that may vary from
process to process.
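The PCB fields listed above can be pictured as a C struct. The field names and sizes here are illustrative; real kernels (for example, Linux's task_struct) carry far more information:

```c
typedef struct pcb {
    int           pid;              /* process identifier */
    int           state;            /* new, ready, running, waiting, ... */
    unsigned long program_counter;  /* address of the next instruction */
    unsigned long registers[16];    /* saved CPU registers */
    int           priority;         /* CPU-scheduling information */
    void         *page_table;       /* memory-management information */
    long          cpu_time_used;    /* accounting information */
    int           open_files[16];   /* I/O status information */
    struct pcb   *next;             /* link into a scheduling queue */
} pcb;

/* Initialise a PCB for a newly created process; state 0 here means "new". */
pcb pcb_init(int pid) {
    pcb p = {0};
    p.pid = pid;
    return p;
}
```

The `next` pointer is what lets a PCB be linked into the ready queue or a device queue, as described in the scheduling section below.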
Process Scheduling:
Process Scheduling is an OS task that schedules processes of different states like ready,
waiting, and running.
Process scheduling allows the OS to allocate an interval of CPU time to each process.
Another important reason for process scheduling is to keep the CPU busy at all times and
to minimize response time for programs.
Process scheduling queues maintain a distinct queue for each process state; the PCBs of all
processes in the same state are placed in the same queue. When the state of a process
changes, its PCB is unlinked from its current queue and linked into the queue of its new
state.
Job queue – Holds all the processes in the system.
Ready queue – Holds every process residing in main memory that is ready and waiting to
execute.
Device queues – Hold processes that are blocked waiting for an I/O device; each device has
its own queue.
1. Every new process is first put in the ready queue, where it waits until it is selected
for execution (dispatched).
2. One of the processes is allocated the CPU and begins executing.
3. The process may issue an I/O request and then be placed in an I/O (device) queue.
4. The process may create a new subprocess and wait for the subprocess's termination.
5. The process may be removed forcibly from the CPU as a result of an interrupt; once
the interrupt is handled, the process is put back in the ready queue.
1. Running State
2. Not Running State
Running
In the operating system, whenever a new process is created, it is admitted into the system
and eventually dispatched into the running state.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each
entry in the queue is a pointer to a specific process.
Scheduling Objectives
A scheduler is a type of system software that handles process scheduling.
The long term scheduler is also known as the job scheduler. It selects processes from the
job queue and loads them into memory for execution, and it regulates the degree of
multiprogramming.
The main goal of this scheduler is to offer a balanced mix of jobs, such as processor-bound
and I/O-bound jobs, which keeps multiprogramming manageable.
Short term scheduling is also known as CPU scheduling. The main goal of this scheduler is
to boost system performance according to a chosen set of criteria. It selects one process
from the group of processes that are ready to execute and allocates the CPU to it. The
dispatcher then gives control of the CPU to the process selected by the short term scheduler.
Context Switch
A context switch involves the following steps:
1. Save the context of the process that is currently running on the CPU, updating its
process control block and other important fields.
2. Move the process control block of that process into the relevant queue, such as the
ready queue or an I/O queue.
3. Select a new process for execution.
4. Update the process control block of the selected process, including setting its state
to running.
5. Update the memory-management data structures as required.
6. Restore the context of the newly selected process by loading the saved values from
its process control block and registers.
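The save-and-restore steps can be simulated over a simplified PCB. A real context switch saves and restores hardware registers; this toy version only models them with plain variables:

```c
#include <string.h>

typedef enum { ST_READY, ST_RUNNING, ST_WAITING } cs_state;

typedef struct {
    int pid;
    cs_state state;
    long pc;             /* saved program counter */
    long regs[4];        /* saved general-purpose registers */
} cs_pcb;

/* The "CPU": the live context of whichever process is running. */
long cpu_pc;
long cpu_regs[4];

/* Switch the CPU from process `from` to process `to`. */
void context_switch(cs_pcb *from, cs_pcb *to) {
    from->pc = cpu_pc;                          /* save the running context */
    memcpy(from->regs, cpu_regs, sizeof cpu_regs);
    from->state = ST_READY;                     /* back onto the ready queue */
    to->state = ST_RUNNING;                     /* mark selected process running */
    cpu_pc = to->pc;                            /* restore the new context */
    memcpy(cpu_regs, to->regs, sizeof cpu_regs);
}
```

This makes visible why context-switch time is pure overhead: the work is copying state, not executing either process.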
1. Modes of execution
2. Process creation
3. Terminate the process
Modes of execution
A process may execute either in user mode or in kernel mode, as described earlier.
Creation of Process:
The operating system creates a process with specified or default attributes and identifiers.
A process may in turn create several subprocesses.
Two names are used in the process. They are parent process and child process.
Every new process can create further processes, forming a tree-like structure. Each
process is identified by a unique process identifier, usually called the pid, which is
typically an integer.
Every process needs resources such as CPU time, memory, files, and I/O devices to
accomplish its task.
When a process creates a subprocess, the subprocess may obtain its resources directly
from the operating system, or it may be constrained to a subset of the resources of the
parent process.
The parent may have to partition its resources among its children, or it may be able to
share some resources among several children.
The parent may wait until some or all of its children have terminated.
There are two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process (it has the same program and
data as the parent).
2. The child process has a new program loaded into it.
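Process creation and the parent's wait can be shown with the POSIX fork() and waitpid() calls; run_child is a helper name invented for this sketch:

```c
#include <unistd.h>
#include <sys/wait.h>

/* Fork a child, let it terminate via exit, and return its exit status
   to show the parent observing the child's termination. */
int run_child(void) {
    pid_t pid = fork();              /* create a child process */
    if (pid < 0)
        return -1;                   /* creation failed */
    if (pid == 0) {
        /* Child: a duplicate of the parent's address space. It could
           instead load a new program here with one of the exec() calls. */
        _exit(42);                   /* child terminates */
    }
    int status = 0;
    waitpid(pid, &status, 0);        /* parent waits for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The two branches of the `if` correspond exactly to the two address-space possibilities listed above: duplicate the parent, or exec a new program.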
Termination of a process
Process termination occurs when a process finishes executing its final statement or is
killed. The exit() system call is used by most operating systems for process termination.
A process may be terminated after its execution is naturally completed. This process
leaves the processor and releases all its resources.
A child process may be terminated if its parent process requests for its termination.
A process can be terminated if it tries to use a resource that it is not allowed to. For
example - A process can be terminated for trying to write into a read only file.
If an I/O failure occurs for a process, it can be terminated. For example - If a process
requires the printer and it is not working, then the process will be terminated.
In most cases, if a parent process is terminated then its child processes are also
terminated. This is done because the child process cannot exist without the parent
process.
If a process requires more memory than is currently available in the system, then it
is terminated because of memory scarcity.
Processes executing concurrently in the operating system may be of two types:
1. Independent process
2. Co-operating process
Co-operating processes exchange data through two fundamental models of interprocess
communication (IPC):
1. Shared memory
2. Message passing
Shared Memory:
Process 1 generates information about certain computations or resources being used and
keeps it as a record in shared memory. When process 2 needs to use the shared
information, it checks the record stored in shared memory, takes note of the information
generated by process 1, and acts accordingly. Processes can use shared memory both for
extracting information recorded by another process and for delivering specific
information to other processes.
Message Passing:
In this method, processes communicate with each other without using any kind of shared
memory. If two processes P1 and P2 want to communicate with each other, they first
establish a communication link (if a link already exists, there is no need to establish it
again) and then exchange messages using send and receive primitives.
The design of a message-passing facility involves three issues:
1. Synchronization
2. Addressing
3. Buffering
Synchronization:
Blocking send and blocking receive: Both the sender and the receiver are blocked
until the message is delivered. This is called a rendezvous and allows tight
synchronization.
Non-blocking send and blocking receive: The sender may continue on, while the
receiver is blocked until the requested message arrives. A process that must receive a
message before it can do useful work needs to be blocked until the message arrives.
Non-blocking send and non-blocking receive: Neither party is required to wait.
Addressing (naming):
Processes that want to communicate must have a way to refer to each other. The
various schemes for specifying processes in send and receive primitives are of two types:
1. Direct Communication
2. Indirect Communication
Direct Communication:
In this scheme, the send and receive primitives are defined as:
send(P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
A communication link in this scheme has the following properties:
1. A link is established automatically between every pair of processes that want to
communicate.
2. A link is associated with exactly two processes.
3. Exactly one link exists between each pair of processes.
Indirect Communication:
With indirect communication, the messages are sent to and received from
mailboxes, or ports.
A mailbox can be viewed abstractly as an object into which messages can be placed
by processes and from which messages can be removed. Each mailbox has a unique
identification. In this scheme, a process can communicate with some other process
via a number of different mailboxes.
• Two processes can communicate only if they share a mailbox. The send and receive
primitives are defined as follows:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
In this scheme, a communication link has the following properties:
1. A link is established between a pair of processes only if both members of the pair have a
shared mailbox.
2. A link may be associated with more than two processes.
3. A number of different links may exist between each pair of communicating processes,
with each link corresponding to one mailbox.
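A mailbox can be modeled as a small bounded queue. The sketch below is an in-process simulation of the send(A, message) and receive(A, message) primitives, not a kernel mailbox; all names are invented for illustration:

```c
#include <stdbool.h>

#define MBOX_CAP 8    /* bounded capacity */

typedef struct {
    int msgs[MBOX_CAP];  /* circular buffer of pending messages */
    int head, count;
} mailbox;

/* send(A, message): place a message in mailbox A; fails if A is full. */
bool mbox_send(mailbox *a, int message) {
    if (a->count == MBOX_CAP)
        return false;
    a->msgs[(a->head + a->count) % MBOX_CAP] = message;
    a->count++;
    return true;
}

/* receive(A, message): remove the oldest message from A; fails if empty. */
bool mbox_receive(mailbox *a, int *message) {
    if (a->count == 0)
        return false;
    *message = a->msgs[a->head];
    a->head = (a->head + 1) % MBOX_CAP;
    a->count--;
    return true;
}
```

Because the mailbox, not a process, names the endpoint, any number of processes holding a pointer to the same mailbox could exchange messages through it.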
Buffering :
Zero capacity: The queue has maximum length 0; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient
receives the message. The zero-capacity case is sometimes referred to as a message
system with no buffering.
Bounded capacity: The queue has finite length n; thus, at most n messages can
reside in it. If the queue is not full when a new message is sent, the message is placed in
the queue (either the message is copied or a pointer to the message is kept), and the
sender can continue execution without waiting. The link has a finite capacity,
however. If the link is full, the sender must block until space is available in the
queue. It is also known as automatic buffering.
Unbounded capacity: The queue has potentially infinite length; thus, any number
of messages can wait in it. The sender never blocks. It is also known as automatic
buffering.
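A POSIX pipe behaves like a bounded-capacity link: read() on an empty pipe blocks, which gives exactly the blocking-receive behaviour described above. A sketch (pipe_demo is an invented name):

```c
#include <unistd.h>
#include <sys/wait.h>

/* Child sends a 4-byte message; parent performs a blocking receive.
   Returns the number of bytes received into buf, or -1 on error. */
ssize_t pipe_demo(char *buf, size_t len) {
    int fd[2];
    if (pipe(fd) < 0)                  /* fd[0]: read end, fd[1]: write end */
        return -1;
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        close(fd[0]);
        write(fd[1], "ping", 4);       /* send: pipe not full, so no block */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);
    ssize_t n = read(fd[0], buf, len); /* blocks until the message arrives */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return n;
}
```

If the pipe's buffer were full, the child's write() would block too, which is the bounded-capacity behaviour described above.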
Message Format:
A message typically consists of two parts:
1. Header
2. Body
An operating system may support fixed-length messages, variable-length messages, or
both. Fixed-length messages minimize processing and storage overhead.
THREADS
A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.
A thread shares with its peer threads some information, such as the code segment, the data
segment, and open files. When one thread alters a shared data item, all other threads see
the change.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism, and they represent a software approach to
improving operating-system performance by reducing overhead; in many respects a
thread is equivalent to a classical (heavyweight) process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each
thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web server. They also provide a suitable foundation
for parallel execution of applications on shared memory multiprocessors. The following
figure shows the working of a single-threaded and a multithreaded process.
Differences between process and thread:
1. A process is heavy weight or resource intensive; a thread is light weight, taking fewer
resources than a process.
2. Process switching needs interaction with the operating system; thread switching does
not.
3. In multiple processing environments, each process executes the same code but has its
own memory and file resources; all threads of a process can share the same set of open
files and child processes.
4. If one process is blocked, then no other process can execute until the first process is
unblocked; while one thread is blocked and waiting, a second thread in the same task can
run.
5. Multiple processes without using threads use more resources; multiple threaded
processes use fewer resources.
6. In multiple processes, each process operates independently of the others; one thread
can read, write, or change another thread's data.
Types of Thread
Threads are implemented in two ways: user level threads and kernel level threads.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The
thread library contains code for creating and destroying threads, for passing message and
data between threads, for scheduling thread execution and for saving and restoring thread
contexts. The application starts with a single thread.
Advantages
Disadvantages
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management
code in the application area. Kernel threads are supported directly by the operating
system. Any application can be programmed to be multithreaded. All of the threads within
an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel
performs thread creation, scheduling and management in Kernel space. Kernel threads are
generally slower to create and manage than the user threads.
Advantages
Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
Multithreading Models
Some operating system provide a combined user level thread and Kernel level thread
facility. Solaris is a good example of this combined approach. In a combined system,
multiple threads within the same application can run in parallel on multiple processors
and a blocking system call need not block the entire process. Multithreading models are
three types
Many-to-Many Model
The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads.
The following diagram shows the many-to-many threading model where 6 user level
threads are multiplexing with 6 kernel level threads. In this model, developers can create
as many user threads as necessary and the corresponding Kernel threads can run in
parallel on a multiprocessor machine. This model provides the best accuracy on
concurrency and when a thread performs a blocking system call, the kernel can schedule
another thread for execution.
Many-to-One Model
The many-to-one model maps many user level threads to one kernel level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking
system call, the entire process will be blocked. Only one thread can access the Kernel at a
time, so multiple threads are unable to run in parallel on multiprocessors.
If the operating system does not support kernel threads, user-level thread libraries
implemented on it follow the many-to-one model.
One-to-One Model
There is a one-to-one relationship of user level threads to kernel level threads. This model
provides more concurrency than the many-to-one model. It also allows another thread to
run when a thread makes a blocking system call, and it supports multiple threads executing
in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one
model.
1. User-level threads are faster to create and manage; kernel-level threads are slower to
create and manage.
3. A user-level thread is generic and can run on any operating system; a kernel-level
thread is specific to the operating system.
A thread library provides the programmer with an Application program interface for
creating and managing thread.
There are two primary ways of implementing thread library, which are as follows −
The first approach is to provide a library entirely in user space with no kernel support. All
code and data structures for the library exist in user space, and invoking a function in the
library results in a local function call in user space, not in a system call.
The second approach is to implement a kernel level library supported directly by the
operating system. In this case the code and data structures for the library exist in kernel
space.
Invoking a function in the application program interface for the library typically results in a
system call to the kernel.
The main thread libraries which are used are given below −
POSIX threads − Pthreads, the threads extension of the POSIX standard, may be provided
as either a user level or a kernel level library.
WIN 32 thread − The windows thread library is a kernel level library available on
windows systems.
Java threads − The Java thread API allows threads to be created and managed directly in
Java programs.
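A minimal Pthreads example, creating one thread and joining it (compile with -pthread; double_in_thread is an invented helper for this sketch):

```c
#include <pthread.h>

/* Thread function: doubles the integer it is handed. Peer threads share
   the process's data, so the update is visible to the creator. */
static void *worker(void *arg) {
    int *n = arg;
    *n *= 2;
    return arg;
}

/* Create a thread, wait for it to terminate, and return the shared result. */
int double_in_thread(int x) {
    pthread_t tid;
    int value = x;
    if (pthread_create(&tid, NULL, worker, &value) != 0)
        return -1;                 /* creation failed */
    pthread_join(tid, NULL);       /* wait for the thread to terminate */
    return value;
}
```

The join is essential here: it guarantees the worker has finished (and its write to `value` is visible) before the stack variable goes out of scope.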
Thread Issues:
Below are some of the issues to consider in designing multithreaded programs.
The fork() and exec() System Calls
The fork() system call is used to create a duplicate process. The semantics of the fork() and
exec() system calls change in a multithreaded program.
If one thread in a program calls fork(), does the new process duplicate all threads, or is the
new process single-threaded? Some UNIX systems have chosen to have two versions of
fork(): one that duplicates all threads and another that duplicates only the thread that
invoked the fork() system call.
Signal Handling
Generally, a signal is used in UNIX systems to notify a process that a particular event has
occurred. A signal may be received either synchronously or asynchronously, depending on
the source of and the reason for the event being signalled.
All signals, whether synchronous or asynchronous, follow the same pattern:
1. A signal is generated by the occurrence of a particular event.
2. The generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
Cancellation
Thread cancellation is the task of terminating a thread before it has completed.
For example, if multiple database threads are concurrently searching through a database
and one thread returns the result, the remaining threads might be cancelled.
A target thread is a thread that is to be cancelled. Cancellation of a target thread may occur
in two different scenarios:
1. Asynchronous cancellation: one thread immediately terminates the target thread.
2. Deferred cancellation: the target thread periodically checks whether it should
terminate, allowing it to terminate itself in an orderly fashion.
Thread Pools
Creating a separate thread for every request raises two concerns. First, there is the time
required to create the thread before the request can be serviced, together with the fact that
the thread will be discarded once it has completed its work. Second, if all concurrent
requests are serviced in new threads, there is no bound on the number of threads
concurrently active in the system; unlimited threads could exhaust system resources such
as CPU time or memory.
The idea behind a thread pool is to create a number of threads at process start-up and
place them into a pool, where they sit and wait for work.
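A teaching sketch of a thread pool with Pthreads: the workers are created up front and pull tasks from a shared queue until it is drained. A production pool would instead block on a condition variable waiting for new work to arrive. Compile with -pthread; all names are invented for this sketch:

```c
#include <pthread.h>

#define NTHREADS 4
#define NTASKS   16

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task;              /* index of the next unit of work */
static int results[NTASKS];

static void *pool_worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int t = next_task < NTASKS ? next_task++ : -1;  /* take a task */
        pthread_mutex_unlock(&lock);
        if (t < 0)
            return NULL;           /* queue drained: worker exits */
        results[t] = t * t;        /* the "work": square the task index */
    }
}

/* Run all tasks on the pool, then return the sum of the results. */
int pool_run(void) {
    pthread_t tids[NTHREADS];
    next_task = 0;
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, pool_worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);   /* wait for the pool to finish */
    int sum = 0;
    for (int i = 0; i < NTASKS; i++)
        sum += results[i];
    return sum;
}
```

The pool caps concurrency at NTHREADS regardless of how many tasks arrive, which is exactly the bound on active threads that motivates pools in the text above.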