OS Notes Module 1, 2 and 3
MODULE 1
Chapter-1
INTRODUCTION TO OPERATING
SYSTEM
Q) What is an Operating System?
An operating system is system software that acts as an intermediary between a user of a computer
and the computer hardware. It manages the computer hardware and allows the user to
execute programs in a convenient and efficient manner.
User Views:- The user's view of the operating system depends on the type of user.
Personal Computer:
If the user is using a standalone system, then the OS is designed for
• Ease of use and high performance.
• Here resource utilization is not given importance.
System Views:- Operating system can be viewed as a resource allocator and control program.
Resource allocator –
• The OS acts as a manager of hardware and software resources.
• CPU time, memory space, file-storage space, I/O devices, shared files
• The OS assigns the resources to the requesting program depending on the
priority.
Control Program – The OS is a control program and manages the execution of user programs to
prevent errors and improper use of the computer.
Storage Hierarchy –
• As we move down the hierarchy, the cost per bit generally decreases, whereas
the access time and the capacity of storage generally increase.
• In addition to differing in speed and cost, the various storage systems are either
volatile or nonvolatile.
• Volatile storage
• nonvolatile storage
Multi-Processor Systems
• These systems have two or more processors which can share:
→ bus → clock → memory/peripheral devices
Advantages:
1. Increased Throughput
➢ By increasing no. of processors, we expect to get more work done in less time.
2. Economy of Scale
➢ These systems are cheaper because they can share
→ peripherals → mass-storage → power-supply.
➢ If many programs operate on same data, they will be stored on one disk & all processors can
share them.
3. Increased Reliability
➢ The failure of one processor will not halt the system.
Two techniques to maintain 'Increased Reliability' - graceful degradation & fault tolerance
1. Graceful degradation – the ability to continue providing service proportional to the level
of surviving hardware.
2. Fault tolerance – when one processor fails, the failure is detected, diagnosed, and, if
possible, corrected, and the system continues operation.
2. Symmetric Multiprocessing
• All processors are peers; no master-slave relationship exists between processors.
• Advantages:
1) Many processes can run simultaneously.
2) Processes and resources are shared dynamically among the various processors.
• Disadvantage:
1) Since CPUs are separate, one CPU may be sitting idle while another CPU is overloaded. This
results in inefficiencies.
2) Symmetric Clustering
• Two or more nodes are running applications, and are monitoring each other.
• Advantage:
1) This mode is more efficient, as it uses all of the available hardware.
• It does require that more than one application be available to run.
Q) Write a short note on: a)Batch Systems b) Multi-Programmed Systems c) Time-Sharing Systems
Batch Systems
• Early computers were physically enormous machines run from a console.
• The common input devices were card readers and tape drives.
• The common output devices were line printers, tape drives, and card punches.
• The user
→ prepared a job which consisted of the program, the data, and control information
→ submitted the job to the computer-operator.
• The job was usually in the form of punch cards.
• At some later time (after minutes, hours, or days), the output appeared.
• To speed up processing, operators batched together jobs with similar needs and ran them through
the computer as a group.
• Disadvantage:
The CPU is often idle, because mechanical I/O devices are much slower than the CPU.
Multi-Programmed Systems
• Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to
execute.
• The idea is as follows:
1) OS keeps several jobs in memory simultaneously (Figure 1.8).
2) OS picks and begins to execute one of the jobs in the memory. Eventually, the job may have to wait for some
task, such as an I/O operation, to complete.
3) OS simply switches to, and executes, another job.
4) When that job needs to wait, the CPU is switched to another job, and so on.
5) As long as at least one job needs to execute, the CPU is never idle.
• If several jobs are ready to be brought into memory, and if there is not enough room for all of them,
then the system must choose among them. Making this decision is job scheduling.
• If several jobs are ready to run at the same time, the system must choose among them. Making this decision is CPU
scheduling.
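The switching loop described in steps 1-5 above can be sketched as a toy round-robin simulation. The job names and burst counts are made up for illustration:

```python
# Toy simulation of multiprogramming: jobs alternate CPU bursts and
# I/O waits; whenever the running job blocks for I/O, the CPU switches
# to the next ready job instead of sitting idle.
from collections import deque

def run_jobs(jobs):
    """jobs: dict mapping name -> number of CPU bursts left.
    Returns the order in which bursts executed on the CPU."""
    ready = deque(jobs)          # ready queue of job names
    trace = []
    while ready:
        job = ready.popleft()    # scheduler picks the next ready job
        trace.append(job)        # the job runs one CPU burst...
        jobs[job] -= 1
        if jobs[job] > 0:        # ...then blocks for I/O; once the I/O
            ready.append(job)    # completes, it rejoins the ready queue
    return trace

print(run_jobs({"J1": 2, "J2": 1, "J3": 2}))   # ['J1', 'J2', 'J3', 'J1', 'J3']
```

Note the CPU is never idle while any job still has a burst left, which is exactly the goal stated above.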
Dual-Mode Operation
• Working principle:
1) At system boot time, the hardware starts in kernel-mode.
2) The OS is then loaded and starts user applications in user-mode.
3) Whenever a trap or interrupt occurs, the hardware switches from user-mode to kernel-mode
(that is, changes the state of the mode bit to 0).
4) The system always switches to user-mode (by setting the mode bit to 1) before passing
control to a user-program.
• Dual mode protects
→ OS from errant users and
→ errant users from one another.
• Privileged instruction is executed only in kernel-mode.
• If an attempt is made to execute a privileged instruction in user-mode, the hardware treats it as illegal
and traps it to the OS.
• A system call is invoked by a user program to ask the OS to perform tasks on behalf of the user.
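As a concrete, high-level illustration, Python's os module exposes thin wrappers over system calls: getpid() and write() below each trap into the kernel on the program's behalf. The filename demo.txt is just an example:

```python
# User programs reach kernel services only through system calls.
# Python's os module wraps them: os.getpid() -> getpid(),
# os.open/os.write/os.close -> open()/write()/close().
import os

pid = os.getpid()   # system call: ask the kernel for this process's id

fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"written via the write() system call\n")
os.close(fd)

with open("demo.txt", "rb") as f:
    data = f.read()
print(pid > 0, data)
```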
Process Management
• The OS is responsible for the following activities:
1. Creating and deleting both user and system processes
2. Suspending and resuming processes
3. Providing mechanisms for process synchronization
4. Providing mechanisms for process communication
5. Providing mechanisms for deadlock handling
• A process needs following resources to do a task:
→ CPU
→ memory and
→ files.
• The resources are allocated to process
→ when the process is created or
→ while the process is running.
• When the process terminates, the OS reclaims all the reusable resources.
• A program by itself is not a process;
1) A program is a passive entity (such as the contents of a file stored on disk).
2) A process is an active entity.
• Two types of process:
1) A single-threaded process has one PC (program counter), which specifies the location of the
next instruction to be executed.
2) A multi-threaded process has one PC per thread, each specifying the location of the next
instruction to execute in that thread.
Memory Management
• The OS is responsible for the following activities:
▪ Keeping track of which parts of memory are currently being used and by whom
▪ Deciding which processes are to be loaded into memory when memory space becomes available
▪ Allocating and de-allocating memory space as needed.
• Main memory is a large array of bytes, ranging in size from hundreds of thousands to billions.
Storage Management
File-System Management
Mass-Storage Management
Caching
File System Management
• The OS is responsible for following activities:
1) Creating and deleting files.
2) Creating and deleting directories.
3) Supporting primitives for manipulating files & directories.
4) Mapping files onto secondary storage.
5) Backing up files on stable (non-volatile) storage media.
• Computer stores information on different types of physical
media. For ex: magnetic disk, optical disk.
• Each medium is controlled by a device (e.g. disk drive).
• The OS
→ maps files onto physical media and
→ accesses the files via the storage devices
• File is a logical collection of related information.
• File consists of both program & data.
• Data files may be numeric, alphabetic, or binary.
• When multiple users have access to files, access control (read, write) must be specified.
Caching
• Caching is an important principle of computer systems.
• Information is normally kept in some storage system (such as main memory).
• As it is used, it is copied into a faster storage system, called the cache, on a temporary basis.
• When we need a particular piece of information:
1. We first check whether the information is in the cache.
2. If information is in cache, we use the information directly from the cache.
3. If information is not in cache, we use the information from the source
putting a copy in the cache under the assumption that we will need it again soon.
• In addition, internal programmable registers, such as index registers, provide high-speed cache for
main memory.
• The compiler implements the register-allocation and register-replacement algorithms to decide which
information to keep in registers and which to keep in main memory.
• Most systems have an instruction cache to hold the instructions expected to be executed next.
• Most systems have one or more high-speed data caches in the memory hierarchy
• Because caches have limited size, cache management is an important design problem
Careful selection of cache size & of a replacement policy can result in greatly increased performance
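The check-cache-then-source steps above can be sketched as a tiny look-aside cache in front of a slow backing store. Here a plain dict stands in for the slower storage level:

```python
# Look-aside caching: check the cache first; on a miss, fetch from the
# source and keep a copy, expecting the item to be needed again soon.
backing_store = {"A": 1, "B": 2, "C": 3}   # slow source of truth
cache = {}                                  # small, fast copy
hits = misses = 0

def lookup(key):
    global hits, misses
    if key in cache:             # steps 1-2: found in cache, use it directly
        hits += 1
        return cache[key]
    misses += 1                  # step 3: not in cache, go to the source...
    value = backing_store[key]
    cache[key] = value           # ...and cache a copy for future reuse
    return value

for k in ["A", "B", "A", "A", "B"]:
    lookup(k)
print(hits, misses)   # 3 2
```

With a bounded cache, the interesting design problem becomes which entry to evict, which is the replacement-policy question mentioned above.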
I/O Systems
• A memory-management component that includes buffering, caching, and spooling.
• A general device-driver interface.
• Drivers for specific hardware devices.
• In a hierarchical storage structure, the same data may appear in different levels of the storage
system.
• For example, suppose we want to retrieve an integer A from magnetic disk for a processing program.
The operation proceeds by first issuing an I/O operation to copy the disk block on which A
resides to main memory. This operation is followed by copying A to the cache and to an
internal register. Thus, the copy of A appears in several places: on the magnetic disk, in main
memory, in the cache, and in an internal register.
• In a multiprocessor environment, in addition to internal registers, each of the CPUs also
contains a local cache. In such an environment, a copy of A may exist simultaneously in
several caches.
• Since the various CPUs can all execute concurrently, an update to the value of A in one
cache must be immediately reflected in all other caches where A resides. This requirement is
called cache coherency, and it is usually a hardware problem (handled below the operating-system
level).
Multimedia Systems
• Multimedia data consist of audio and video files as well as conventional files.
• These data differ from conventional data in that multimedia data must be delivered(streamed)
according to certain time restrictions.
• Multimedia describes a wide range of applications. These include
→ audio files such as MP3
→ DVD movies
→ video conferencing
→ live webcasts of speeches
Handheld Systems
• Handheld systems include
→ PDAs and
→ cellular telephones.
• Main challenge faced by developers of handheld systems: Limited size of devices.
• Because of small size, most handheld devices have a
→ small amount of memory,
→ slow processors, and
→ small display screens.
Traditional Computing
• Used in office environment:
➢ PCs connected to a network, with servers providing file and print services.
Client-Server Computing
• Servers can be broadly categorized as : 1) Compute servers and
2) File servers
1) Compute-server system provides an interface to which a client can send a request to perform an
action (for example, read data).
➢ In response, the server executes the action and sends back results to the client.
2) File -server system provides a file-system interface where clients can create, read, and delete files.
➢ For example: web server that delivers files to clients running web browsers.
Peer-to-Peer Computing
• All nodes are considered peers, and each may act as either a client or a server(Figure 1.4).
• Advantage:
In a client-server system, the server is a bottleneck;
but in a peer-to-peer system, services can be provided by several nodes distributed
throughout the network.
• A node must first join the network of peers.
• Determining what services are available is done in one of two general ways:
1) When a node joins a network, it registers its service with a centralized lookup service on the network.
➢ Any node desiring a specific service first contacts this centralized lookup service to determine which
node provides the service.
2) A peer broadcasts a request for the service to all other nodes in the network. The node (or nodes)
providing that service responds to the peer.
Chapter-2
Operating System Services
Q) What is OS? Explain OS services?
• An OS provides an environment for the execution of programs.
• It provides services to
1. users and
2. programs (the system).
I/O Operations
➢ The OS must provide a means to do I/O operations because users cannot control I/O devices
directly.
➢ For specific devices, special functions may be desired (ex: to blank a CRT screen).
Error Detection
➢ Errors may occur in
→ CPU & memory-hardware (ex: power failure)
→ I/O devices (ex: lack of paper in the printer) and
→ user program (ex: arithmetic overflow)
➢ For each type of error, OS should take appropriate action to ensure correct & consistent
computing.
1) Command Interpreter
• Main function:
To get and execute the next user-specified command
• The commands are used to manipulate files i.e. create, copy, print, execute, etc.
• Two general ways to implement:
1) Command interpreter itself contains code to execute command.
2) Commands are implemented through system programs. This is used by UNIX.
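The second implementation style can be sketched in a few lines: the interpreter itself contains no command code and simply launches each command as a separate program. This is a minimal illustration, not how any real shell is written:

```python
# Minimal command interpreter in the "system programs" style used by
# UNIX: each command is run as an external program in its own process.
import subprocess, sys

def interpret(line):
    """Parse one command line; run it as a program and return its output."""
    args = line.split()
    if not args or args[0] == "exit":
        return None
    # launch the command as its own process and wait for it to finish
    return subprocess.run(args, capture_output=True, text=True).stdout

# use the Python interpreter itself as an example "command"
out = interpret(sys.executable + " -c print(40+2)")
print(out)   # 42
```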
Process Control
• System calls used:
➢ end, abort
➢ load, execute
➢ create process, terminate process
➢ get process attributes, set process attributes
➢ wait for time
➢ wait event, signal event
➢ allocate and free memory
• A running program needs to be able to halt its execution either normally (end) or abnormally (abort).
• If program runs into a problem, error message may be generated and dumped into a file.
This file can be examined by a debugger to determine the cause of the problem.
• After program termination, the OS must transfer control back to the command interpreter.
➢ Command interpreter then reads next command.
➢ In interactive system, the command interpreter simply continues with next command.
➢ In GUI system, a pop-up window will request action from user.
How to deal with new process?
• A process executing one program can load and execute another program.
• Where to return control when the loaded program terminates?
The answer depends on the existing program:
1) If control returns to the existing program when the new program terminates, we must save the
memory image of the existing program. (Thus, we have effectively created a mechanism for one
program to call another program).
2) If both programs continue concurrently, we have created a new process to be multiprogrammed.
• We should be able to control the execution of a process. i.e. we should be able to determine and reset
the attributes of a process such as:
→ job's priority or
→ maximum execution time
• We may also want to terminate process that we created if we find that it
→ is incorrect or
→ is no longer needed.
• We may need to wait for processes to finish their execution.
We may want to wait for a specific event to occur.
• The processes should then signal when that event has occurred.
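Several of these calls can be illustrated with Python's subprocess module: create a process, wait for it, and observe normal versus abnormal termination via the exit status:

```python
# Process control sketched with subprocess: create process / load,
# execute / wait, then inspect how each child terminated.
import subprocess, sys

# a child that ends normally (exit status 0)
ok = subprocess.run([sys.executable, "-c", "raise SystemExit(0)"])
# a child that terminates abnormally with a nonzero status
bad = subprocess.run([sys.executable, "-c", "raise SystemExit(3)"])

# subprocess.run() waits for the child, like a wait-for-process call
print(ok.returncode, bad.returncode)   # 0 3
```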
File Management
• System calls used:
➢ create file, delete file
➢ open, close
➢ read, write, reposition
➢ get file attributes, set file attributes
• Working procedure:
1. We need to create and delete files.
2. Once the file is created,
→ we need to open it and to use it.
→ we may also read from or write to it.
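The working procedure maps directly onto the usual file-system wrappers; the filename here is invented for the example:

```python
# Create, open, write, read, get attributes, and delete a file,
# mirroring the system calls listed above.
import os

def demo(path):
    with open(path, "w") as f:       # create file + open + write
        f.write("hello")
    with open(path) as f:            # open + read
        text = f.read()
    size = os.stat(path).st_size     # get file attributes (size)
    os.remove(path)                  # delete file
    return text, size, os.path.exists(path)

print(demo("fm_demo.txt"))   # ('hello', 5, False)
```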
Device Management
• System calls used:
➢ request device, release device;
➢ read, write, reposition;
➢ get device attributes, set device attributes;
➢ logically attach or detach devices.
• A program may need additional resources to execute.
• Additional resources may be
→ memory
→ tape drives or
→ files.
• If the resources are available, they can be granted, and control can be returned to the user program; If the
resources are unavailable, the program may have to wait until sufficient resources are available.
• Files can be thought of as virtual devices. Thus, many of the system calls used for files are also used
for devices.
• In multi-user environment,
1. We must first request the device, to ensure exclusive use of it.
2. After we are finished with the device, we must release it.
• Once the device has been requested (and allocated), we can read and write the device.
• Due to lot of similarity between I/O devices and files, OS (like UNIX) merges the two into a combined
file-device structure.
Information Maintenance
• System calls used:
➢ get time or date, set time or date
➢ get system data, set system data
➢ get process, file, or device attributes
➢ set process, file, or device attributes
• Many system calls exist simply for the purpose of transferring information between the user program
and the OS.
For ex,
1. Most systems have a system call to return
→ current time and
→ current date.
2. Other system calls may return information about the system, such as
→ number of current users
→ version number of the OS
→ amount of free memory or disk space.
3. The OS keeps information about all its processes, and there are system
calls to access this information.
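A minimal sketch of such information-transfer calls, using Python's wrappers for the underlying system calls:

```python
# Information maintenance: get the current time/date and query an
# attribute of our own process.
import os, time, datetime

now = time.time()                # "get time" (wraps the time() call)
today = datetime.date.today()    # current date
pid = os.getpid()                # an attribute of our own process
print(now > 0, today.year >= 1970, pid > 0)
```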
Communication
• System calls used:
➢ create, delete communication connection
➢ send, receive messages
➢ transfer status information
➢ attach or detach remote devices
• Two models of communication.
1) Message-passing model and 2) Shared-memory model
Implementation
• OSs are nowadays written in higher-level languages such as C/C++.
• Advantages of higher-level languages:
1. Faster development and
2. OS is easier to port.
• Disadvantages of higher-level languages:
1) Reduced speed and
2) Increased storage requirements.
Figure A layered OS
iii. Micro-Kernels
• Main function:
To provide a communication facility between
→ client program and
→ various services running in user-space.
• Communication is provided by message passing (Figure 1.20).
• All non-essential components are
→ removed from the kernel and
→ implemented as system- & user-programs.
• Advantages:
1. Ease of extending the OS. (New services are added to user space w/o
modification of kernel).
2. Easier to port from one hardware design to another.
3. Provides more security & reliability.(If a service fails, rest of the OS
remains untouched.).
4. Provides minimal process and memory management.
• Disadvantage:
1) Performance decreases due to increased system function overhead.
iv. Modules
• The kernel has
→ set of core components and
→ dynamic links in additional services during boot time( or run time).
• Seven types of modules in the kernel (Figure 1.21):
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers
Mac OS X Structure
• The top layers include
→ application environments and
→ set of services providing a graphical interface to applications.
• Kernel environment consists primarily of
→ Mach microkernel and
→ BSD kernel.
• Mach provides
→ memory management;
→ support for RPCs & IPC and
→ thread scheduling.
• BSD component provides
→ BSD command line interface
→ support for networking and file systems and
→ implementation of POSIX APIs
• The kernel environment provides an I/O kit for development of
→ device drivers and
→ dynamic loadable modules (which Mac OS X refers to as kernel extensions).
Virtual Machines
Q) What are VMs? Explain its implementation and benefits
Q) What are virtual machines? Explain with an example
Q) What are VMs? Explain the implementation of the JVM. What is the advantage of JIT?
IDEA: to abstract the hardware of a single computer (the CPU, memory, disk drives, network
interface cards, and so forth) into several different execution environments, thereby creating the
illusion that each separate execution environment is running its own private computer.
• Creates an illusion that a process has its own processor with its own memory.
• Host OS is the main OS installed in system and the other OS installed in the system are called guest
OS.
Implementation
• Although the virtual-machine concept is useful, it is difficult to implement.
• Additional work is required to provide an exact duplicate of the underlying machine.
• Remember that the underlying machine has two modes: user mode and kernel mode.
• The virtual-machine software can run in kernel mode, since it is the operating system. The virtual
machine itself can execute in only user mode.
Benefits
• Able to share the same hardware and run several different execution environments(OS).
• Host system is protected from the virtual machines and the virtual machines are protected from
one another.
• Errors in one OS will not affect the other guest systems and host systems.
• Even though the virtual machines are separated from one another, software resources can be shared.
Examples
VMware
• VMware is a popular commercial application that abstracts Intel 80X86 hardware
into isolated virtual machines.
• The virtualization tool runs in the user-layer on top of the host OS.
• VMware runs as an application on a host operating system such as Windows or Linux
• In the scenario below, Linux is running as the host operating system; FreeBSD, Windows NT, and Windows
XP are running as guest operating systems.
• The virtualization layer is the heart of VMware, as it abstracts the physical hardware into isolated virtual
machines running as guest operating systems.
• Each virtual machine has its own virtual CPU, memory, disk drives, network interfaces, and so forth.
The Java Virtual Machine
• For each Java class, the compiler produces an architecture-neutral bytecode output.
• The JVM consists of a class loader and a Java interpreter that executes the architecture-neutral
bytecodes
• The class loader loads the compiled .class files from both the Java program and the Java API for
execution by the Java interpreter.
• After a class is loaded, the verifier checks that the .class file is valid Java bytecode and does not
overflow or underflow the stack.
• The JVM automatically manages memory by performing garbage collection-the practice of
reclaiming memory from objects no longer in use and returning it to the system.
• Much research focuses on garbage-collection algorithms for increasing performance.
• The JVM may be implemented in software on top of a host operating system, such as Windows,
Linux, or Mac OS X, or as part of a web browser.
• Alternatively, the JVM may be implemented in hardware on a chip specifically designed to run
Java programs.
• If the JVM is implemented in software, the Java interpreter interprets the bytecode operations one at
a time.
• A faster software technique is to use a just-in-time (JIT) compiler: the first time a method is
invoked, its bytecodes are compiled into native machine code and cached, so subsequent
invocations run at native speed.
Figure: JVM
System Boot
• The operating system must be made available to the hardware so the hardware can start it.
• A small piece of code, the bootstrap loader, locates the kernel, loads it into memory, and starts it.
Sometimes a two-step process is used, in which a boot block at a fixed location loads the bootstrap loader.
• When power is initialized on the system, execution starts at a fixed memory location. Firmware is
used to hold the initial boot code.
BGS Institute of Technology Operating Systems 17CS64
Module-2
Process Concepts
Q) What is process? Explain the states with process state diagram
• A process is a program in execution.
• It also includes
1) Program Counter to indicate the current activity.
2) Registers Content of the processor.
3) Process Stack contains temporary data.
4) Data Section contains global variables.
5) Heap is memory that is dynamically allocated during process run time.
• A program by itself is not a process.
1) A process is an active-entity.
2) A program is a passive-entity such as an executable-file stored on disk.
• A program becomes a process when an executable-file is loaded into memory.
• If you run many copies of a program, each is a separate process.
• The text-sections are equivalent, but the data-sections vary.
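The point that each running copy of a program is a separate process can be checked directly: start the same tiny program twice and compare the pids the two children report. (This assumes, reasonably, that a pid is not reused between the two runs.)

```python
# Run the same "program" twice; each run is a separate process with
# its own pid, distinct from the parent's.
import subprocess, sys, os

def run_copy():
    """Start one copy of a tiny program; the child prints its own pid."""
    out = subprocess.run([sys.executable, "-c", "import os; print(os.getpid())"],
                         capture_output=True, text=True)
    return int(out.stdout)

p1, p2 = run_copy(), run_copy()
print(p1 != p2, p1 != os.getpid())
```

Both copies share the same text section (the same one-line program) but run as distinct processes.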
Process Control Block (PCB)
• Each process is represented in the OS by a process control block (PCB), which contains:
1. Process State
➢ The state may be new, ready, running,
→ waiting or
→ halted.
2. Program Counter
➢ This indicates the address of the next instruction to be executed for the process.
3. CPU Registers
➢ These include
→ accumulators (AX)
→ index registers (SI, DI)
→ stack pointers (SP) and
→ general-purpose registers (BX, CX, DX).
4. CPU Scheduling Information
➢ This includes
→ priority of process
→ pointers to scheduling-queues and
→ scheduling-parameters.
5. Memory Management Information
➢ This includes
→ value of base- & limit-registers and
→ value of page-tables( or segment-tables).
6. Accounting Information
➢ This includes
→ amount of CPU time
→ time-limit and
→ process-number.
7. I/O Status Information
➢ This includes
→ list of I/O devices
→ list of open files.
Process Scheduling
Objective of multiprogramming:
To have some process running at all times to maximize CPU utilization.
Objective of time-sharing:
To switch the CPU between processes so frequently that users can interact with each program
while it is running.
To meet above 2 objectives: Process scheduler is used to select an available process for program-
execution on the CPU.
Scheduling Queues
• Three types of scheduling-queues:
1) Job Queue
➢ This consists of all processes in the system.
➢ As processes enter the system, they are put into a job-queue.
2) Ready Queue
➢ This consists of the processes that are residing in main memory, ready and waiting to execute.
3) Device Queue
➢ This consists of the processes that are waiting for an I/O device.
➢ Each device has its own device-queue.
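The migrations between these queues can be sketched with deques standing in for the ready queue and one device queue (process names are illustrative):

```python
# Queue migration: a process leaves the ready queue to run, moves to a
# device queue when it requests I/O, and rejoins the ready queue when
# the I/O completes.
from collections import deque

ready = deque(["P1", "P2", "P3"])
disk_queue = deque()               # device queue for one disk

running = ready.popleft()          # dispatcher picks P1 to run
disk_queue.append(running)         # P1 issues an I/O request and waits
running = ready.popleft()          # CPU switches to P2

done = disk_queue.popleft()        # disk interrupt: P1's I/O finished
ready.append(done)                 # P1 rejoins the ready queue

print(running, list(ready))        # P2 ['P3', 'P1']
```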
• Once a process is allocated the CPU and is executing, one of several events could occur:
1) The process could issue an I/O request and then be placed in an I/O queue.
2) The process could create a new subprocess and wait for the subprocess's termination.
3) The process could be interrupted and put back in the ready-queue.
Q) What is Scheduler? Types, Differentiate b/w long term and short term schedulers
3 types of schedulers:
1. A long-term scheduler or Job scheduler – selects jobs from the job pool (of secondary
memory, disk) and loads them into the memory.
2. The short-term scheduler, or CPU Scheduler – selects job from memory and assigns the
CPU to it.
3. The medium-term scheduler – swaps processes out of memory (to disk) to reduce the degree
of multiprogramming, and later reintroduces (swaps) them back into memory to continue execution.
Context Switch
Context-switch means saving the state of the old process and switching the CPU to another process.
The context of a process is represented in the PCB of the process; it includes
→ value of CPU registers
→ process-state and
→ memory-management information.
Disadvantages:
Context-switch time is pure overhead, because the system does no useful work while
switching.
Context-switch times are highly dependent on hardware support.
Process Creation
• A process may create a new process via a create-process system-call.
• The creating process is called a parent-process.
• The new process created by the parent is called the child-process (Sub-process).
• OS identifies processes by pid (process identifier), which is typically an integer-number.
• A process needs the following resources to accomplish its task:
→ CPU time
→ memory and
→ I/O devices.
• A child-process may obtain its resources directly from the OS, or it may share a subset of the parent's resources.
Process Termination
• A process terminates when it executes the last statement (in the program).
• Then, the OS deletes the process by using exit() system-call.
• Then, the OS de-allocates all the resources of the process. The resources include
→ memory
→ open files and
→ I/O buffers.
• Process termination can also occur in the following case:
→ A process can cause the termination of another process via a TerminateProcess() system-call
(usually invoked only by the parent of the process to be terminated).
Inter-Process Communication
Processes may cooperate for several reasons:
• Information Sharing - There may be several processes which need to access the same file. So
the information must be accessible at the same time to all users.
• Computation speedup - Often a solution to a problem can be solved faster if the problem can
be broken down into sub-tasks, which are solved simultaneously
• Modularity - A system can be divided into cooperating modules and executed by sending
information among one another.
• Convenience - Even a single user can work on multiple task by information sharing.
Cooperating processes require some type of inter-process communication. This is allowed by 2 models:
1. Shared Memory model
2. Message passing model
2 Message-Passing Model
A mechanism to allow process communication without sharing address space. It is used in distributed
systems.
• Message-passing systems use system calls for "send message" and "receive message".
• A communication link must be established between the cooperating processes before messages
can be sent.
• There are three design issues in implementing the link between the sender and the receiver –
o Direct or indirect communication ( naming )
o Synchronous or asynchronous communication (Synchronization)
o Automatic or explicit buffering.
1. Naming
Processes that want to communicate must have a way to refer to each other. They can use either direct
or indirect communication.
a) Direct communication – the sender and receiver must explicitly name each other. The syntax
of the send() and receive() functions is as follows –
• send(P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
Disadvantage of direct communication – if the identifier of a process changes, every place in
the system (sender and receiver) where messages are sent and received must be updated to use
the new identifier.
b) Indirect communication – messages are sent to and received from mailboxes (ports).
Two processes can communicate only if they have a shared mailbox. The send and receive functions are –
• send(A, message) – send a message to mailbox A
• receive(A, message) – receive a message from mailbox A
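A minimal sketch of indirect communication, with threads standing in for processes and a queue.Queue standing in for mailbox A. Note that sender and receiver name the mailbox, not each other:

```python
# Indirect message passing: both sides refer only to the shared
# mailbox A, never to each other's identity.
import threading, queue

A = queue.Queue()        # the shared mailbox
result = []

def sender():
    A.put("hello")       # send(A, message)

def receiver():
    result.append(A.get())   # receive(A, message); blocks until a message arrives

ts = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in ts: t.start()
for t in ts: t.join()
print(result)            # ['hello']
```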
2. Synchronization
The send and receive messages can be implemented as either blocking or non-blocking.
Blocking (synchronous) send - sending process is blocked (waits) until the message is
received by receiving process or the mailbox.
Non-blocking (asynchronous) send - sends the message and continues (does not wait)
3. Buffering
When messages are passed, a temporary queue is created. Such queue can be of three capacities:
Zero capacity – The buffer size is zero (buffer does not exist). Messages are not stored in
the queue. The senders must block until receivers accept the messages.
Bounded capacity - The queue is of fixed size (n). Senders must block if the queue is full;
after sending 'n' messages without any being received, the sender is blocked.
Unbounded capacity - The queue is of infinite capacity. The sender never blocks.
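The bounded-capacity case can be observed directly with a fixed-size queue; a timeout substitutes for letting the sender block forever:

```python
# Bounded-capacity buffering: with a fixed-size queue (n = 2), the
# sender blocks once the queue is full.
import queue

mailbox = queue.Queue(maxsize=2)    # bounded capacity, n = 2
mailbox.put("m1")
mailbox.put("m2")
try:
    mailbox.put("m3", timeout=0.1)  # queue full: the sender would block
    blocked = False
except queue.Full:
    blocked = True
print(blocked, mailbox.qsize())     # True 2
```

Zero capacity corresponds to maxsize such that every put blocks until a matching get, and unbounded capacity to a queue with no maxsize, where put never blocks.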
Chapter-2
MULTI-THREADED PROGRAMMING
Q) What is thread? Explain the benefits of multithreaded programming
Multi-Threaded Programming
• A thread is a basic unit of CPU utilization.
• Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads
• It consists of
→ thread ID
→ PC
→ register-set and
→ stack.
• It shares code-section & data-section with other threads belonging to the same process
• A traditional (or heavy weight) process has a single thread of control.
• If a process has multiple threads of control, it can perform more than one task at a time. Such a
process is called multi-threaded process
Motivation
1) The software-packages that run on modern PCs are multithreaded.
An application is implemented as a separate process with several threads of control.
For ex: A word processor may have
→ first thread for displaying graphics
→ second thread for responding to keystrokes and
→ third thread for performing grammar checking.
2) In some situations, a single application may be required to perform several similar tasks.
For ex: A web-server may create a separate thread for each client request.
This allows the server to service several concurrent requests.
3) RPC servers are multithreaded.
When a server receives a message, it services the message using a separate thread.
This allows the server to service several concurrent requests.
4) Most OS kernels are multithreaded;
Several threads operate in kernel, and each thread performs a specific task, such as
→ managing devices or
→ interrupt handling.
Benefits
1) Responsiveness
• Multithreading an interactive application may allow a program to keep running even if part of
it is blocked or performing a lengthy operation, increasing responsiveness to the user.
2) Resource Sharing
• By default, threads share the memory (and resources) of the process to which they
belong. Thus, an application is allowed to have several different threads of activity
within the same address-space.
3) Economy
• Allocating memory and resources for process-creation is costly.
Thus, it is more economical to create and context-switch threads.
One-to-One Model
• Each user thread is mapped to a kernel thread.
Thread Libraries
• A thread library provides the programmer with an API for creating and managing threads.
• There are two primary ways of implementing a thread library:
1) First Approach
➢ Provides a library entirely in user space with no kernel support.
➢ All code and data structures for the library exist in user space.
2) Second Approach
➢ Implements a kernel-level library supported directly by the OS.
➢ Code and data structures for the library exist in kernel space.
• Win32 Threads
• The technique for creating threads using the Win32 thread library is similar to
the Pthreads technique
• We must include the windows.h header file when using the Win32 API
• Threads are created using the CreateThread() function
• The attributes passed to it include security information, the size of the stack, and a flag
indicating whether the thread should start in a suspended state
• Java Threads
• Threads are the fundamental model of program execution in a Java program; the Java
language and its API provide a rich set of features for the creation and management of threads.
• All Java programs comprise at least a single thread of control.
• Two techniques for creating threads:
1. Create a new class that is derived from the Thread class and override its run() method.
2. Define a class that implements the Runnable interface and pass an instance of it to a
Thread object; the thread then executes that class's run() method.
• Creating a Thread object does not by itself create the new thread; rather, it is the start() method
that actually creates the new thread.
• The start() method for the new object does two things:
1. It allocates memory and initializes a new thread in the JVM.
2. It calls the run() method, making the thread eligible to be run by the JVM.
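The two Java creation techniques have close analogues in Python's `threading` module; the sketch below (illustrative names only) mirrors subclassing `Thread` versus handing a runnable callable to a `Thread` object:

```python
import threading

# Technique 1: subclass Thread and override run()
# (analogous to extending Java's Thread class).
class Worker(threading.Thread):
    def __init__(self):
        super().__init__()
        self.result = None

    def run(self):
        self.result = "subclassed"

# Technique 2: pass a callable to the Thread constructor
# (analogous to implementing Runnable and handing it to a Thread).
results = []
def task():
    results.append("runnable")

w = Worker()
t = threading.Thread(target=task)
# As in Java, constructing the object does not start a thread;
# start() creates the new thread, which then executes run().
w.start(); t.start()
w.join(); t.join()
assert w.result == "subclassed"
assert results == ["runnable"]
```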
Thread Cancellation
• This is the task of terminating a thread before it has completed.
• Target thread is the thread that is to be canceled
• Thread cancellation occurs in two different cases:
1. Asynchronous cancellation: One thread immediately terminates the target thread.
2. Deferred cancellation: The target thread periodically checks whether it should be terminated.
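Deferred cancellation can be sketched in Python, where a thread cannot be killed asynchronously anyway, by having the target thread poll a shared `threading.Event` flag at safe points (names are illustrative):

```python
import threading
import time

# Deferred cancellation: the target periodically checks a cancellation
# flag and exits cleanly, instead of being terminated from outside.
cancel_requested = threading.Event()
iterations = []

def target():
    while not cancel_requested.is_set():   # the periodic check
        iterations.append(1)               # one unit of safe work
        time.sleep(0.01)

t = threading.Thread(target=target)
t.start()
time.sleep(0.05)
cancel_requested.set()    # another thread requests cancellation...
t.join()                  # ...and the target exits at its next check
assert not t.is_alive()
assert len(iterations) >= 1
```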
Signal Handling
• In UNIX, a signal is used to notify a process that a particular event has occurred.
• All signals follow this pattern:
1. A signal is generated by the occurrence of a certain event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
• A signal handler is used to process signals.
• A signal may be received either synchronously or asynchronously, depending on the source.
1) Synchronous signals
➢ Delivered to the same process that performed the operation causing the signal.
➢ E.g. illegal memory access and division by 0.
2) Asynchronous signals
➢ Generated by an event external to a running process.
➢ E.g. user terminating a process with specific keystrokes <ctrl><c>.
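The generate → deliver → handle pattern can be sketched with Python's `signal` module on a POSIX system (SIGUSR1 is not available on Windows; handlers must be installed from the main thread):

```python
import os
import signal

handled = []

def handler(signum, frame):
    handled.append(signum)        # step 3: the signal is handled

signal.signal(signal.SIGUSR1, handler)    # install the signal handler
os.kill(os.getpid(), signal.SIGUSR1)      # steps 1-2: generate and deliver
assert handled == [signal.SIGUSR1]
```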
Thread Pools
• The basic idea is to
→ create a no. of threads at process-startup and
→ place the threads into a pool (where they sit and wait for work).
• Procedure:
1. When a server receives a request, it awakens a thread from the pool.
2. If a thread is available, the request is passed to it for service.
3. Once the service is completed, the thread returns to the pool.
• Advantages:
1) Servicing a request with an existing thread is usually faster than waiting to
create a thread.
2) The pool limits the no. of threads that exist at any one point.
• No. of threads in the pool can be based on factors such as
→ no. of CPUs
→ amount of memory and
→ expected no. of concurrent client-requests.
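The pool idea maps directly onto the standard library's `concurrent.futures.ThreadPoolExecutor`; the sketch below (with an illustrative `handle_request` task) caps the pool at four threads:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    return n * n          # stand-in for servicing one client request

# max_workers limits how many threads exist at any one point; each
# submitted "request" is handed to a waiting pool thread.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, n) for n in range(8)]
    results = [f.result() for f in futures]

assert results == [n * n for n in range(8)]
```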
PROCESS SCHEDULING
Basic Concepts
• In a single-processor system,
→ only one process may run at a time.
→ other processes must wait until the CPU is rescheduled.
• Objective of multiprogramming:
→ to have some process running at all times, in order to maximize CPU utilization.
CPU Scheduler
• This scheduler
→ selects a waiting-process from the ready-queue and
→ allocates CPU to the waiting-process.
• The ready-queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list.
• The records in the queues are generally process control blocks (PCBs) of the processes.
CPU Scheduling
• Four situations under which CPU scheduling decisions take place:
1) When a process switches from the running state to
the waiting state. For ex; I/O request.
2) When a process switches from the running state to
the ready state. For ex: when an interrupt
occurs.
3) When a process switches from the waiting state to
the ready state. For ex: completion of I/O.
4) When a process terminates.
• Scheduling under circumstances 1 and 4 is non-preemptive.
• Scheduling under circumstances 2 and 3 is preemptive.
Preemptive Scheduling
• This is driven by the idea of prioritized computation.
• Processes that are runnable may be temporarily suspended
• Disadvantages:
1) Incurs a cost associated with access to shared-data.
2) Affects the design of the OS kernel.
Dispatcher
• It gives control of the CPU to the process selected by the short-term scheduler.
• The function involves:
1) Switching context
2) Switching to user mode &
3) Jumping to the proper location in the user program to restart that program.
• It should be as fast as possible, since it is invoked during every process switch.
• Dispatch latency means the time taken by the dispatcher to
→ stop one process and
→ start another running.
Scheduling Criteria
• Different CPU-scheduling algorithms
→ have different properties and
→ may favor one class of processes over another.
• Criteria to compare CPU-scheduling algorithms:
1) CPU Utilization
➢ We must keep the CPU as busy as possible.
➢ In a real system, it ranges from 40% to 90%.
2) Throughput
➢ Number of processes completed per time unit.
Scheduling Algorithms
• CPU scheduling deals with the problem of deciding which of the processes in the
ready-queue is to be allocated the CPU.
• Following are some scheduling algorithms:
1) FCFS scheduling (First Come First Served)
2) Round Robin scheduling
3) SJF scheduling (Shortest Job First)
4) SRT scheduling
5) Priority scheduling
6) Multilevel Queue scheduling and
7) Multilevel Feedback Queue scheduling.
FCFS Scheduling
• The process that requests the CPU first is allocated the CPU first.
• The implementation is easily done using a FIFO queue.
• Procedure:
1) When a process enters the ready-queue, its PCB is linked onto the tail of the queue.
2) When the CPU is free, the CPU is allocated to the process at the queue's head.
3) The running process is then removed from the queue.
• Advantage:
1) Code is simple to write & understand.
• Disadvantages:
1) Convoy effect: All other processes wait for one big process to get off the CPU.
2) Non-preemptive (a process keeps the CPU until it releases it).
3) Not good for time-sharing systems.
4) The average waiting time is generally not minimal.
• Example: Suppose that the processes arrive at time 0 in the order P1, P2, P3, where P1 has a
much longer CPU burst than P2 and P3. The short processes must wait behind the long one, so
the average waiting time is large (the convoy effect).
• If instead the processes arrive in the order P2, P3, P1, the short processes complete first and
the average waiting time falls sharply.
• Thus, under FCFS the average waiting time depends strongly on the order in which the
processes arrive.
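The effect of arrival order can be checked numerically. The burst times below (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms) are an assumption for illustration, following the classic textbook example; they are not given in the notes above:

```python
def fcfs_avg_wait(bursts):
    """Average waiting time when all processes arrive at time 0
    and run to completion in the given (arrival) order."""
    wait, clock = 0, 0
    for burst in bursts:
        wait += clock        # this process waits for everyone before it
        clock += burst       # then occupies the CPU for its whole burst
    return wait / len(bursts)

# Order P1, P2, P3: the long job makes the short ones wait (convoy effect).
assert fcfs_avg_wait([24, 3, 3]) == 17.0
# Order P2, P3, P1: the same jobs, far smaller average waiting time.
assert fcfs_avg_wait([3, 3, 24]) == 3.0
```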
SJF Scheduling
• The CPU is assigned to the process that has the smallest next CPU burst.
• If two processes have the same length CPU burst, FCFS scheduling is used to break the tie.
• For long-term scheduling in a batch system, we can use the process time limit specified by the
user as the 'length'.
• SJF can't be implemented at the level of short-term scheduling, because there is no way to
know the exact length of the next CPU burst (it can only be estimated from previous bursts).
• Advantage:
• The SJF is optimal, i.e. it gives the minimum average waiting time for a given set of processes.
• Disadvantage:
• Determining the length of the next CPU burst is difficult.
• The SJF algorithm may be either 1) non-preemptive or 2) preemptive.
Preemptive SJF
• If the new process has a shorter next CPU burst than what is left of the executing process, that
process is preempted.
• It is also known as SRTF scheduling (Shortest-Remaining-Time-First).
• Example (for non-preemptive SJF): Consider the following set of processes, with the length of
the CPU-burst time given in milliseconds.
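A small sketch of non-preemptive SJF, using assumed burst times (P1 = 6, P2 = 8, P3 = 7, P4 = 3 ms, all arriving at time 0) since the original table is not reproduced above:

```python
def sjf_avg_wait(bursts):
    """Average waiting time under non-preemptive SJF with all
    processes available at time 0: run in order of burst length."""
    wait, clock = 0, 0
    for burst in sorted(bursts):     # shortest job first
        wait += clock
        clock += burst
    return wait / len(bursts)

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
# Execution order P4, P1, P3, P2 -> waits 0, 3, 9, 16 -> average 7 ms.
assert sjf_avg_wait(list(bursts.values())) == 7.0
# FCFS on the same set (order P1..P4) would average (0+6+14+21)/4 = 10.25 ms,
# illustrating why SJF is optimal for average waiting time.
```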
Priority Scheduling
• A priority is associated with each process.
• The CPU is allocated to the process with the highest priority.
• Equal-priority processes are scheduled in FCFS order.
• Priorities can be defined either internally or externally.
1) Internally-defined priorities.
➢ Use some measurable quantity to compute the priority of a process.
➢ For example: time limits, memory requirements, no. of open files.
2) Externally-defined priorities.
➢ Set by criteria that are external to the OS.
➢ For example: the importance of the process, the type and amount of funds being paid for
computer use, and other, often political, factors.
• Priority scheduling can be either preemptive or non-preemptive:
1) Preemptive
➢ The CPU is preempted if the priority of the newly arrived process is higher than the priority of
the currently running process.
2) Non-preemptive
➢ The newly arrived process is simply put at the head of the ready-queue.
• Advantage:
1) Higher-priority processes can be executed first.
• Disadvantage:
1) Indefinite blocking (starvation): low-priority processes may be left waiting indefinitely for the CPU.
Solution: Aging is a technique of increasing priority of processes that wait in system for a long
time.
• Example: Consider the following set of processes, assumed to have arrived at time 0, in the
order PI, P2, ..., P5, with the length of the CPU-burst time given in milliseconds.
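A sketch of non-preemptive priority scheduling with assumed bursts and priorities (lower number = higher priority), following the classic textbook values since the original table is not reproduced above:

```python
def priority_avg_wait(procs):
    """procs: list of (name, burst, priority), all arriving at time 0.
    Runs processes in priority order (lower number = higher priority)."""
    wait, clock = 0, 0
    order = []
    for name, burst, _prio in sorted(procs, key=lambda p: p[2]):
        order.append(name)
        wait += clock        # time spent waiting before this process runs
        clock += burst
    return order, wait / len(procs)

# Assumed example: (name, burst in ms, priority).
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
         ("P4", 1, 5), ("P5", 5, 2)]
order, avg = priority_avg_wait(procs)
# Execution order P2, P5, P1, P3, P4 -> waits 0, 1, 6, 16, 18 -> avg 8.2 ms.
assert order == ["P2", "P5", "P1", "P3", "P4"]
assert avg == 8.2
```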
Multilevel Feedback Queue Scheduling
• A process can move between the various queues.
• Processes are separated according to the features of their CPU bursts. For example:
1) If a process uses too much CPU time, it will be moved to a lower-priority
queue.
¤ This scheme leaves I/O-bound and interactive processes in the higher-priority
queues.
2) If a process waits too long in a lower-priority queue, it may be moved
to a higher- priority queue
¤ This form of aging prevents starvation.
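The demotion rule above can be sketched as a toy two-level feedback queue (the quantum and burst values are illustrative assumptions; aging back to the higher queue is omitted for brevity):

```python
from collections import deque

def mlfq(bursts, quantum=4):
    """Toy two-level feedback queue: a process that does not finish
    within the quantum is demoted to the low-priority queue, which
    runs round-robin only when the high queue is empty."""
    high = deque(bursts.items())   # (name, remaining-time) pairs
    low = deque()                  # demoted, CPU-bound processes
    finish_order = []
    while high or low:
        queue = high if high else low
        name, remaining = queue.popleft()
        if remaining > quantum:
            # Used the whole quantum without finishing: demote to
            # (or stay in) the low-priority queue.
            low.append((name, remaining - quantum))
        else:
            finish_order.append(name)
    return finish_order

# Short, interactive-style jobs finish from the high-priority queue;
# the CPU-bound job is demoted and completes last.
assert mlfq({"io1": 3, "cpu": 10, "io2": 2}) == ["io1", "io2", "cpu"]
```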