OS Notes Module 1, 2 and 3


BGS Institute of Technology Operating Systems 18CS63

MODULE 1
Chapter-1
INTRODUCTION TO OPERATING
SYSTEM
Q) What is an Operating System?
An operating system is system software that acts as an intermediary between a user of a computer
and the computer hardware. It is software that manages the computer hardware and allows the user to
execute programs in a convenient and efficient manner.

Operating system goals:


• Make the computer system convenient to use.
• Use the computer hardware in an efficient manner.
• Provide an environment in which users can easily interact with the computer.
• Act as a resource allocator and deallocator.

Computer System Structure (Components of Computer System)

A computer system consists of four main components:

• Hardware – provides the basic computing resources: CPU, memory, I/O devices


• Operating system – controls and coordinates the use of hardware among the various
applications and users
• Application programs – define the ways in which system resources are used to solve
users' computing problems: word processors, compilers, web browsers, database
systems, video games
• Users – people, machines, other computers

Figure : Abstract view of the components of a computer system

Dept. of ISE Dr. Siddartha B K


Q) What is OS? Explain from 2 perspectives / views?
Operating System can be viewed from two viewpoints– User views & System views

User Views: The user's view of the operating system depends on the type of user.

Personal Computer:
If the user is using a standalone system, the OS is designed for
• Ease of use and high performance.
• Here resource utilization is not given much importance.

Mainframe or Minicomputer:


If the users are at different terminals connected to a mainframe or minicomputer:
• Information and resources are shared by the terminals.
• The OS is designed to maximize resource utilization.
• The OS ensures that CPU time, memory and I/O are used efficiently and that no
single user takes more than the resources allotted to them.
Workstations:
• Connected to networks and servers.
• Each user has their own system and shares resources and files with other systems.
• Here the OS is designed for both ease of use and resource availability (files).
Embedded systems:
• These systems have little or no user interaction.
• Examples: washing machines and automobiles, where LEDs show the status of the work.
Handheld systems:
The OS is designed for ease of use and performance per unit of battery life.

System Views: The operating system can be viewed as a resource allocator and a control program.

Resource allocator –
• The OS acts as a manager of hardware and software resources.
• CPU time, memory space, file-storage space, I/O devices, shared files
• The OS assigns the resources to the requesting program depending on the
priority.

Control Program – The OS is a control program that manages the execution of user programs to
prevent errors and improper use of the computer.



Computer System Organization
Computer-system operation
• One or more CPUs, device controllers connect through common bus providing access
to shared memory.
• Each device controller is in-charge of a specific type of device.
• To ensure orderly access to the shared memory, a memory controller is provided
whose function is to synchronize access to the memory.

• When the system is switched on, the 'Bootstrap' program is executed.


• It is the initial program to run in the system.
• This program is stored in read-only memory (ROM) or in electrically erasable
programmable read-only memory (EEPROM).
• It initializes the CPU registers, memory, device controllers, and performs other initial setup.

On switch-on, the 'Bootstrap' program:

• Initializes the registers, memory and I/O devices
• Locates and loads the kernel into memory
• Starts the 'init' process
• Waits for an interrupt from the user.

Interrupt handling –

• The occurrence of an event is usually signaled by an interrupt.


• The interrupt can either be from the hardware or the software.
• Hardware may trigger an interrupt at any time by sending a signal to the CPU.
• Software triggers an interrupt by executing a special operation called a system call (also called
a monitor call).
• When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a
fixed location.
• The fixed location (Interrupt Vector Table) contains the starting address where the service
routine for the interrupt is located.
• After the execution of interrupt service routine, the CPU resumes the interrupted computation.
• Interrupts are an important part of computer architecture. Each computer design has its own
interrupt mechanism, but several functions are common. The interrupt must transfer control to
the appropriate interrupt service routine
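As a sketch, the dispatch step above can be modelled with a table of handlers; the interrupt numbers and handler names below are invented for illustration only.

```python
# Illustrative simulation of interrupt dispatch via a vector table.
# Interrupt numbers and handler names are invented for this sketch.

def timer_handler():
    return "timer tick serviced"

def keyboard_handler():
    return "key press serviced"

# The interrupt vector table maps an interrupt number to the starting
# location (here, simply a function) of its service routine.
INTERRUPT_VECTOR = {
    0: timer_handler,
    1: keyboard_handler,
}

def cpu_interrupted(interrupt_number):
    """On an interrupt, control transfers to the fixed location for that
    interrupt; the service routine runs, then execution resumes."""
    handler = INTERRUPT_VECTOR[interrupt_number]
    return handler()
```

In a real machine the table holds addresses of service routines and is indexed by the hardware itself; the dictionary lookup merely mimics that transfer of control.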

Q) Explain storage structure hierarchy with neat diagram


Storage Structure
• Computer programs must be in main memory (RAM) to be executed.
• Main memory is the large memory that the processor can access directly.
• It is commonly implemented in a semiconductor technology called dynamic
random-access memory (DRAM).
• All forms of memory provide an array of memory words. Each word has its
own address.
• Interaction with memory is done through a series of load or store instructions.
Load Instruction: Moves a word from main-memory to an internal register within the CPU.
Store Instruction: Moves the content of a register to main-memory.
• A typical instruction-execution cycle based on Von Neumann architecture,
✓ first fetches an instruction from memory and stores that instruction in
the instruction register.
✓ The instruction is then decoded and may cause operands to be fetched
from memory and stored in some internal register.
✓ After the instruction on the operands has been executed, the result may
be stored back in memory.
• Ideally, we want the programs and data to reside in main memory
permanently. This arrangement usually is not possible for the following two
reasons:

4Dept. of ISE Dr. Siddartha B K


BGS Institute of Technology Operating Systems 18CS63
1. Main memory or primary memory is usually too small to store all needed programs
and data permanently.
2. Main memory is a volatile storage device that loses its contents when power is turned
off.
• Thus, most computer systems provide secondary storage as an extension of
main memory.
• The main requirement for secondary storage is that it will be able to hold large
quantities of data permanently.
• The most common secondary-storage device is the magnetic disk.

Figure: Storage structure hierarchy

• The storage hierarchy is organized according to speed, cost and capacity.

• The higher levels are expensive, but they are fast.

• As we move down the hierarchy, the cost per bit generally decreases, whereas
the access time and the storage capacity generally increase.
• In addition to differing in speed and cost, the various storage systems are either
volatile or nonvolatile.
• Volatile storage loses its contents when power is removed.
• Nonvolatile storage retains its contents even when power is off.
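The fetch–decode–execute cycle described above can be sketched as a toy simulation; the three-instruction set (LOAD/ADD/STORE) is an assumption made purely for illustration.

```python
# Minimal sketch of the Von Neumann instruction-execution cycle.
# The tiny LOAD/ADD/STORE instruction set is invented for this example.

def run(program, memory):
    """Fetch each instruction into an 'instruction register', decode it,
    fetch operands from memory into an internal register, execute, and
    store results back to memory."""
    register = 0
    for instruction_register in program:      # fetch
        op, addr = instruction_register       # decode
        if op == "LOAD":                      # memory word -> register
            register = memory[addr]
        elif op == "ADD":                     # operand fetched from memory
            register += memory[addr]
        elif op == "STORE":                   # register -> memory word
            memory[addr] = register
    return memory
```

For example, running `[("LOAD", 0), ("ADD", 1), ("STORE", 2)]` against memory `{0: 2, 1: 3, 2: 0}` leaves the sum 5 at address 2.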



Q) Explain I/O structure with neat diagram
I/O Structure
• A computer consists of CPUs and multiple device controllers (Figure 1.1).
• A controller is in charge of a specific type of device.
• The controller maintains
→ some local buffer and
→ set of special-purpose registers.
• Typically, OS has a device-driver for each controller.
Interrupt-driven I/O:
1) Driver loads the appropriate registers within the controller.
2) Controller examines the contents of registers to determine what action to take.
3) Controller transfers data from the device to its local buffer.
4) Controller informs the driver via an interrupt that it has finished its operation.
5) Driver then returns control to the OS.
• Problem: Interrupt-driven I/O produces high overhead when used for bulk data-transfer. Solution:
Use DMA (direct memory access).
• In DMA, the controller transfers blocks of data from buffer-storage directly to main memory
without CPU intervention.

Figure: I/O Structure
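The overhead difference between the two schemes can be illustrated with a toy model; counting one interrupt per transferred word is a simplification for illustration, not a description of real hardware.

```python
# Toy comparison of interrupt-driven I/O and DMA for a bulk transfer:
# count how often the CPU is interrupted under each scheme.

def interrupt_driven_transfer(data, word_size=1):
    """Controller fills its local buffer one word at a time and
    interrupts the driver after each word."""
    interrupts = 0
    memory = []
    for i in range(0, len(data), word_size):
        memory.extend(data[i:i + word_size])
        interrupts += 1                  # driver notified for every word
    return memory, interrupts

def dma_transfer(data):
    """DMA controller moves the whole block directly to main memory and
    raises a single interrupt on completion."""
    memory = list(data)                  # block copied without CPU work
    return memory, 1                     # one interrupt for the whole block
```

For a 100-word transfer the interrupt-driven scheme interrupts the CPU 100 times, the DMA scheme once, which is why DMA is preferred for bulk data.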



Q) Explain computer system architecture based on number processors
Q) What are multi-processor systems? Explain the major advantages
Q) What are multi-processor systems? Explain the types of multi-processor systems
Q) Explain Clustered systems
Computer System Architecture
1) Single-Processor Systems
2) Multiprocessor Systems
3) Clustered Systems

Single Processor Systems


• The system has only one general-purpose CPU.
• The CPU is capable of executing a general-purpose instruction-set.
• These systems range from PDAs to mainframes.
• Almost all systems have the following additional processors:

1) Special Purpose Processors


➢ Include disk, keyboard, and graphics controllers.

2) General Purpose Processors


➢ Include I/O processors.
• Special-purpose processors run a limited instruction set and do not run user-processes.

Multi-Processor Systems
• These systems have two or more processors which can share:
→ bus → clock → memory/peripheral devices
Advantages:
1. Increased Throughput
➢ By increasing the number of processors, we expect to get more work done in less time.

2. Economy of Scale
➢ These systems are cheaper because they can share
→ peripherals → mass-storage → power-supply.
➢ If many programs operate on same data, they will be stored on one disk & all processors can
share them.
3. Increased Reliability
➢ The failure of one processor will not halt the system.
Two techniques to maintain 'Increased Reliability': graceful degradation & fault tolerance.

1. Graceful degradation – the ability to continue providing service proportional to the
level of surviving hardware.
2. Fault tolerance – the system can suffer the failure of any single component and still
continue operation. The failure is detected, diagnosed, and, if possible, corrected.

• Two types of multiple-processor systems: 1) Asymmetric multiprocessing (AMP) and


2) Symmetric multiprocessing (SMP)
1. Asymmetric Multiprocessing
• This uses a master-slave relationship.
• Each processor is assigned a specific task.
• A master-processor controls the system.
• The other processors look to the master for instruction.
• The master-processor schedules and allocates work to the slave-processors.

2. Symmetric Multiprocessing
• All processors are peers; no master-slave relationship exists between processors.
• Advantages:
1) Many processes can run simultaneously.
2) Processes and resources are shared dynamically among the various processors.
• Disadvantage:
1) Since CPUs are separate, one CPU may be sitting idle while another CPU is overloaded. This
results in inefficiencies.



Clustered Systems
• These systems consist of two or more systems coupled together (Figure 1.7).
• These systems share storage and are closely linked via a LAN.
• Advantage:
1) Used to provide high-availability service.
• High-availability is obtained by adding a level of redundancy in the system.
• Working procedure:
➢ A cluster-software runs on the cluster-nodes.
➢ Each node can monitor one or more other nodes (over the LAN).
➢ If the monitored-node fails, the monitoring-node can
→ take ownership of the failed node's storage and
→ restart the applications running on the failed-node.
➢ The users and clients of the applications see only a brief interruption of service.
• Two types are: 1) Asymmetric and
2) Symmetric
1) Asymmetric Clustering
• One node is in hot-standby mode while the other nodes are running the applications.
• The hot-standby node does nothing but monitor the active-server.
• If the server fails, the hot-standby node becomes the active server.

2) Symmetric Clustering
• Two or more nodes are running applications, and are monitoring each other.
• Advantage:
1) This mode is more efficient, as it uses all of the available hardware.
• It does require that more than one application be available to run.

Figure: General structure of a clustered system




Q) Write a short note on: a) Batch Systems b) Multi-Programmed Systems c) Time-Sharing Systems

Operating System Structure


1) Batch Systems
2) Multi-Programmed Systems
3) Time-Sharing Systems

Batch Systems
• Early computers were physically enormous machines run from a console.
• The common input devices were card readers and tape drives.
• The common output devices were line printers, tape drives, and card punches.
• The user
→ prepared a job which consisted of the program, the data, and control information
→ submitted the job to the computer-operator.
• The job was usually in the form of punch cards.
• At some later time (after minutes, hours, or days), the output appeared.
• To speed up processing, operators batched together jobs with similar needs and ran them through
the computer as a group.
• Disadvantage:
➢ The CPU is often idle, because mechanical I/O devices are much slower than the CPU.

Multi-Programmed Systems
• Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to
execute.
• The idea is as follows:
1) OS keeps several jobs in memory simultaneously (Figure 1.8).
2) OS picks and begins to execute one of the jobs in memory. Eventually, the job may have to wait for some
task, such as an I/O operation, to complete.
3) OS simply switches to, and executes, another job.
4) When that job needs to wait, the CPU is switched to another job, and so on.
5) As long as at least one job needs to execute, the CPU is never idle.
• If several jobs are ready to be brought into memory, and if there is not enough room for all of them,
then the system must choose among them. Making this decision is job scheduling.
• If several jobs are ready to run at the same time, the system must choose among them. Making this
decision is CPU scheduling.

Figure: Memory layout for a multiprogramming system
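The switching idea in steps 1–5 above can be sketched as a small simulation; the job names and burst lists below are invented, and I/O completion is assumed to happen off-CPU before the job rejoins the ready queue.

```python
# Sketch of multiprogramming: the OS keeps several jobs in memory and
# switches the CPU to another job whenever the current one waits for I/O.

from collections import deque

def run_jobs(jobs):
    """jobs: list of (name, bursts) where bursts alternate CPU work and
    I/O waits, e.g. ["cpu", "io", "cpu"]. Returns the CPU execution trace."""
    ready = deque(jobs)
    trace = []
    while ready:                             # CPU is never idle while jobs wait
        name, bursts = ready.popleft()
        trace.append((name, bursts[0]))      # CPU executes one burst
        rest = bursts[1:]
        if rest and rest[0] == "io":
            rest = rest[1:]                  # job waits; I/O completes off-CPU
            if rest:
                ready.append((name, rest))   # job rejoins the ready queue
        elif rest:
            ready.append((name, rest))
    return trace
```

With jobs A = ["cpu", "io", "cpu"] and B = ["cpu"], the CPU runs B while A waits for its I/O, so the trace is A, B, A with no idle gap.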




Q) What is OS? Explain 2 modes/Dual modes of operations


Operating System Operations
• Modern OS is interrupt driven.
• Events are always signaled by the occurrence of an interrupt or a trap.
• A trap is a software generated interrupt caused either by
→ error (for example division by zero) or
→ request from a user-program that an OS service be performed.
• For each type of interrupt, separate segments of code in the OS determine what action should be
taken.
• ISR (Interrupt Service Routine) is provided that is responsible for dealing with the interrupt.

Dual Mode Operation


• Problem: We must be able to differentiate between the execution of
→ OS code and
→ user-defined code.
Solution: Most computers provide hardware-support.
• Two modes of operation (Figure 1.9): 1) User mode and
2) Kernel mode
• A mode bit is a bit added to the hardware of the computer to indicate the current mode:
i.e. kernel (0) or user (1)

Figure: Transition from user to kernel mode

• Working principle:
1) At system boot time, the hardware starts in kernel-mode.
2) The OS is then loaded and starts user applications in user-mode.
3) Whenever a trap or interrupt occurs, the hardware switches from user-mode to kernel-mode
(that is, changes the state of the mode bit to 0).
4) The system always switches to user-mode (by setting the mode bit to 1) before passing
control to a user-program.
• Dual mode protects
→ OS from errant users and
→ errant users from one another.
• Privileged instruction is executed only in kernel-mode.
• If an attempt is made to execute a privileged instruction in user-mode, the hardware treats it as illegal
and traps it to the OS.
• System calls are made by a user program to ask the OS to perform tasks on behalf of the user
program.
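The mode bit and trap behaviour described above can be modelled in a few lines; the instruction name below is invented, and this is an illustration of the idea, not of how hardware implements it.

```python
# Illustrative model of dual-mode operation: mode bit 0 = kernel mode,
# mode bit 1 = user mode, as in the notes. Instruction names are invented.

KERNEL, USER = 0, 1

class TrapToOS(Exception):
    """Raised when user-mode code attempts a privileged instruction."""

class CPU:
    def __init__(self):
        self.mode = KERNEL               # hardware starts in kernel mode at boot

    def execute_privileged(self, instruction):
        if self.mode == USER:            # hardware treats this as illegal
            raise TrapToOS(f"illegal: {instruction} in user mode")
        return f"{instruction} done"

    def system_call(self, instruction):
        """A trap switches to kernel mode, the OS does the work on the
        user program's behalf, then the mode returns to user (bit = 1)."""
        self.mode = KERNEL
        try:
            return self.execute_privileged(instruction)
        finally:
            self.mode = USER
```

Attempting `execute_privileged` while the mode bit is 1 raises the trap; routing the same request through `system_call` succeeds and leaves the CPU back in user mode.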

Q) Explain OS responsibilities in connection with


1. Process management
2. Memory management
3. Mass-storage management
4. File management
5. Cache
6. I/O
7. Protection and security

Process Management
• The OS is responsible for the following activities:
1. Creating and deleting both user and system processes
2. Suspending and resuming processes
3. Providing mechanisms for process synchronization
4. Providing mechanisms for process communication
5. Providing mechanisms for deadlock handling
• A process needs following resources to do a task:
→ CPU
→ memory and
→ files.
• The resources are allocated to process
→ when the process is created or
→ while the process is running.
• When the process terminates, the OS reclaims all the reusable resources.
• A program by itself is not a process;
1) A program is a passive entity (such as the contents of a file stored on disk).
2) A process is an active entity.
• Two types of process:
1) A single-threaded process has one PC (program counter), which specifies the location of
the next instruction to be executed.
2) A multi-threaded process has one PC per thread, each specifying the location of the next
instruction to execute in that thread.
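A minimal example of a multi-threaded process: each thread advances its own program counter through the code independently while sharing the process's memory.

```python
# Two threads of one process sum halves of a list: each thread has its
# own locals (its own "PC"), but both share the process's memory.

import threading

def partial_sum(data, lo, hi, results, index):
    # This function body is executed independently by each thread.
    results[index] = sum(data[lo:hi])

def threaded_sum(data):
    results = [0, 0]                       # shared process memory
    mid = len(data) // 2
    t1 = threading.Thread(target=partial_sum, args=(data, 0, mid, results, 0))
    t2 = threading.Thread(target=partial_sum, args=(data, mid, len(data), results, 1))
    t1.start(); t2.start()
    t1.join(); t2.join()                   # wait for both threads to finish
    return results[0] + results[1]
```

Both threads execute the same passive program text, but as parts of one active process with shared data.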
Memory Management
• The OS is responsible for the following activities:
▪ Keeping track of which parts of memory are currently being used and by whom
▪ Deciding which processes are to be loaded into memory when memory space becomes available
▪ Allocating and de-allocating memory space as needed.
▪ Main memory is a large array of bytes, ranging in size from hundreds of thousands to billions.



▪ Each byte has its own address.
• The CPU
→ reads instructions from main memory during the instruction-fetch cycle.
→ reads/writes data from/to main-memory during the data-fetch cycle.
• To execute a program:
1) The program will be
→ loaded into memory and
→ mapped to absolute addresses.
2) Then, program accesses instructions & data from memory by generating absolute addresses.
3) Finally, when program terminates, its memory-space is freed.
• To improve CPU utilization, several programs are kept in memory.
• Selection of a memory-management scheme depends on hardware-design of the system.

Storage Management
File-System Management
Mass-Storage Management
Caching
File System Management
• The OS is responsible for following activities:
1) Creating and deleting files.
2) Creating and deleting directories.
3) Supporting primitives for manipulating files & directories.
4) Mapping files onto secondary storage.
5) Backing up files on stable (non-volatile) storage media.
• Computers store information on different types of physical media,
e.g. magnetic disks and optical disks.
• Each medium is controlled by a device (e.g. disk drive).
• The OS
→ maps files onto physical media and
→ accesses the files via the storage devices
• File is a logical collection of related information.
• File consists of both program & data.
• Data files may be numeric, alphabets or binary.
• When multiple users have access to files, access control (read, write) must be specified.
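The file activities listed above map onto ordinary OS system calls; a minimal sketch using a temporary directory (the file name is invented for the example):

```python
# Create, write, read and delete a file via OS-provided calls.

import os
import tempfile

def file_roundtrip(text):
    with tempfile.TemporaryDirectory() as d:    # create a directory
        path = os.path.join(d, "note.txt")
        with open(path, "w") as f:              # create and write a file
            f.write(text)
        with open(path) as f:                   # read it back
            contents = f.read()
        os.remove(path)                         # delete the file
        assert not os.path.exists(path)
    return contents
```

Each `open`, `write`, `read` and `remove` here ultimately becomes a system call that the OS maps onto the physical storage medium.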

Mass Storage Management


• The OS is responsible for following activities:
Free-space management
Storage allocation and



Disk scheduling.
• Usually, disks used to store
→ data that does not fit in main memory or
→ data that must be kept for a "long" period of time.
• Most programs are stored on disk until loaded into memory.
• The programs use the disk as both the source and destination of their processing.
• The overall speed of computer operation may hinge on the disk subsystem and its algorithms.

Caching
• Caching is an important principle of computer systems.
• Information is normally kept in some storage system (such as main memory).
• As it is used, it is copied into a faster storage system called the cache on a temporary basis.
• When we need a particular piece of information:
1. We first check whether the information is in the cache.
2. If information is in cache, we use the information directly from the cache.
3. If the information is not in the cache, we use it from the source,
putting a copy in the cache under the assumption that we will need it again soon.
• In addition, internal programmable registers, such as index registers, provide high-speed cache for
main memory.
• The compiler implements the register-allocation and register-replacement algorithms to decide which
information to keep in registers and which to keep in main memory.
• Most systems have an instruction cache to hold the instructions expected to be executed next.
• Most systems have one or more high-speed data caches in the memory hierarchy
• Because caches have limited size, cache management is an important design problem.
• Careful selection of the cache size and replacement policy can result in greatly increased performance.
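The three lookup steps above can be sketched as a tiny cache in front of a slower source; the fixed capacity and FIFO replacement policy are assumptions made for illustration.

```python
# Toy cache implementing: 1) check the cache, 2) on a hit use the cached
# copy, 3) on a miss fetch from the source and keep a copy.

from collections import OrderedDict

class Cache:
    def __init__(self, source, capacity=2):
        self.source = source              # the slower backing storage
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.store:             # steps 1-2: found in cache
            self.hits += 1
            return self.store[key]
        self.misses += 1                  # step 3: go to the source,
        value = self.source[key]          #   keep a copy for next time
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict oldest (FIFO policy)
        self.store[key] = value
        return value
```

With a capacity of 2, repeatedly reading the same key hits the cache, while touching a third key evicts the oldest entry, which is exactly why the replacement policy matters for performance.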

I/O Systems
• A memory-management component that includes buffering, caching, and spooling.
• A general device-driver interface.
• Drivers for specific hardware devices.

Protection and Security


• Protection is a mechanism for controlling access of processes or users to resources defined by OS.
• Security means defense of the system against internal and external attacks.
• The attacks include
→ viruses and worms
→ DoS (denial-of-service) attacks
→ identity theft.



• Protection and security require the system to be able to distinguish among all its users.
User identities (user IDs) include name and associated number, one per user.
➢ User IDs are associated with all files (or processes) of that user to determine access control.
Group identifier (group ID): can be used to define a group name and the set of users belonging
to that group.
➢ A user can be in one or more groups, depending on operating-system design decisions.

Q) Write a short on Distributed systems


Distributed System
• This is a collection of physically separate, possibly heterogeneous computer-systems.
• Systems are networked to give access to the various resources.
• Access to a shared resource increases
→ computation speed
→ functionality
→ data availability and
→ reliability
• A network is a communication path between two or more systems.
• Networks vary by the
→ protocols used
→ distances between nodes and
→ transport media.
• Common network protocols are
→ TCP/IP
→ ATM.
• Networks are characterized based on the distances between their nodes.
→ A local-area network (LAN) connects computers within a building.
→ A wide-area network (WAN) usually links buildings, cities, or countries.
→ A metropolitan-area network (MAN) could link buildings within a city.
• The media to carry networks are equally varied. They include
→ copper wires,
→ fiber strands, and
→ wireless transmissions.

Q) What is Caching? Explain cache coherency


• Caching is an important principle of computer systems.

• Information is normally kept in some storage system (such as main memory).

• As it is used, it is copied into a faster storage system— the cache—as temporary data.



• When a particular piece of information is required, first we check whether it is in the cache.

• If it is, we use the information directly from the cache

• If it is not in cache, we use the information from the source, putting a copy in the cache under
the assumption that we will need it again soon.
• Because caches have limited size, cache management is an important design problem.
• Careful selection of the cache size and page replacement policy can result in greatly increased
performance.
• In a hierarchical storage structure, the same data may appear in different levels of the storage
system.

• For example, suppose an integer A is to be retrieved from a magnetic disk by a processing program.
The operation proceeds by first issuing an I/O operation to copy the disk block on which A
resides to main memory. This operation is followed by copying A to the cache and to an
internal register. Thus, the copy of A appears in several places: on the magnetic disk, in main
memory, in the cache, and in an internal register.
• In a multiprocessor environment, in addition to internal registers, each of the CPUs also
contains a local cache. In such an environment, a copy of A may exist simultaneously in
several caches.
• Since the various CPUs can all execute concurrently, we must make sure that any update done to
the value of A in one cache is immediately reflected in all other caches where A resides. This
requirement is called cache coherency, and it is usually a hardware problem (handled below the
operating-system level).
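The coherency problem can be illustrated with a toy model in which a simple write-invalidate step stands in for the hardware protocol; the dictionaries and value A are invented for the sketch.

```python
# Two CPUs cache the same value A. Without coherency, an update in one
# cache leaves a stale copy in the other; write-invalidate fixes this.

def update_without_coherency(main_memory, caches, cpu, value):
    caches[cpu]["A"] = value              # only this CPU's copy changes
    return caches

def update_with_invalidate(main_memory, caches, cpu, value):
    caches[cpu]["A"] = value
    main_memory["A"] = value              # write the new value through
    for other in caches:
        if other != cpu:
            caches[other].pop("A", None)  # invalidate stale copies
    return caches

def read(main_memory, caches, cpu):
    # A CPU reads its own cached copy if present, else main memory.
    return caches[cpu].get("A", main_memory["A"])
```

Without the invalidate step, CPU 1 keeps reading the old value of A after CPU 0 updates it; with invalidation, CPU 1's stale copy is dropped and the next read sees the new value.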



Q) Write a short note on Real-Time Embedded Systems, Multimedia Systems, Handheld
Systems
Special Purpose Systems
1. Real-Time Embedded Systems
2. Multimedia Systems
3. Handheld Systems

Real-Time Embedded Systems


• Embedded computers are the most prevalent form of computers in existence.
• These devices are found everywhere, from car engines and manufacturing robots to VCRs and
microwave ovens.
• They tend to have very specific tasks.
• The systems they run on are usually primitive, and so the operating systems provide limited features.
• Usually, they prefer to spend their time monitoring & managing hardware devices such as
→ automobile engines and
→ robotic arms.
• Embedded systems almost always run real-time operating systems.
• A real-time system is used when rigid time requirements have been placed on the operation of a
processor.

Multimedia Systems
• Multimedia data consist of audio and video files as well as conventional files.
• These data differ from conventional data in that multimedia data must be delivered(streamed)
according to certain time restrictions.
• Multimedia describes a wide range of applications. These include
→ audio files such as MP3
→ DVD movies
→ video conferencing
→ live webcasts of speeches

Handheld Systems
• Handheld systems include
→ PDAs and
→ cellular telephones.
• Main challenge faced by developers of handheld systems: Limited size of devices.
• Because of small size, most handheld devices have a
→ small amount of memory,
→ slow processors, and
→ small display screens.




Q) Write a short note on: a) Traditional Computing, b) Client-Server Computing, c) Peer-to-Peer
Computing, d) Web-Based Computing

Computing Environments
1. Traditional Computing
2. Client-Server Computing
3. Peer-to-Peer Computing
4. Web-Based Computing

Traditional Computing
• Used in office environment:
➢ PCs connected to a network, with servers providing file and print services.

• Used in home networks:


➢ At home, most users had a single computer with a slow modem.
➢ Some homes have firewalls to protect their networks from security breaches.

• Web technologies are stretching the boundaries of traditional computing.


➢ Companies establish portals, which provide web accessibility to their internal servers.
➢ Network computers are terminals that understand web computing.
➢ Handheld PDAs can connect to wireless networks to use company's web portal.
• Systems were either batch or interactive.
1) Batch system processed jobs in bulk, with predetermined input.
2) Interactive systems waited for input from users.

Client-Server Computing
• Servers can be broadly categorized as : 1) Compute servers and
2) File servers
1) Compute-server system provides an interface to which a client can send a request to perform an
action (for example, read data).
➢ In response, the server executes the action and sends back results to the client.

2) File-server system provides a file-system interface where clients can create, read, and delete files.
➢ For example: web server that delivers files to clients running web browsers.

Figure: General structure of a client–server system.




Peer-to-Peer Computing
• All nodes are considered peers, and each may act as either a client or a server (Figure 1.4).
• Advantage:
In a client-server system, the server is a bottleneck;
but in a peer-to-peer system, services can be provided by several nodes distributed
throughout the network.
• A node must first join the network of peers.
• Determining what services are available is done in one of two general ways:
1) When a node joins a network, it registers its service with a centralized lookup service on the network.
➢ Any node desiring a specific service first contacts this centralized lookup service to determine which
node provides the service.
2) A peer broadcasts a request for the service to all other nodes in the network. The node (or nodes)
providing that service responds to the peer.

Figure: Peer-to-peer system with no centralized services
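The centralized lookup scheme (way 1 above) can be sketched as a toy registry; the node and service names below are invented for illustration.

```python
# Toy centralized lookup service for a peer-to-peer network: peers
# register the services they provide, and other peers query by service.

class LookupService:
    def __init__(self):
        self.registry = {}

    def register(self, node, service):
        # A node joining the network registers its service here.
        self.registry.setdefault(service, []).append(node)

    def find(self, service):
        # A peer wanting a service asks which nodes provide it.
        return self.registry.get(service, [])

network = LookupService()
network.register("node-1", "file-sharing")
network.register("node-2", "file-sharing")
network.register("node-2", "printing")
```

A query for "file-sharing" returns both registered peers, so the requesting node can contact either directly, avoiding the single-server bottleneck of a client-server design.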




Chapter-2
Operating System Services
Q) What is OS? Explain OS services?
• An OS provides an environment for the execution of programs.
• It provides services to:
1. Users
2. Programs (the system)

Services useful for the users


1. User Interface
➢ Almost all OS have a user-interface (UI).
➢ Different interfaces are:

• CLI (Command Line Interface)


¤ This uses text commands and a method for entering them.
• Batch Interface
¤ Commands & directives to control those commands are entered into files, and those
files are executed.
• GUI (Graphical User Interface)
¤ The interface is a window system with a pointing device used to
→ direct I/O
→ choose from menus and
→ make selections.
2. Program Execution
➢ The system must be able to
→ load a program into memory and
→ run the program.
➢ The program must be able to end its execution, either normally or abnormally.

3. I/O Operations
➢ The OS must provide a means to do I/O operations because users cannot control I/O devices
directly.
➢ For specific devices, special functions may be desired (ex: to blank CRT screen).

4. File-System Manipulation


➢ Programs need to
→ read & write files (or directories)
→ create & delete files
→ search for a given file and
→ allow or deny access to files.
5. Communications
➢ In some situations, one process needs to communicate with another process.
➢ Communications may be implemented via
1. Shared memory or
2. Message passing
➢ In message passing, packets of information are moved between processes by OS.

6. Error Detection
➢ Errors may occur in
→ CPU & memory-hardware (ex: power failure)
→ I/O devices (ex: lack of paper in the printer) and
→ user program (ex: arithmetic overflow)
➢ For each type of error, OS should take appropriate action to ensure correct & consistent
computing.

Useful for efficient operation of the system are:


1) Resource Allocation
➢ When multiple users are logged on the system at the same time, resources must be allocated to
each of them.
➢ The OS manages different types of resources.
➢ Some resources (say CPU cycles) may have special allocation code.
Other resources (say I/O devices) may have general request & release code.
2) Accounting
➢ We want to keep track of
→ which users use how many resources and
→ which kinds of resources.
➢ This record keeping may be used for
→ accounting (so that users can be billed) or
→ gathering usage-statistics.
3) Protection
➢ When several separate processes execute concurrently, it should not be possible for one
process to interfere with the others or with the OS itself.
➢ Protection involves ensuring that all access to resources is controlled.


POPULAR USER INTERFACES


• Two ways that users interface with the OS:
1) Command Interpreter (Command-line interface)
2) Graphical User Interface (GUI)

1) Command Interpreter
• Main function:
To get and execute the next user-specified command
• The commands are used to manipulate files i.e. create, copy, print, execute, etc.
• Two general ways to implement:
1) Command interpreter itself contains code to execute command.
2) Commands are implemented through system programs. This is used by UNIX.
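In the UNIX approach above, each command is a separate system program that the interpreter loads and runs. A minimal sketch in C (assuming a POSIX system; the name run_command and the 16-argument limit are our own choices for illustration):

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run one command line by forking a child that exec()s the
 * corresponding system program; returns the child's exit status. */
int run_command(char *line) {
    char *argv[16];
    int argc = 0;
    /* Split the line into whitespace-separated tokens. */
    for (char *tok = strtok(line, " \t\n"); tok && argc < 15;
         tok = strtok(NULL, " \t\n"))
        argv[argc++] = tok;
    argv[argc] = NULL;
    if (argc == 0) return 0;          /* empty line: nothing to do */

    pid_t pid = fork();
    if (pid == 0) {                   /* child: become the command */
        execvp(argv[0], argv);
        _exit(127);                   /* reached only if exec failed */
    }
    int status;                       /* parent: wait for the child */
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

An interactive interpreter would simply call run_command in a loop over lines read from the terminal.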

2) Graphical User Interfaces


• Commands are not typed; instead, a mouse-based window and menu system is used.
• The mouse is used to move a pointer to the position of an icon that represents a
→ file
→ program or
→ folder
• By clicking on the icon, the program is invoked.

Figure: The Bourne shell command interpreter in Solaris 10

Figure: The iPad touchscreen


Q) What are system calls? Explain types of system calls


System Calls
• These provide an interface to the OS services.
• These are available as routines written in C and C++.
• The programmers design programs according to an API. (API=application programming
interface).
• The API
→ defines a set of functions that are available to the programmer .
→ includes the parameters passed to functions and the return values.

Types of System Calls


1) Process control
2) File management
3) Device management
4) Information maintenance
5) Communications

Process Control
• System calls used:
➢ end, abort
➢ load, execute
➢ create process, terminate process
➢ get process attributes, set process attributes
➢ wait for time
➢ wait event, signal event
➢ allocate and free memory
• A running program needs to be able to halt its execution either normally (end) or abnormally (abort).


• If a program runs into a problem, an error message may be generated and dumped into a file.
This file can be examined by a debugger to determine the cause of the problem.
• After a program terminates, the OS must transfer control back to the invoking command interpreter.
➢ Command interpreter then reads next command.
➢ In interactive system, the command interpreter simply continues with next command.
➢ In GUI system, a pop-up window will request action from user.
How to deal with a new process?
• A process executing one program can load and execute another program.
• Where to return control when the loaded program terminates?
The answer depends on the existing program:
1) If control returns to the existing program when the new program terminates, we must save the
memory image of the existing program. (Thus, we have effectively created a mechanism for one
program to call another program).
2) If both programs continue concurrently, we have created a new process to be multiprogrammed.
• We should be able to control the execution of a process. i.e. we should be able to determine and reset
the attributes of a process such as:
→ job's priority or
→ maximum execution time
• We may also want to terminate process that we created if we find that it
→ is incorrect or
→ is no longer needed.
• We may need to wait for processes to finish their execution.
We may want to wait for a specific event to occur.
• The processes should then signal when that event has occurred.

File Management
• System calls used:
➢ create file, delete file
➢ open, close
➢ read, write, reposition
➢ get file attributes, set file attributes
• Working procedure:
1. We need to create and delete files.
2. Once the file is created,
→ we need to open it and to use it.
→ we may also read or write .


3. Finally, we need to close the file.


• We need to be able to
→ determine the values of file-attributes and
→ reset the file-attributes if necessary.
• File attributes include
→ file name
→ file type
→ protection codes and
→ accounting information.
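The working procedure above (create, open, write, read, close, delete) maps directly onto the POSIX file-system calls. A minimal sketch (the file path and message are our own choices for illustration):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Create a file, write a message, read it back, then delete it.
 * Returns 0 if the round trip succeeds. */
int file_demo(const char *path) {
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644); /* create */
    if (fd < 0) return -1;
    if (write(fd, "hello", 5) != 5) { close(fd); return -1; } /* write */
    close(fd);                                                /* close */

    char buf[6] = {0};
    fd = open(path, O_RDONLY);                                /* open  */
    if (fd < 0) return -1;
    if (read(fd, buf, 5) != 5) { close(fd); return -1; }      /* read  */
    close(fd);
    unlink(path);                                             /* delete */
    return strcmp(buf, "hello") == 0 ? 0 : -1;
}
```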

Device Management
• System calls used:
➢ request device, release device;
➢ read, write, reposition;
➢ get device attributes, set device attributes;
➢ logically attach or detach devices.
• A program may need additional resources to execute.
• Additional resources may be
→ memory
→ tape drives or
→ files.
• If the resources are available, they can be granted, and control can be returned to the user program; If the
resources are unavailable, the program may have to wait until sufficient resources are available.
• Files can be thought of as virtual devices. Thus, many of the system calls used for files are also used
for devices.
• In multi-user environment,
1. We must first request the device, to ensure exclusive use of it.
2. After we are finished with the device, we must release it.
• Once the device has been requested (and allocated), we can read and write the device.
• Due to lot of similarity between I/O devices and files, OS (like UNIX) merges the two into a combined
file-device structure.
• UNIX merges I/O devices and files into a combined file-device structure.


Information Maintenance
• System calls used:
➢ get time or date, set time or date
➢ get system data, set system data
➢ get process, file, or device attributes
➢ set process, file, or device attributes
• Many system calls exist simply for the purpose of transferring information between the user program
and the OS.
For ex,
1. Most systems have a system call to return
→ current time and
→ current date.
2. Other system calls may return information about the system, such as
→ number of current users
→ version number of the OS
→ amount of free memory or disk space.
3. The OS keeps information about all its processes, and there are system
calls to access this information.
Communication
• System calls used:
➢ create, delete communication connection
➢ send, receive messages
➢ transfer status information
➢ attach or detach remote devices
• Two models of communication.
1) Message-passing model and 2) Shared-memory model


Q) What is OS? Explain types of system programs


System Programs
• They provide a convenient environment for program development and execution. (System
programs also known as system utilities).
• They can be divided into six categories:
1. File Management
➢ These programs manipulate files i.e. create, delete, copy, and rename files.
2. Status Information
➢ Some programs ask the system for
→ date (or time)
→ amount of memory(or disk space) or
→ no. of users.
➢ This information is then printed to the terminal (or other output-device or file).
3. File Modification
➢ Text editors can be used to create and modify the content of files stored on disk.
4. Programming Language Support
➢ Compilers, assemblers, and interpreters for common programming-languages (such as C, C++) are
provided to the user.
5. Program Loading & Execution
➢ The system may provide
→ absolute loaders
→ relocatable loaders
→ linkage editors and
→ overlay loaders.
➢ Debugging-systems are also needed.
6. Communications
➢ These programs are used for creating virtual connections between
→ processes
→ users and
→ computer-systems.
➢ They allow users to
→ browse web-pages
→ send email or
→ log-in remotely.
• Most OSs are supplied with programs that solve common problems or perform common
operations. Examples include
→ web-browsers
→ word-processors and spreadsheets

Operating System Design & Implementation


Design Goals
• The first problem in designing a system is to
→ define goals and
→ define specifications.
• The design of the system will be affected by
→ Choice of hardware and
→ Type of system such as
1) batch or time shared
2) single user or multiuser
• Two basic groups of requirements: 1) User goals and
2) System goals
1) User Goals
• The system should be
→ Convenient to use
→ Easy to learn and to use
→ Reliable, safe, and fast.
2) System Goals
• The system should be
→ Easy to design
→ implement, and maintain
→ flexible, reliable, error free, and efficient.

Mechanisms & Policies


• Mechanisms determine how to do something.
• Policies determine what will be done.
• Separating policy and mechanism is important for flexibility.
• Policies change over time; mechanisms should be general.
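One way to make the separation concrete (names and quantum values here are purely illustrative, not from any real kernel): the mechanism is a general routine for enforcing a time quantum; the policy is the decision of which value to use.

```c
/* Mechanism: a general facility for setting the time quantum.
 * It enforces HOW a limit is applied, not what the limit is. */
static int time_quantum_ms = 0;

void set_time_quantum(int ms) { time_quantum_ms = ms; }
int  get_time_quantum(void)   { return time_quantum_ms; }

/* Policy: WHAT the quantum should be.  An interactive system may pick
 * a short quantum; a batch system a long one.  The policy can change
 * over time without touching the mechanism above. */
void apply_interactive_policy(void) { set_time_quantum(10); }
void apply_batch_policy(void)       { set_time_quantum(100); }
```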

Implementation
• OSs are nowadays written in higher-level languages like C/C++.
• Advantages of higher-level languages:
1. Faster development and
2. OS is easier to port.
• Disadvantages of higher-level languages:
1) Reduced speed and
2) Increased storage requirements.


Operating System Structure


i. Simple Structure
ii. Layered Approach
iii. Micro-kernels
iv. Modules
i. Simple Structure
• These OSs are small, simple, limited systems.
• For example: MS-DOS and UNIX.
1. MS-DOS was written to provide the most functionality in the least space.
➢ Disadvantages:
a. It was not divided into modules carefully
b. The interfaces and levels of functionality are not well separated.

Figure: MS-DOS layer structure

2. UNIX was initially limited by hardware functionality.


➢ Two parts of UNIX (Figure 1.18): 1) Kernel and
2) System programs.
➢ The kernel is further separated into a series of interfaces and device drivers.
➢ Everything below the system-call interface and above the physical hardware is the kernel.
➢ The kernel provides following functions through system calls:
→ file system
→ CPU scheduling and
→ memory management.
➢ Disadvantage:
1) Difficult to enhance, as changes in one section badly affects other areas.


Figure 1.18 Traditional UNIX system structure

ii. Layered Approach


• The OS is divided into a number of layers.
• Each layer is built on the top of another layer.
• The bottom layer is the hardware.
The highest is the user interface (Figure 1.19).
• A layer is an implementation of an abstract-object.
i.e. The object is made up of
→ data and
→ operations that can manipulate the data.
• Higher-layer
→ does not need to know how lower-layer operations are implemented
→ needs to know only what lower-layer operations do.
• Advantage:
1) Simplicity of construction and debugging.
• Disadvantages:
1) Less efficient than other types.
2) Appropriately defining the various layers (since a layer can use only lower layers,
careful planning is necessary).

Figure: A layered OS


iii. Micro-Kernels
• Main function:
To provide a communication facility between
→ client program and
→ various services running in user-space.
• Communication is provided by message passing (Figure 1.20).
• All non-essential components are
→ removed from the kernel and
→ implemented as system- & user-programs.
• Advantages:
1. Ease of extending the OS. (New services are added to user space without
modification of the kernel).
2. Easier to port from one hardware design to another.
3. Provides more security & reliability.(If a service fails, rest of the OS
remains untouched.).
4. Provides minimal process and memory management.
• Disadvantage:
1) Performance decreases due to increased system function overhead.

Figure 1.20 Architecture of a typical microkernel

iv. Modules
• The kernel has
→ set of core components and
→ dynamic links in additional services during boot time( or run time).
• Seven types of modules in the kernel (Figure 1.21):
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats

5. STREAMS modules

6. Miscellaneous
7. Device and bus drivers
• Example: Mac OS X uses a hybrid, module-based structure. Its top layers include
→ application environments and
→ a set of services providing a graphical interface to applications.
• Kernel environment consists primarily of
→ Mach microkernel and
→ BSD kernel.
• Mach provides
→ memory management;
→ support for RPCs & IPC and
→ thread scheduling.
• BSD component provides
→ BSD command line interface
→ support for networking and file systems and
→ implementation of POSIX APIs
• The kernel environment provides an I/O kit for development of
→ device drivers and
→ dynamic loadable modules (which Mac OS X refers to as kernel extensions).

Figure: Solaris loadable modules


Virtual Machines
Q) What are VMs? Explain its implementation and benefits
Q) What are virtual machines? Explain with an example
Q) What are VMs? Explain the implementation of JVM, What is the advantage of
JIT

IDEA : is to abstract the hardware of a single computer (the CPU, memory, disk drives, network
interface cards, and so forth) into several different execution environments, thereby creating the
illusion that each separate execution environment is running its own private computer.
• Creates an illusion that a process has its own processor with its own memory.
• Host OS is the main OS installed in system and the other OS installed in the system are called guest
OS.

Figure: System modes. (A) Non-virtual machine (b) Virtual machine


Virtual machines first appeared as the VM Operating System for IBM mainframes in 1972.

Implementation
• Although the virtual-machine concept is useful, it is difficult to implement.
• Additional work is required to provide an exact duplicate of the underlying machine.

• Remember that the underlying machine has two modes: user mode and kernel mode.
• The virtual-machine software can run in kernel mode, since it is the operating system. The virtual
machine itself can execute in only user mode.

Benefits
• Able to share the same hardware and run several different execution environments(OS).

• Host system is protected from the virtual machines and the virtual machines are protected from
one another.

• Errors in one OS will not affect the other guest systems and host systems.
• Even though the virtual machines are separated from one another, software resources can be
shared among them.
• Useful during system development: user programs are executed in one virtual machine while
system development is done in another environment.
• It is a perfect vehicle for OS research and development: multiple OSs can run on the
developer's system concurrently. This helps in rapid porting and testing of the
programmer's code in different environments.
• System consolidation – two or more systems are made to run in a single system.

Examples
VMware
• VMware is a popular commercial application that abstracts Intel 80X86 hardware
into isolated virtual machines.
• The virtualization tool runs in the user-layer on top of the host OS.
• VMware runs as an application on a host operating system such as Windows or Linux

• In the scenario below, Linux is running as the host operating system; FreeBSD, Windows NT, and Windows
XP are running as guest operating systems.

• The virtualization layer is the heart of VMware, as it abstracts the physical hardware into isolated virtual
machines running as guest operating systems.

• Each virtual machine has its own virtual CPU, memory, disk drives, network interfaces, and so forth.

Figure: VMware architecture

The Java Virtual Machine(JVM)


• Java is a popular object-oriented programming language introduced by Sun Microsystems in 1995.
• In addition to a language specification and a large API library, Java also provides a specification for
a Java virtual machine, or JVM.

• Java objects are specified with the class construct; a Java program
consists of one or more classes.

• For each Java class, the compiler produces an architecture-neutral bytecode output
• The JVM consists of a class loader and a Java interpreter that executes the architecture-neutral
bytecodes
• The class loader loads the compiled .class files from both the Java program and the Java API for
execution by the Java interpreter.
• After a class is loaded, the verifier checks that the .class file is valid Java bytecode and does not
overflow or underflow the stack.
• The JVM automatically manages memory by performing garbage collection-the practice of
reclaiming memory from objects no longer in use and returning it to the system.
• Much research focuses on garbage collection algorithms for increasing performance.
• The JVM may be implemented in software on top of a host operating system, such as Windows,
Linux, or Mac OS X, or as part of a web browser.
• Alternatively, the JVM may be implemented in hardware on a chip specifically designed to run
Java programs.
• If the JVM is implemented in software, the Java interpreter interprets the bytecode operations one at
a time.
• A faster software technique is to use a just-in-time (JIT) compiler: the first time a method is invoked,
its bytecodes are compiled to native machine code and cached, so later invocations run at native speed.

Figure: JVM

System Boot
• The operating system must be made available to the hardware so the hardware can start it.
• A small piece of code, the bootstrap loader, locates the kernel, loads it into memory, and starts it.
• Sometimes this is a two-step process: a boot block at a fixed location loads the bootstrap loader,
which in turn loads the kernel.
• When power is initialized on the system, execution starts at a fixed memory location.
• Firmware is used to hold the initial boot code.

Module-2
Process Concepts
Q) What is process? Explain the states with process state diagram
• A process is a program in execution.
• It also includes
1) Program Counter to indicate the current activity.
2) Registers Content of the processor.
3) Process Stack contains temporary data.
4) Data Section contains global variables.
5) Heap is memory that is dynamically allocated during process run time.
• A program by itself is not a process.
1) A process is an active-entity.
2) A program is a passive-entity such as an executable-file stored on disk.
• A program becomes a process when an executable-file is loaded into memory.
• If you run many copies of a program, each is a separate process.
• The text-sections are equivalent, but the data-sections vary.

Figure 1.23 Process in memory


Process State
• As a process executes, it changes state.
• Each process may be in one of the following states (Figure 1.24):
1. New: The process is being created.
2. Running: Instructions are being executed.
3. Waiting: The process is waiting for some event to occur (such as I/0
completions).

4. Ready: The process is waiting to be assigned to a processor.


5. Terminated: The process has finished execution.
• Only one process can be running on any processor at any instant.

Figure 1.24 Diagram of process state
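The legal transitions in the state diagram can be encoded in a small C function (the enum encoding is our own, for illustration):

```c
typedef enum { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED } pstate;

/* Returns 1 if the transition appears in the process-state diagram. */
int valid_transition(pstate from, pstate to) {
    switch (from) {
    case P_NEW:     return to == P_READY;            /* admitted            */
    case P_READY:   return to == P_RUNNING;          /* scheduler dispatch  */
    case P_RUNNING: return to == P_READY             /* interrupt           */
                        || to == P_WAITING           /* I/O or event wait   */
                        || to == P_TERMINATED;       /* exit                */
    case P_WAITING: return to == P_READY;            /* I/O or event done   */
    default:        return 0;                        /* terminated: none    */
    }
}
```

Note that a waiting process cannot go directly to running; it must pass through the ready state first.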

Q) What is Process? Explain Process control block(PCB) or Task control block


(TCB)
Process Control Block or Task Control Block

• In OS, each process is represented by a PCB (Process Control Block).


Figure: Process control block (PCB) or TCB
• PCB contains following information about the process:
1. Process State
➢ The current state of process may be
→ new
→ ready
→ running


→ waiting or
→ halted.
2. Program Counter
➢ This indicates the address of the next instruction to be executed for the process.
3. CPU Registers
➢ These include
→ accumulators (AX)
→ index registers (SI, DI)
→ stack pointers (SP) and
→ general-purpose registers (BX, CX, DX).
4. CPU Scheduling Information
➢ This includes
→ priority of process
→ pointers to scheduling-queues and
→ scheduling-parameters.
5. Memory Management Information
➢ This includes
→ value of base- & limit-registers and
→ value of page-tables (or segment-tables).
6. Accounting Information
➢ This includes
→ amount of CPU time
→ time-limit and
→ process-number.
7. I/O Status Information
➢ This includes
→ list of I/O devices
→ list of open files.
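These fields can be grouped into a C structure; the following is only an illustrative sketch (field names and sizes are ours), and real kernels keep far more state per process:

```c
#include <stdint.h>

typedef enum { S_NEW, S_READY, S_RUNNING, S_WAITING, S_HALTED } proc_state;

/* Illustrative Process Control Block: one entry per process. */
struct pcb {
    int        pid;              /* process number (accounting info)  */
    proc_state state;            /* new/ready/running/waiting/halted  */
    uintptr_t  program_counter;  /* address of next instruction       */
    long       registers[8];     /* saved CPU registers               */
    int        priority;         /* CPU-scheduling information        */
    uintptr_t  base, limit;      /* memory-management registers       */
    long       cpu_time_used;    /* accounting information            */
    int        open_files[16];   /* I/O status: open file descriptors */
    struct pcb *next;            /* link for a scheduling queue       */
};

/* Initialize a PCB for a newly created process. */
void pcb_init(struct pcb *p, int pid) {
    *p = (struct pcb){0};
    p->pid = pid;
    p->state = S_NEW;
}
```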

Process Scheduling
Objective of multiprogramming:
To have some process running at all times to maximize CPU utilization.
Objective of time-sharing:
To switch the CPU between processes so frequently that users can interact with each program
while it is running.
To meet above 2 objectives: Process scheduler is used to select an available process for program-
execution on the CPU.


Scheduling Queues
• Three types of scheduling-queues:

1) Job Queue
➢ This consists of all processes in the system.
➢ As processes enter the system, they are put into a job-queue.

2) Ready Queue
➢ This consists of the processes that are

→ residing in main-memory and


→ ready & waiting to execute
➢ This queue is generally stored as a linked list.
➢ A ready-queue header contains pointers to the first and final PCBs in the list.
➢ Each PCB has a pointer to the next PCB in the ready-queue.

3) Device Queue
➢ This consists of the processes that are waiting for an I/O device.
➢ Each device has its own device-queue.

Figure: The ready-queue and various I/O device-queues
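The linked-list representation described above can be sketched as follows (a minimal PCB carrying only the fields the queue needs; the header keeps pointers to the first and final PCBs):

```c
#include <stddef.h>

/* Minimal PCB: only the fields the queue needs (names are ours). */
struct pcb { int pid; struct pcb *next; };

/* Ready-queue header: pointers to the first and final PCBs. */
struct queue { struct pcb *head, *tail; };

/* Append a PCB at the tail (process becomes ready). */
void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Remove the PCB at the head (process selected to run). */
struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}
```

A device queue would use the same structure, one instance per I/O device.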

Q) Explain the process scheduling with queuing diagram


When the process is executing, one of following events could occur


1) The process could issue an I/0 request and then be placed in an I/0 queue.
2) The process could create a new subprocess and wait for the subprocess's termination.
3) The process could be interrupted and put back in the ready-queue.

Fig : Queueing-diagram representation of process scheduling

Q) What is Scheduler? Types, Differentiate b/w long term and short term schedulers
3 types of schedulers:
1. A long-term scheduler or Job scheduler – selects jobs from the job pool (of secondary
memory, disk) and loads them into the memory.
2. The short-term scheduler, or CPU Scheduler – selects job from memory and assigns the
CPU to it.
3. The medium-term scheduler - selects the process in ready queue and reintroduced into the
memory

Long-Term Scheduler:
• Also called job scheduler.
• Selects which processes should be brought into the ready-queue.
• Invoked only when a process leaves the system, and therefore executes much less frequently.
• May be slow, since minutes may separate the creation of one new process and the next.
• Controls the degree of multiprogramming.

Short-Term Scheduler:
• Also called CPU scheduler.
• Selects which process should be executed next and allocates the CPU.
• Invoked whenever a new process must be selected for the CPU, and therefore executes much more frequently.
• Must be fast, since a process may execute for only a few milliseconds.

Processes can be described as either:


I/O-bound Process


Spends more time doing I/O operation than doing computations.


Many short CPU bursts.
CPU-bound Process
Spends more time doing computations than doing I/O operation.
Few very long CPU bursts.
Why should the long-term scheduler select a good mix of I/O-bound and CPU-bound processes?
1) If all processes are I/0 bound, then
i) Ready-queue will almost always be empty, and
ii) Short-term scheduler will have little to do.
2) If all processes are CPU bound, then
i) I/0 waiting queue will almost always be empty (devices will go unused) and
ii) System will be unbalanced.
Some time-sharing systems have medium-term scheduler
• The scheduler removes processes from memory and thus reduces the degree of
multiprogramming.
• Later, the process can be reintroduced into memory, and its execution can be continued where it
left off. This scheme is called swapping.
• The process is swapped out, and is later swapped in, by the scheduler.

Figure: Addition of medium-term scheduling to the queueing diagram

Context Switch
Context-switch means saving the state of the old process and switching the CPU to another process.
The context of a process is represented in the PCB of the process; it includes
→ value of CPU registers
→ process-state and
→ memory-management information.
Disadvantages:
Context-switch time is pure overhead, because the system does no useful work while
switching.
Context-switch times are highly dependent on hardware support.
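A context switch can be sketched as copying register state between the CPU and the PCBs of the two processes (types and field names here are our own, purely illustrative):

```c
/* Saved CPU state: registers and program counter (illustrative). */
typedef struct { long regs[4]; long pc; } context;

/* PCB holding the saved context of a process. */
typedef struct { int pid; context ctx; } pcb_t;

/* Context switch: save the CPU state of the old process into its PCB,
 * then load the saved state of the next process.  No useful work is
 * done here, which is why context-switch time is pure overhead. */
void context_switch(pcb_t *old, pcb_t *next, context *cpu) {
    old->ctx = *cpu;      /* save state of the old process    */
    *cpu = next->ctx;     /* restore state of the next process */
}
```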


Q) Explain the operations on process? What is cascaded termination


Operations on Processes
Process Creation and
Process Termination

Process Creation
• A process may create a new process via a create-process system-call.
• The creating process is called a parent-process.
• The new process created by the parent is called the child-process (Sub-process).
• OS identifies processes by pid (process identifier), which is typically an integer-number.
• A process needs following resources to accomplish the task:
→ CPU time
→ memory and
→ I/0 devices.
• Child-process may
→ get resources directly from the OS or
→ get a subset of the resources of the parent-process.
Restricting a child to a subset of the parent's resources prevents any process from overloading the system.
• Two options exist when a process creates a new process:
1) The parent & the children execute concurrently.
2) The parent waits until all the children have terminated.
• Two options exist in terms of the address-space of the new process:
1) The child-process is a duplicate of the parent-process (it has the same program and data as
the parent).
2) The child-process has a new program loaded into it.

Process creation in UNIX


• In UNIX, each process is identified by its process identifier (pid), which is a unique integer.
• A new process is created by the fork() system-call (Figure 1.29 & 1.30).
• The new process consists of a copy of the address-space of the original process.
• Both the parent and the child continue execution with one difference:
1) The return value of fork() is zero for the new (child) process.
2) The return value of fork() is the nonzero pid of the child for the parent-process.
• Typically, the exec() system-call is used after a fork() system-call by one of the two processes to
replace the process's memory-space with a new program.


• The parent can issue wait() system-call to move itself off the ready-queue.
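The fork()/wait() sequence described above can be sketched in C (the child's exit status of 7 is arbitrary, chosen only so the test below can observe it; in a real program the child would typically call exec() at the marked point):

```c
#include <sys/wait.h>
#include <unistd.h>

/* fork() a child, distinguish the two return values, and have the
 * parent wait(); returns the child's exit code as seen by the parent. */
int fork_demo(void) {
    pid_t pid = fork();
    if (pid < 0) return -1;            /* fork failed */
    if (pid == 0) {
        /* Child: fork() returned zero.  Typically exec() would load a
         * new program here, e.g. execlp("ls", "ls", (char *)NULL). */
        _exit(7);                      /* child terminates with status 7 */
    }
    /* Parent: fork() returned the nonzero pid of the child. */
    int status;
    waitpid(pid, &status, 0);          /* parent moves off the ready-queue
                                          until the child terminates */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```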

Process Termination
• A process terminates when it executes the last statement (in the program).
• Then, the OS deletes the process by using exit() system-call.
• Then, the OS de-allocates all the resources of the process. The resources include

→ memory
→ open files and
→ I/0 buffers.
• Process termination can occur in following cases:
→ A process can cause the termination of another process via Terminate Process() system-call.

→ Users could arbitrarily kill the processes.


• A parent terminates the execution of children for following reasons:

• The child has exceeded its usage of some resources.


• The task assigned to the child is no longer required.
• The parent is exiting, and the OS does not allow a child to continue.
Cascading termination.
In some systems, if a process terminates, then all its children must also be
terminated. This phenomenon is referred to as cascading termination.

Interprocess Communication (IPC)


Q) What is interprocess communication? Explain IPC models.
Interprocess Communication- Processes executing may be either co-operative or independent
processes.
• Independent Processes – processes that cannot affect other processes or be affected by other
processes executing in the system.
• Cooperating Processes – processes that can affect other processes or be affected by other
processes executing in the system.

Co-operation among processes are allowed for following reasons –

• Information Sharing - There may be several processes which need to access the same file. So
the information must be accessible at the same time to all users.
• Computation speedup - Often a solution to a problem can be solved faster if the problem can
be broken down into sub-tasks, which are solved simultaneously
• Modularity - A system can be divided into cooperating modules and executed by sending
information among one another.
• Convenience - Even a single user can work on multiple task by information sharing.

Cooperating processes require some type of inter-process communication. This is allowed by 2 models:
1. Shared Memory model
2. Message passing model

1. Shared Memory model


• Communicating-processes must establish a region of shared-memory.
• A shared-memory resides in address-space of the process creating the shared memory. Other processes
must attach their address-space to the shared-memory.
• The processes can then exchange information by reading and writing data in the shared-memory.
• The processes are also responsible for ensuring that they are not writing to the same location
simultaneously.
• For ex, Producer-Consumer Problem:
Producer-process produces information that is consumed by a consumer-process
• Two types of buffers can be used:
1) Unbounded-Buffer places no practical limit on the size of the buffer.
2) Bounded-Buffer assumes that there is a fixed buffer-size.
• Advantages:
1) Allows maximum speed and convenience of communication.
2) Faster
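The shared-memory producer-consumer idea can be sketched with POSIX mmap() and fork(). This is our own minimal illustration: waiting for the child's termination stands in for real synchronization, whereas a genuine bounded buffer would use in/out indices and never rely on wait().

```c
#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent and child share one page of memory; the child (producer)
 * writes a message and the parent (consumer) reads it after wait().
 * Returns 0 if the message arrived intact. */
int shm_demo(void) {
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return -1;

    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                      /* producer */
        strcpy(buf, "produced item");
        _exit(0);
    }
    waitpid(pid, NULL, 0);               /* crude synchronization: wait
                                            until the producer is done */
    int ok = strcmp(buf, "produced item") == 0;
    munmap(buf, 4096);
    return ok ? 0 : -1;
}
```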

2. Message-Passing Model
A mechanism to allow process communication without sharing address space. It is used in distributed
systems.
• Message-passing systems use system calls for "send message" and "receive message".
• A communication link must be established between the cooperating processes before messages
can be sent.
• There are three methods of creating the link between the sender and the receiver-
o Direct or indirect communication ( naming )
o Synchronous or asynchronous communication (Synchronization)
o Automatic or explicit buffering.

1. Naming
Processes that want to communicate must have a way to refer to each other. They can use either direct
or indirect communication.

a) Direct communication the sender and receiver must explicitly know each other‘s name. The syntax
for send() and receive() functions are as follows-
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q

Properties of communication link :


• A link is established automatically between every pair of processes that wants to
communicate. The processes need to know only each other's identity to communicate.
• A link is associated with exactly one pair of communicating processes
• Between each pair, there exists exactly one link.

Types of addressing in direct communication –


• Symmetric addressing – the above described communication is symmetric communication.
Here both the sender and the receiver processes have to name each other to communicate.
• Asymmetric addressing – here only the sender names the recipient; the receiver can accept
messages from any process.

send(P, message) – Send a message to process P
receive(id, message) – Receive a message from any process

Disadvantage of direct communication – if the identifier of a process changes, every place in the system (both sender and receiver side) where messages are sent to or received from that identifier may have to be changed.

b) Indirect communication uses shared mailboxes, or ports.


A mailbox or port is used to send and receive messages. Mailbox is an object into which messages
can be sent and received. It has a unique ID. Using this identifier messages are sent and received.

Two processes can communicate only if they have a shared mailbox. The send and receive functions are –
• send(A, message) – send a message to mailbox A
• receive(A, message) – receive a message from mailbox A

Properties of communication link:


• A link is established between a pair of processes only if they have a shared mailbox
• A link may be associated with more than two processes
• Between each pair of communicating processes, there may be any number of links, each link
is associated with one mailbox.
• A mailbox can be owned by the operating system. The OS must provide mechanisms to –
→ create a new mailbox
→ send and receive messages through the mailbox
→ delete a mailbox.

2. Synchronization
The send and receive messages can be implemented as either blocking or non-blocking.
Blocking (synchronous) send - sending process is blocked (waits) until the message is
received by receiving process or the mailbox.
Non-blocking (asynchronous) send - sends the message and continues (does not wait)

Blocking (synchronous) receive - The receiving process is blocked until a message is


available
Non-blocking (asynchronous) receive - receives the message without block. The
received message may be a valid message or null.

3. Buffering
When messages are passed, a temporary queue is created. Such queue can be of three capacities:
Zero capacity – The buffer size is zero (buffer does not exist). Messages are not stored in
the queue. The senders must block until receivers accept the messages.
Bounded capacity - The queue has a fixed size (n). Senders must block if the queue is full,
i.e. once ‗n‘ messages are queued, the sender is blocked.
Unbounded capacity - The queue is of infinite capacity. The sender never blocks.

Chapter-2
MULTI-THREADED PROGRAMMING
Q) What is thread? Explain the benefits of multithreaded programming
Multi-Threaded Programming
• A thread is a basic unit of CPU utilization.
• Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads
• It consists of
→ thread ID
→ PC
→ register-set and
→ stack.
• It shares code-section & data-section with other threads belonging to the same process
• A traditional (or heavyweight) process has a single thread of control.
• If a process has multiple threads of control, it can perform more than one task at a time. Such a
process is called multi-threaded process

Single-threaded and multithreaded processes

Motivation
1) The software-packages that run on modern PCs are multithreaded.
An application is implemented as a separate process with several threads of control.
For ex: A word processor may have
→ first thread for displaying graphics
→ second thread for responding to keystrokes and
→ third thread for performing grammar checking.
2) In some situations, a single application may be required to perform several similar tasks.
For ex: A web-server may create a separate thread for each client request.
This allows the server to service several concurrent requests.
3) RPC servers are multithreaded.
When a server receives a message, it services the message using a separate thread.
This allows the server to service several concurrent requests.
4) Most OS kernels are multithreaded;
Several threads operate in kernel, and each thread performs a specific task, such as
→ managing devices or
→ interrupt handling.

Benefits
1) Responsiveness

• A program may be allowed to continue running even if part of it is blocked.


Thus, increasing responsiveness to the user.

2) Resource Sharing
• By default, threads share the memory (and resources) of the process to which they
belong. Thus, an application is allowed to have several different threads of activity
within the same address-space.

3) Economy
• Allocating memory and resources for process-creation is costly.
Thus, it is more economical to create and context-switch threads.

4) Utilization of Multiprocessor Architectures


• In a multiprocessor architecture, threads may be running in parallel on different
processors.
Thus, parallelism will be increased.

Q) What is thread? Explain the multi-threading models with neat diagram


Multi-Threading Models
Thread is a basic unit of cpu utilization
• Support for threads may be provided at either
1) The user level, for user threads or
2) By the kernel, for kernel threads.
• User-threads are supported above the kernel and are managed without kernel support.
• Kernel-threads are supported and managed directly by the OS.
• Three ways of establishing relationship between user-threads & kernel-threads:
1) Many-to-one model
2) One-to-one model and
3) Many-to-many model.
Many-to-One Model
• Many user-level threads are mapped to one kernel thread
• Advantage:
1) Thread management is done by the thread library in user space, so it is efficient.
• Disadvantages:
1) The entire process will block if a thread makes a blocking system-call.
2) Multiple threads are unable to run in parallel on multiprocessors.
• For example:
→ Solaris green threads
→ GNU portable threads.

Figure Many-to-one model

One-to-One Model
• Each user thread is mapped to a kernel thread.

Figure One-to-one model


• Advantages:
1) It provides more concurrency by allowing another thread to run when a thread makes a
blocking system-call.
2) Multiple threads can run in parallel on multiprocessors.
• Disadvantage:
1) Creating a user thread requires creating the corresponding kernel thread.
• For example:
→ Windows NT/XP/2000, Linux
Many-to-Many Model
• Many user-level threads are multiplexed to a smaller number of kernel threads.
• Advantages:
1. Developers can create as many user threads as necessary
2. The kernel threads can run in parallel on a multiprocessor.
3. When a thread performs a blocking system-call, kernel can schedule another thread for
execution.

Two Level Model


• A variation on the many-to-many model is the two level-model (Figure 2.5).
• Similar to M:M, except that it allows a user thread to be bound to kernel thread.
• For example: HP-UX, Tru64 UNIX

Figures: Many-to-many model and two-level model

Q) What is thread? Explain thread libraries


Thread Libraries
• A thread library provides the programmer an API for creating and managing threads
• Two ways of implementation:

1) First Approach
➢ Provides a library entirely in user space with no kernel support.
➢ All code and data structures for the library exist in the user space.

2) Second Approach
➢ Implements a kernel-level library supported directly by the OS.
➢ Code and data structures for the library exist in kernel space.

• Three main thread libraries: 1) POSIX Pthreads


2) Win32 and
3) Java.
Pthreads
• This is a POSIX standard (IEEE 1003.1c) API for thread creation and synchronization.
• This is a specification for thread-behavior, not an implementation. OS designers may implement the
specification in any way they wish
• Commonly used in: Solaris, Linux, Mac OS X, and Tru64 UNIX
• All Pthreads programs must include the pthread.h header file.
• The statement pthread_t tid declares the identifier for the thread.
• Each thread has a set of attributes, including stack size and scheduling information.

• Win32 Threads
• The technique for creating threads using the Win32 thread library is similar to
the Pthreads technique
• We must include the windows.h header file when using the Win32 API.
• Threads are created using the CreateThread() function.
• The attributes include security information, the size of the stack, and a flag.

• Java Threads
• Threads are the basic model of program-execution in
→ Java program and
→ Java language.
• The API provides a rich set of features for the creation and management of threads.
• All Java programs comprise at least a single thread of control.
• Two techniques for creating threads:
1) Create a new class that is derived from the Thread class and override its run() method.
2) Define a class that implements the Runnable interface. The Runnable interface is defined as
follows:

public interface Runnable { public abstract void run(); }

• Creating a Thread object does not specifically create the new thread; rather, it is the start() method
that actually creates the new thread.
• The start() method for the new object does two things:
1) It allocates memory and initializes a new thread in the JVM.
2) It calls the run() method.
Q) What is thread? Explain threading issues
Threading Issues
fork() and exec() System-calls
• fork() is used to create a separate, duplicate process.
• If one thread in a program calls fork(), then
1. Some systems duplicates all threads and
2. Other systems duplicate only the thread that invoked the fork().
• If a thread invokes the exec(), the program specified in the parameter to exec() will replace the entire
process including all threads.

Thread Cancellation
• This is the task of terminating a thread before it has completed.
• Target thread is the thread that is to be canceled
• Thread cancellation occurs in two different cases:
1. Asynchronous cancellation: One thread immediately terminates the target thread.
2. Deferred cancellation: The target thread periodically checks whether it should be terminated.

Signal Handling
• In UNIX, a signal is used to notify a process that a particular event has occurred.
• All signals follow this pattern:
1. A signal is generated by the occurrence of a certain event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
• A signal handler is used to process signals.
• A signal may be received either synchronously or asynchronously, depending on the source.
1) Synchronous signals
➢ Delivered to the same process that performed the operation causing the signal.
➢ E.g. illegal memory access and division by 0.

2) Asynchronous signals
➢ Generated by an event external to a running process.
➢ E.g. user terminating a process with specific keystrokes <ctrl><c>.

• Every signal can be handled by one of two possible handlers:


1) A Default Signal Handler
➢ Run by the kernel when handling the signal.

2) A User-defined Signal Handler


➢ Overrides the default signal handler.

• In single-threaded programs , delivering signals is simple.


In multithreaded programs , delivering signals is more complex. Then, the following
options exist:
1) Deliver the signal to the thread to which the signal applies.
2) Deliver the signal to every thread in the process.
3) Deliver the signal to certain threads in the process.
4) Assign a specific thread to receive all signals for the process.

Thread Pools
• The basic idea is to
→ create a no. of threads at process-startup and
→ place the threads into a pool (where they sit and wait for work).
• Procedure:
1. When a server receives a request, it awakens a thread from the pool.
2. If any thread is available, the request is passed to it for service.
3. Once the service is completed, the thread returns to the pool.
• Advantages:
1) Servicing a request with an existing thread is usually faster than waiting to
create a thread.
2) The pool limits the no. of threads that exist at any one point.
• No. of threads in the pool can be based on factors such as
→ no. of CPUs
→ amount of memory and
→ expected no. of concurrent client-requests.

PROCESS SCHEDULING

Basic Concepts
• In a single-processor system,
→ only one process may run at a time.
→ other processes must wait until the CPU is rescheduled.
• Objective of multiprogramming:
→ to have some process running at all times, in order to maximize CPU utilization.

CPU-I/0 Burst Cycle


• Process execution consists of a cycle of
→ CPU execution and
→ I/O wait (Figure 2.6 & 2.7).
• Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst,
etc…
• Finally, a CPU burst ends with a request to terminate execution.
• An I/O-bound program typically has many short CPU bursts.
A CPU-bound program might have a few long CPU bursts.
Figure Alternating sequence of CPU and I/O bursts

CPU Scheduler
• This scheduler
→ selects a waiting-process from the ready-queue and
→ allocates CPU to the waiting-process.
• The ready-queue could be a FIFO, priority queue, tree and list.
• The records in the queues are generally process control blocks (PCBs) of the processes.

CPU Scheduling
• Four situations under which CPU scheduling decisions take place:
1) When a process switches from the running state to the waiting state. For ex: I/O request.
2) When a process switches from the running state to the ready state. For ex: when an interrupt occurs.
3) When a process switches from the waiting state to the ready state. For ex: completion of I/O.
4) When a process terminates.
• Scheduling under 1 and 4 is non-preemptive.
Scheduling under 2 and 3 is preemptive.

Non Preemptive Scheduling


• Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either
→ by terminating or
→ by switching to the waiting state.

Preemptive Scheduling
• This is driven by the idea of prioritized computation.
• Processes that are runnable may be temporarily suspended
• Disadvantages:
1) Incurs a cost associated with access to shared-data.
2) Affects the design of the OS kernel.

Dispatcher
• It gives control of the CPU to the process selected by the short-term scheduler.
• The function involves:
1) Switching context
2) Switching to user mode &
3) Jumping to the proper location in the user program to restart that program.
• It should be as fast as possible, since it is invoked during every process switch.
• Dispatch latency means the time taken by the dispatcher to
→ stop one process and
→ start another running.

Scheduling Criteria
• Different CPU-scheduling algorithms
→ have different properties and
→ may favor one class of processes over another.
• Criteria to compare CPU-scheduling algorithms:
1) CPU Utilization
➢ We must keep the CPU as busy as possible.
➢ In a real system, it ranges from 40% to 90%.

2) Throughput
➢ Number of processes completed per time unit.

➢ For long processes, throughput may be 1 process per hour;
for short transactions, throughput might be 10 processes per second.
3) Turnaround Time
➢ The interval from the time of submission of a process to the time of completion.
➢ Turnaround time is the sum of the periods spent

→ waiting to get into memory


→ waiting in the ready-queue
→ executing on the CPU and
→ doing I/O.
4) Waiting Time
➢ The amount of time that a process spends waiting in the ready-queue.
5) Response Time
➢ The time from the submission of a request until the first response is produced.
➢ The time is generally limited by the speed of the output device.
• We want
→ to maximize CPU utilization and throughput and
→ to minimize turnaround time, waiting time, and response time.

Scheduling Algorithms
• CPU scheduling deals with the problem of deciding which of the processes in the
ready-queue is to be allocated the CPU.
• Following are some scheduling algorithms:
1) FCFS scheduling (First Come First Served)
2) Round Robin scheduling
3) SJF scheduling (Shortest Job First)
4) SRT scheduling
5) Priority scheduling
6) Multilevel Queue scheduling and

7) Multilevel Feedback Queue scheduling

FCFS Scheduling
• The process that requests the CPU first is allocated the CPU first.
• The implementation is easily done using a FIFO queue.
• Procedure:
1) When a process enters the ready-queue, its PCB is linked onto the tail of the queue.
2) When the CPU is free, the CPU is allocated to the process at the queue‘s head.
3) The running process is then removed from the queue.
• Advantage:
1) Code is simple to write & understand.
• Disadvantages:
1) Convoy effect: All other processes wait for one big process to get off the CPU.
2) Non-preemptive (a process keeps the CPU until it releases it).
3) Not good for time-sharing systems.
4) The average waiting time is generally not minimal.
• Example: Suppose that the processes arrive at time 0 in the order P1, P2, P3, with CPU-burst times of 24, 3, and 3 milliseconds.

• The Gantt Chart for the schedule is as follows:

| P1 (0–24) | P2 (24–27) | P3 (27–30) |

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17

• Suppose instead that the processes arrive in the order P2, P3, P1.
• The Gantt chart for the schedule is as follows:

| P2 (0–3) | P3 (3–6) | P1 (6–30) |

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3

SJF Scheduling
• The CPU is assigned to the process that has the smallest next CPU burst.
• If two processes have the same length CPU burst, FCFS scheduling is used to break the tie.
• For long-term scheduling in a batch system, we can use the process time limit specified by the
user as the ‗length‘.
• SJF can't be implemented at the level of short-term scheduling, because there is no way to
know the length of the next CPU burst.
• Advantage:
1) The SJF is optimal, i.e. it gives the minimum average waiting time for a given set of processes.
• Disadvantage:
1) Determining the length of the next CPU burst is difficult.
• SJF algorithm may be either 1) non-preemptive or 2) preemptive.

Non preemptive SJF


• The current process is allowed to finish its CPU burst.

Preemptive SJF
• If the new process has a shorter next CPU burst than what is left of the executing process, that
process is preempted.
• It is also known as SRTF scheduling (Shortest-Remaining-Time-First).
• Example (for non-preemptive SJF): Consider the following set of processes, all arriving at time 0, with CPU-burst times (in milliseconds) P1 = 6, P2 = 8, P3 = 7, P4 = 3.

• For non-preemptive SJF, the Gantt Chart is as follows:

| P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |

• Waiting time for P1 = 3; P2 = 16; P3 = 9; P4 = 0
Average waiting time: (3 + 16 + 9 + 0)/4 = 7
• Example (preemptive SJF): Consider the following set of processes, with arrival times and CPU-burst times (in milliseconds): P1 arrives at 0 (burst 8), P2 at 1 (burst 4), P3 at 2 (burst 9), P4 at 3 (burst 5).

• For preemptive SJF, the Gantt Chart is as follows:

| P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26) |

• The average waiting time is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5.

Priority Scheduling
• A priority is associated with each process.
• The CPU is allocated to the process with the highest priority.
• Equal-priority processes are scheduled in FCFS order.
• Priorities can be defined either internally or externally.
1) Internally-defined priorities.
➢ Use some measurable quantity to compute the priority of a process.
➢ For example: time limits, memory requirements, no. of open files.
2) Externally-defined priorities.
➢ Set by criteria that are external to the OS
➢ For example:
→ importance of the process
→ political factors
• Priority scheduling can be either preemptive or nonpreemptive.

1) Preemptive
➢ The CPU is preempted if the priority of the newly arrived process is higher than the priority of
the currently running process.

2) Non Preemptive
➢ The newly arrived process is put at the head of the ready-queue.
• Advantage:
1) Higher priority processes can be executed first.
• Disadvantage:
1) Indefinite blocking (starvation), where low-priority processes are left waiting indefinitely for the CPU.
Solution: Aging is a technique of gradually increasing the priority of processes that wait in the
system for a long time.
• Example: Consider the following set of processes, assumed to have arrived at time 0, in the
order P1, P2, ..., P5, with CPU-burst times 10, 1, 2, 1, 5 ms and priorities 3, 1, 4, 5, 2 (a lower number meaning a higher priority).

• The Gantt chart for the schedule is as follows:

| P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |

• The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds.

Round Robin Scheduling


• Designed especially for timesharing systems.
• It is similar to FCFS scheduling, but with preemption.
• A small unit of time is called a time quantum (or time slice).
• Time quantum is ranges from 10 to 100 ms.
• The ready-queue is treated as a circular queue.
• The CPU scheduler
→ goes around the ready-queue and
→ allocates the CPU to each process for a time interval of up to 1 time quantum.
• To implement:
The ready-queue is kept as a FIFO queue of processes
• CPU scheduler
1) Picks the first process from the ready-queue.
2) Sets a timer to interrupt after 1 time quantum and
3) Dispatches the process.
• One of two things will then happen.
1) The process may have a CPU burst of less than 1 time
quantum. In this case, the process itself will release
the CPU voluntarily.
2) If the CPU burst of the currently running process is longer than 1 time
quantum, the timer will go off and will cause an interrupt to the OS.
The process will be put at the tail of the ready-queue.
• Advantage:
1) Better response time than SJF (well suited to time-sharing systems).
• Disadvantage:
1) Higher average turnaround time than SJF.
• Example: Consider processes P1, P2, P3 that arrive at time 0, with CPU-burst times of 24, 3, and 3 milliseconds, and a time quantum of 4 ms.

• The Gantt chart for the schedule is as follows:

| P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |

• Waiting times: P1 = 10 − 4 = 6; P2 = 4; P3 = 7.
• The average waiting time is 17/3 = 5.66 milliseconds.
• The RR scheduling algorithm is preemptive.
No process is allocated the CPU for more than 1 time quantum in a row. If a
process' CPU burst exceeds 1 time quantum, that process is preempted and is
put back in the ready-queue.
• The performance of algorithm depends heavily on the size of the time quantum (Figure 2.8 & 2.9).
1) If time quantum=very large, RR policy is the same as the FCFS policy.
2) If time quantum=very small, RR approach appears to the users as though each of
n processes has its own processor running at l/n the speed of the real processor.
• In software, we need to consider the effect of context switching on the performance of RR scheduling:
1) The larger the time quantum, the less time is spent on context switching.
2) The smaller the time quantum, the more overhead is added for context switching.

Multilevel Queue Scheduling


• Useful for situations in which processes are easily classified into different groups.
• For example, a common division is made between
→ foreground (or interactive) processes and
→ background (or batch) processes.

• The ready-queue is partitioned into several separate queues (Figure 2.10).


• The processes are permanently assigned to one queue based on some property like
→ memory size
→ process priority or
→ process type.
• Each queue has its own scheduling algorithm.
For example, separate queues might be used for foreground and background processes.

Figure 2.10 Multilevel queue scheduling

• There must be scheduling among the queues, which is commonly implemented as


fixed-priority preemptive scheduling.
For example, the foreground queue may have absolute priority over the background
queue.
• Time slice: each queue gets a certain amount of CPU time which it can schedule
amongst its processes; e.g., 80% to the foreground queue (scheduled by RR) and
20% to the background queue (scheduled by FCFS).

Multilevel Feedback Queue Scheduling


• A process may move between queues (Figure 2.11).
• The basic idea:

Separate processes according to the features of their CPU bursts. For example:
1) If a process uses too much CPU time, it will be moved to a lower-priority queue.
➢ This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
2) If a process waits too long in a lower-priority queue, it may be moved to a higher-priority queue.
➢ This form of aging prevents starvation.

Figure Multilevel feedback queues.

• In general, a multilevel feedback queue scheduler is defined by the following parameters:


1) The number of queues.
2) The scheduling algorithm for each queue.
3) The method used to determine when to upgrade a process to a higher priority queue.
4) The method used to determine when to demote a process to a lower priority queue.
5) The method used to determine which queue a process will enter when that
process needs service.
