UNIT II
EMBEDDED C PROGRAMMING
Hence, it is clear that the memory is an important part of the 8051 Microcontroller Architecture
(for that matter, any Microcontroller). So, it is important for us to understand the 8051
Microcontroller Memory Organization i.e., how memory is organized, how the processor
accesses each memory, and how to interface external memory with the 8051 Microcontroller.
Before going into the details of the 8051 Microcontroller Memory Organization, we will first
look briefly at Computer Architecture and then proceed with the memory organization of the
8051 Microcontroller.
1.2.3. Types of Computer Architecture
Basically, Microprocessors or Microcontrollers are classified based on the two types of
Computer Architecture: Von Neumann Architecture and Harvard Architecture.
Harvard Architecture
Harvard Architecture, in contrast to Von Neumann Architecture, uses separate memory for
Instruction (Program) and Data. Since the Instruction Memory and Data Memory are separate in
a Harvard Architecture, their signal paths i.e., buses are also different and hence, the CPU can
access both Instructions and Data at the same time.
Almost all Microcontrollers, including the 8051, implement the Harvard Architecture.
In the 8051 Microcontroller, the code or instructions to be executed are stored in the Program
Memory, which is also called the ROM of the Microcontroller. The original 8051
Microcontroller by Intel has 4KB of internal ROM.
Some variants of the 8051, like the 8031 and 8032 series, don't have any internal ROM (Program
Memory) and must be interfaced with external Program Memory with the instructions loaded in it.
Almost all modern 8051 Microcontrollers, like the 8052 series, have 8KB of Internal Program
Memory (ROM) in the form of Flash memory and provide the option of reprogramming
the memory.
In case of 4KB of Internal ROM, the address space is 0000H to 0FFFH. If the program addresses
exceed this value, the CPU automatically fetches the code from the external Program Memory.
For this, the External Access Pin (EA Pin) must be pulled HIGH. When the EA Pin is HIGH,
the CPU first fetches instructions from the Internal Program Memory in the address range of
0000H to 0FFFH, and if the memory addresses exceed that limit, the instructions are
fetched from the external ROM in the address range of 1000H to FFFFH.
There is another way to fetch the instructions: ignore the Internal ROM and fetch all the
instructions only from the External Program Memory (External ROM). For this scenario, the EA
Pin must be connected to GND. In this case, the memory addresses of the external ROM will be
from 0000H to FFFFH.
The original 8051 Microcontroller has 128B of internal RAM, but almost all modern variants of
the 8051 have 256B of RAM. In this 256B, the first 128B, i.e., memory addresses from 00H to
7FH, is divided into Working Registers (organized as Register Banks), a Bit-Addressable Area
and General Purpose RAM (also known as the Scratchpad area).
In the first 128B of RAM (from 00H to 7FH), the first 32B i.e., memory from addresses 00H to
1FH consists of 32 Working Registers that are organized as four banks with 8 Registers in each
Bank.
The 4 banks are named as Bank0, Bank1, Bank2 and Bank3. Each Bank consists of 8 registers
named as R0 – R7. Each Register can be addressed in two ways: either by name or by address.
To address a register by name, the corresponding Bank must first be selected. In order to select
the bank, we use the RS0 and RS1 bits of the Program Status Word (PSW) Register (RS0 and
RS1 are bits 3 and 4 of the PSW Register, i.e., PSW.3 and PSW.4).
When addressing the Register using its address i.e., 12H for example, the corresponding Bank
may or may not be selected. (12H corresponds to R2 in Bank2).
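For illustration, a minimal Keil C51 sketch (assuming the standard reg51.h header, which declares the PSW special function register) of selecting Bank 2 by writing the RS1:RS0 bits:

#include <reg51.h>   /* declares the PSW special function register */

/* Select Register Bank 2 by setting RS1:RS0 (PSW.4:PSW.3) to 10b.
   After this, R0-R7 refer to RAM locations 10H-17H. */
void select_bank2(void)
{
    PSW &= 0xE7;     /* clear RS1 (PSW.4) and RS0 (PSW.3) */
    PSW |= 0x10;     /* RS1 = 1, RS0 = 0 -> Bank 2 */
}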
The next 16B of the RAM, i.e., from 20H to 2FH, are Bit-Addressable memory locations. There
are 128 bits in total that can be addressed individually using bit addresses 00H to 7FH, or each
entire byte can be addressed using byte addresses 20H to 2FH.
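For illustration, a minimal Keil C51 sketch of using this bit-addressable area from C (the variable names are illustrative; the compiler places bdata objects in the 20H to 2FH region):

#include <reg51.h>

unsigned char bdata flags;   /* byte placed in the bit-addressable RAM area */
sbit ready = flags^0;        /* individually addressable bit 0 of 'flags' */

void main(void)
{
    flags = 0x00;
    ready = 1;               /* set one bit without disturbing the others */
    while (1);
}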
The designer of an 8051 Microcontroller based system is not limited to the internal RAM and
ROM present in the 8051 Microcontroller. There is a provision for connecting both external RAM
and ROM, i.e., Data Memory and Program Memory.
The reason for interfacing external Program Memory or ROM is that complex programs written
in high – level languages often tend to be larger and occupy more memory.
Another important reason is that chips like the 8031 or 8032, which don't have any internal ROM,
have to be interfaced with external ROM.
A maximum of 64KB of Program Memory (ROM) and 64KB of Data Memory (RAM) can be
interfaced with the 8051 Microcontroller.
The following image shows the block diagram of interfacing 64KB of External RAM and 64KB
of External ROM with the 8051 Microcontroller.
An important point to remember when interfacing external memory with the 8051 Microcontroller
is that Port 0 (P0) cannot be used as an I/O Port, as it is used for the multiplexed address and data
bus (A0 - A7 and D0 - D7). Port 2 is typically used as the higher byte of the address bus (A8 - A15),
although this is not always required.
Embedded C programming plays a key role in making the processor perform specific functions. In
day-to-day life we use many electronic devices such as mobile phones, washing machines, digital
cameras, etc. All these devices work on the basis of microcontrollers that are programmed in
embedded C.
In embedded system programming, C code is preferred over other languages due to the following
reasons:
o Easy to understand
o High Reliability
o Portability
o Scalability
Let's see the block diagram representation of embedded system programming:
Basic Declaration
Let's see the block diagram of Embedded C Programming development:
A function is a collection of statements that performs a specific task, and a collection of one or
more functions is called a program. Every language consists of basic elements and grammatical
rules. The C programming language provides variables, a character set, data types, keywords,
expressions and so on, which are used for writing a C program.
#include<microcontroller name.h>
The microcontroller programming is different for each type of operating system. Even though
many operating systems exist, such as Windows, Linux, RTOS, etc., an RTOS has several
advantages for embedded system development.
Embedded C is one of the most popular and most commonly used Programming Languages in
the development of Embedded Systems. So, we will see some of the Basics of Embedded C
Program and the Programming Structure of Embedded C.
An example of an embedded system which we use daily is a wireless router. In order to get
wireless internet connectivity on our mobile phones and laptops, we often use routers. The task
of a wireless router is to take the signal from a cable and transmit it wirelessly, and to take
wireless data from a device (like a mobile) and send it through the cable.
We use washing machines almost daily but may not realize that they are embedded systems
consisting of a processor (and other hardware as well) and software.
A washing machine takes some inputs from the user like wash cycle, type of clothes, extra soaking
and rinsing, spin rpm, etc., performs the necessary actions as per the instructions and finishes
washing and drying the clothes. If no new instructions are given for the next wash, the washing
machine repeats the same set of tasks as the previous wash.
Embedded Systems can not only be stand-alone devices like washing machines but can also be
part of a much larger system. An example of this is a car. A modern-day car has several
individual embedded systems that perform their specific tasks with the aim of making a smooth
and safe journey.
Some of the embedded systems in a Car are Anti-lock Braking System (ABS), Temperature
Monitoring System, Automatic Climate Control, Tire Pressure Monitoring System, Engine Oil
Level Monitor, etc.
All these devices have one thing in common: they are programmable i.e., we can write a program
(which is the software part of the Embedded System) to define how the device actually works.
Embedded Software (the program) allows the Hardware to monitor external events (Inputs / Sensors)
and control external devices (Outputs) accordingly. During this process, the program for an
Embedded System may have to directly manipulate the internal architecture of the Embedded
Hardware (usually the processor), such as Timers, the Serial Communications Interface, Interrupt
Handling, and I/O Ports.
From the above statement, it is clear that the Software part of an Embedded System is equally
important as the Hardware part. There is no point in having advanced Hardware Components
with poorly written programs (Software).
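For instance, the kind of direct hardware manipulation mentioned above (timers, interrupts, I/O ports) might look like the following sketch on the 8051; it assumes the Keil C51 toolchain and a 12 MHz clock, and the reload values and pin are purely illustrative:

#include <reg51.h>

sbit LED = P1^0;                       /* output pin driven from the ISR */

void timer0_isr(void) interrupt 1      /* Timer 0 overflow vector */
{
    TH0 = 0x3C;                        /* reload for ~50 ms at 12 MHz */
    TL0 = 0xB0;
    LED = !LED;                        /* toggle the pin on every overflow */
}

void main(void)
{
    TMOD = 0x01;                       /* Timer 0 in mode 1 (16-bit) */
    TH0  = 0x3C;
    TL0  = 0xB0;
    ET0  = 1;                          /* enable Timer 0 interrupt */
    EA   = 1;                          /* global interrupt enable */
    TR0  = 1;                          /* start Timer 0 */
    while (1);                         /* all work happens in the ISR */
}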
There are many programming languages that are used for Embedded Systems like Assembly
(low-level Programming Language), C, C++, JAVA (high-level programming languages), Visual
Basic, JAVA Script (Application level Programming Languages), etc.
In the process of making a better embedded system, the programming of the system plays a vital
role and hence, the selection of the Programming Language is very important. The following
factors are usually considered when selecting a language:
Size: The memory that the program occupies is very important, as Embedded Processors
like Microcontrollers have a very limited amount of ROM (Program Memory).
Speed: The programs must be very fast, i.e., they must run as fast as possible. The
hardware should not be slowed down by slow-running software.
Portability: The same program can be compiled for different processors.
Ease of Implementation
Ease of Maintenance
Readability
Earlier Embedded Systems were developed mainly using Assembly Language. Even though
Assembly Language is closest to the actual machine code instructions and produces small
hex files, the lack of portability and the high amount of resources (time and manpower) spent on
developing the code made Assembly Language difficult to work with.
There are other high-level programming languages that offered the above-mentioned features, but
none came close to the C Programming Language. Some of the benefits of using Embedded C as the
main Programming Language are:
Processor: The heart of an Embedded System is the Processor. Based on the functionality of the
system, the processor can be anything like a General Purpose Processor, a single purpose
processor, an Application Specific Processor, a microcontroller or an FPGA.
Memory: Memory is another important part of an embedded system. It is divided in to RAM and
ROM. Memory in an Embedded System (ROM to be specific) stores the main program and
RAM stores the program variables and temporary data.
Peripherals: In order to communicate with the outside world or control the external devices, an
Embedded System must have Input and Output Peripherals. Some of these peripherals include
Input / Output Ports, Communication Interfaces, Timers and Counters, etc.
Software: All the hardware work according to the software (main program) written. Software
part of an Embedded System includes initialization of the system, controlling inputs and outputs,
error handling etc.
NOTE: Many Embedded Systems, usually small to medium scaled systems, generally consist
of a Microcontroller as the main processor. With a Microcontroller, the processor, memory
and a few peripherals are integrated into a single device.
The C Programming Language became so popular that it is used in a wide range of applications
ranging from Embedded Systems to Super Computers.
The extensions in Embedded C over the standard C Programming Language include I/O Hardware
Addressing, fixed-point arithmetic operations, accessing address spaces, etc.
There are two ways you can write comments: one is the single line comments denoted by // and
the other is multiline comments denoted by /*….*/.
Global Variables: Global Variables, as the name suggests, are Global to the program i.e., they
can be accessed anywhere in the program.
Local Variables: Local Variables, in contrast to Global Variables, are confined to their
respective function.
Main Function: Every C or Embedded C Program has one main function, from where the
execution of the program begins.
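Putting these parts together, a bare-bones Embedded C skeleton might look like the following sketch (it assumes the Keil reg51.h header; the port and delay values are purely illustrative):

#include <reg51.h>              /* device header for the 8051 */

/* Global variable: accessible from any function in the program */
unsigned char count;

void main(void)
{
    /* Local variable: visible only inside main() */
    unsigned int delay;

    count = 0;
    while (1)                   /* embedded programs normally run forever */
    {
        count++;
        P1 = count;             /* drive Port 1 with the counter value */
        for (delay = 0; delay < 50000; delay++);   /* crude software delay */
    }
}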
In order to write the Embedded C Program for the above circuit, we will use the Keil C
Compiler. This compiler is a part of the Keil µVision IDE. The program is shown below.
The LED is very cheap and easily available in a variety of shapes, colors and sizes. LEDs are also
used in the design of message display boards, traffic control signal lights, etc.
Consider the Proteus software based simulation of LED blinking using the 8051
Microcontroller shown below:
In the above Proteus-based simulation, the LEDs are interfaced to PORT0 of the 8051
microcontroller.
Let's see the Embedded C Program for generating the LED output sequence shown
below:
00000001
00000010
00000100.....
.... And so on up to 10000000.
#include<reg51.h>
void main()
{
    unsigned int i, k;              /* i: delay counter, k: shift counter */
    unsigned char b;
    while(1)
    {
        b = 0x01;                   /* start with the first LED: 00000001 */
        P0 = b;
        for(i = 0; i < 3000; i++);  /* crude software delay */
        for(k = 0; k < 7; k++)
        {
            b = b << 1;             /* move the lit LED one position left */
            P0 = b;
            for(i = 0; i < 3000; i++);
        }
    }
}
Consider the Embedded C Program for generating the LED output sequence shown
below:
00000001
00000011
00000111.....
.... And so on up to 11111111.
#include<reg51.h>
void main()
{
    unsigned int i, j;              /* i: delay counter, j: shift counter */
    unsigned char b;
    while(1)
    {
        b = 0x01;                   /* start with one LED on: 00000001 */
        P0 = b;
        for(i = 0; i < 3000; i++);  /* crude software delay */
        for(j = 0; j < 7; j++)
        {
            b = (b << 1) | 0x01;    /* turn on one more LED each step */
            P0 = b;
            for(i = 0; i < 3000; i++);
        }
    }
}
Displaying Numbers on a 7-Segment Display using the 8051 Microcontroller
An electronic display used for displaying alphanumeric characters is known as a 7-segment display;
it is used in many systems for displaying information.
It is constructed using eight LEDs which are connected in a sequential way so as to display digits
from 0 to 9 when certain combinations of LEDs are switched on. It displays only one digit at a
time.
Consider the Proteus software based simulation of displaying a number on a 7-segment display
using the 8051 microcontroller:
Consider the program for displaying the digits '0 to 9' on the 7-segment display:
#include<reg51.h>
sbit a = P3^0;                      /* digit-select (enable) lines */
sbit x = P3^1;
sbit y = P3^2;
sbit z = P3^3;
void main()
{
    /* common-anode segment codes for the digits 0 to 9 */
    unsigned char m[10] = {0xC0, 0xF9, 0xA4, 0xB0, 0x99,
                           0x92, 0x82, 0xF8, 0x80, 0x90};
    unsigned int i, j;
    a = x = y = z = 1;              /* enable all digit-select lines */
    while(1)
    {
        for(i = 0; i < 10; i++)
        {
            P2 = m[i];              /* output the segment pattern */
            for(j = 0; j < 60000; j++);   /* crude software delay */
        }
    }
}
Consider the program for displaying numbers from '00 to 10' on a 7-segment display:
#include<reg51.h>
sbit x = P3^0;                      /* digit-select line for the units digit */
sbit y = P3^1;                      /* digit-select line for the tens digit */
/* common-anode segment codes for the digits 0 to 9 */
unsigned char m[10] = {0xC0, 0xF9, 0xA4, 0xB0, 0x99,
                       0x92, 0x82, 0xF8, 0x80, 0x90};
unsigned char ds1, ds2;             /* units and tens digit counters */
void display1();
void display2();
void delay();
void main()
{
    unsigned int i;
    ds1 = ds2 = 0;
    while(1)
    {
        for(i = 0; i < 20; i++)     /* refresh both digits several times */
            display1();
        display2();                 /* then advance the count */
    }
}
void display1()
{
    x = 1;                          /* show the units digit */
    y = 0;
    P2 = m[ds1];
    delay();
    x = 0;                          /* show the tens digit */
    y = 1;
    P2 = m[ds2];
    delay();
}
void display2()
{
    ds1++;
    if(ds1 >= 10)
    {
        ds1 = 0;
        ds2++;
        if(ds2 >= 10)
        {
            ds1 = ds2 = 0;
        }
    }
}
void delay()
{
    unsigned int k;
    for(k = 0; k < 30000; k++);
}
The key difference between an operating system such as Windows and an RTOS often found in
embedded systems is the response time to external events. An ordinary OS provides a non-
deterministic response to events with no guarantee with respect to when they will be processed,
albeit while trying to stay responsive. The user perceiving the OS to be responsive is more
important than handling underlying tasks. On the other hand, an RTOS' goal is fast and more
deterministic reaction.
Developers used to OSs such as Windows or Linux will find several characteristics of an embedded
RTOS quite different: RTOSs are designed to run in systems with limited memory, and to operate
indefinitely without the need to be reset.
Because an RTOS is designed to respond to events quickly and perform under heavy loads, it can
be slower at big tasks when compared to another OS.
The time-criticality of embedded systems varies from soft real-time washing machine control
systems to hard real-time aircraft safety systems. In situations like the latter, the
fundamental demand to meet real-time requirements can only be satisfied if the OS scheduler's
behavior can be accurately predicted.
Many operating systems give the impression of executing multiple programs at once, but this
multi-tasking is something of an illusion. A single processor core can only run a single thread of
execution at any one time. An operating system’s scheduler decides which program, or thread, to
run when. By rapidly switching between threads, it provides the illusion of simultaneous
multitasking.
The flexibility of an RTOS scheduler enables a broad approach to process priorities, although an
RTOS is more commonly focused on a very narrow set of applications. An RTOS scheduler
should give minimal interrupt latency and minimal thread switching overhead. This is what
makes an RTOS so relevant for time-critical embedded systems.
Using an RTOS means you can run multiple tasks concurrently, bringing in the basic
connectivity, privacy, security, and so on as and when you need them. An RTOS allows you to
create an optimized solution for the specific requirements of your project.
In an RTOS, processing time requirements are measured in tenths-of-seconds increments of time.
It is a time-bound system with fixed time constraints. In this type of system, processing must be
done within the specified constraints; otherwise, the system will fail.
The Scheduler: This component of the RTOS decides the order in which tasks are executed,
which is generally based on priority.
Function Library: This is an important element of the RTOS that acts as an interface between the
kernel and the application code. The application sends requests to the kernel through the function
library so that it can obtain the desired results.
Memory Management: This element is needed in the system to allocate memory to every
program; it is one of the most important elements of the RTOS.
Fast dispatch latency: This is the interval between the moment the OS recognizes that a task has
ended and the moment the next thread in the ready queue actually starts executing.
User-defined data objects and classes: An RTOS system makes use of programming languages
like C or C++, which should be organized according to their operation.
Real-time systems are used in scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.
2. Task Shifting –
The time assigned for shifting tasks in these systems is very small. For example, shifting from
one task to another takes about 10 microseconds in older systems and about 3 microseconds in
the latest systems.
3. Focus On Application –
Focus is on running applications, with less importance given to applications that are waiting in
the queue.
5. Error Free –
These types of systems are error-free.
6. Memory Allocation –
Memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
1. Limited Tasks –
Very few tasks run simultaneously, and the system concentrates on only a few applications at a
time to avoid errors.
3. Complex Algorithms –
The algorithms are very complex and difficult for the designer to write.
5. Thread Priority –
It is not good to set thread priority as these systems rarely switch between tasks.
Many (if not most) embedded computing systems do more than one thing; that is, the
environment can cause mode changes that in turn cause the embedded system to behave quite
differently. For example, when designing a telephone answering machine,
we can define recording a phone call and operating the user's control panel as distinct tasks,
because they perform logically distinct operations and they must be performed at very different
rates. These different tasks are part of the system's functionality, but that application-level
organization of functionality is often reflected in the structure of the program as well.
A process is a single execution of a program. If we run the same program two different times,
we have created two different processes. Each process has its own state that includes not only its
registers but all of its memory. In some OSs, the memory management unit is used to keep each
process in a separate address space. In others, particularly lightweight RTOSs, the processes run
in the same address space. Processes that share the same address space are often called threads.
As shown in Figure, this device is connected to serial ports on both ends. The input to the box is
an uncompressed stream of bytes. The box emits a compressed string of bits on the output serial
line, based on a predefined compression table. Such a box may be used, for example, to
compress data being sent to a modem.
The program's need to receive and send data at different rates (for example, the program may
emit 2 bits for the first byte and then 7 bits for the second byte) will obviously find itself reflected
in the structure of the code. It is easy to create irregular, ungainly code to solve this problem; a
more elegant solution is to create a queue of output bits, with those bits being removed from the
queue and sent to the serial port in 8-bit sets.
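A minimal sketch of such an output bit queue is shown below (the names and field sizes are illustrative, not taken from the original design; it assumes no more than 24 bits are pushed at a time so the 32-bit accumulator never overflows):

#include <stdint.h>

typedef struct {
    uint32_t bits;      /* accumulator holding the pending output bits */
    int      count;     /* number of valid bits currently queued */
} bit_queue_t;

/* Append the low 'nbits' bits of 'value' to the queue (0 < nbits <= 24). */
static void queue_push_bits(bit_queue_t *q, uint32_t value, int nbits)
{
    q->bits   = (q->bits << nbits) | (value & ((1u << nbits) - 1u));
    q->count += nbits;
}

/* Pop the oldest 8 bits as one byte for the serial port.
   Returns 1 if a byte was produced, 0 if fewer than 8 bits are queued. */
static int queue_pop_byte(bit_queue_t *q, uint8_t *out)
{
    if (q->count < 8)
        return 0;
    *out = (uint8_t)(q->bits >> (q->count - 8));
    q->count -= 8;
    return 1;
}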
But beyond the need to create a clean data structure that simplifies the control structure of the
code, we must also ensure that we process the inputs and outputs at the proper rates. For
example, if we spend too much time in packaging and emitting output characters, we may drop
an input character. Solving timing problems is a more challenging problem.
The text compression box provides a simple example of rate control problems. A control panel
on a machine provides an example of a different type of rate control problem,
the asynchronous input.
The control panel of the compression box may, for example, include a compression mode button
that disables or enables compression, so that the input text is passed through unchanged when
compression is disabled. We certainly do not know when the user will push the compression
mode button; the button may be depressed asynchronously relative to the arrival of characters for
compression.
2.4.2. Multirate Systems
Implementing code that satisfies timing requirements is even more complex when multiple rates
of computation must be handled. Multirate embedded computing systems are very common,
including automobile engines, printers, and cell phones. In all these systems, certain operations
must be executed periodically, and each operation is executed at its own rate.
Figure illustrates different ways in which we can define two important requirements on
processes: release time and deadline.
The release time is the time at which the process becomes ready to execute; this is not
necessarily the time at which it actually takes control of the CPU and starts to run. An aperiodic
process is by definition initiated by an event, such as external data arriving or data computed by
another process.
The release time is generally measured from that event, although the system may want to make
the process ready at some interval after the event itself. For a periodically executed process, there
are two common possibilities.
In simpler systems, the process may become ready at the beginning of the period. More
sophisticated systems, such as those with data dependencies between processes, may set the
release time at the arrival time of certain data, at a time after the start of the period.
A deadline specifies when a computation must be finished. The deadline for an aperiodic
process is generally measured from the release time, since that is the only reasonable time
reference. The deadline for a periodic process may in general occur at some time other than the
end of the period.
Rate requirements are also fairly common. A rate requirement specifies how quickly processes
must be initiated.
The period of a process is the time between successive executions. For example, the period of a
digital filter is defined by the time interval between successive input samples.
The process’s rate is the inverse of its period. In a multirate system, each process executes at its
own distinct rate.
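For example, a process with a period of 10 ms has a rate of 100 executions per second.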
The most common case for periodic processes is for the initiation interval to be equal to the
period. However, pipelined execution of processes allows the initiation interval to be less than
the period. Figure illustrates process execution in a system with four CPUs.
CPU Metrics
We also need some terminology to describe how the process actually executes.
The initiation time is the time at which a process actually starts executing on the CPU.
The completion time is the time at which the process finishes its work.
The most basic measure of work is the amount of CPU time expended by a process. The CPU
time of process i is called Ci. Note that the CPU time is not equal to the completion time minus
the initiation time; several other processes may interrupt execution. The total CPU time consumed
by a set of n processes is
T = C1 + C2 + ... + Cn = ∑ Ci
We need a basic measure of the efficiency with which we use the CPU. The simplest and most
direct measure is utilization:
Utilization is the ratio of the CPU time that is being used for useful computations to the total
available CPU time. This ratio ranges between 0 and 1, with 1 meaning that all of the available
CPU time is being used for system purposes. The utilization is often expressed as a percentage. If
we measure the total execution time of all processes over an interval of time t, then the CPU
utilization is
U=T/t.
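As a quick worked example with made-up numbers: if three processes consume C1 = 10 ms, C2 = 25 ms and C3 = 25 ms of CPU time over a 100 ms measurement interval, then T = 60 ms and U = 60/100 = 0.6, i.e., 60% utilization.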
Returning to the compression-box example: we do know that the compression mode button will
be depressed at a much lower rate than characters will be received, since it is not physically
possible for a person to repeatedly depress a button at even slow serial line rates. Keeping up
with the input and output data while checking on the button can introduce some very complex
control code into the program. Sampling the button's state too slowly can cause the machine to
miss a button depression entirely, but sampling it too frequently and duplicating a data value can
cause the machine to incorrectly compress data.
One solution is to introduce a counter into the main compression loop, so that a subroutine to
check the input button is called once every n times the compression loop is executed. But this
solution does not work when either the compression loop or the button-handling routine has
highly variable execution times; if the execution time of either varies significantly, it will cause
the other to execute later than expected, possibly causing data to be lost. We need to be able to
keep track of these two different tasks separately, applying different timing requirements to each.
This is the sort of control that processes allow. The above two examples illustrate how
requirements on timing and execution rate can create major problems in programming. When
code is written to satisfy several different timing requirements at once, the control structures
necessary to get any sort of solution become very complex very quickly. Worse, such complex
control is usually quite difficult to verify for either functional or timing properties.
Schedulability means whether there exists a schedule of execution for the processes in a
system that satisfies all their timing requirements. In general, we must construct a schedule to
show schedulability, but in some cases we can eliminate some sets of processes as unschedulable
using some very simple tests. Utilization is one of the key metrics in evaluating a scheduling
policy. Our most basic requirement is that CPU utilization be no more than 100%, since we
can't use the CPU more than 100% of the time.
When we evaluate the utilization of the CPU, we generally do so over a finite period that covers
all possible combinations of process executions. For periodic processes, the length of time that
must be considered is the hyperperiod, which is the least common multiple of the periods of all
the processes. If we evaluate over the hyperperiod, we are sure to have considered all possible
combinations of the periodic processes.
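For example, if three periodic processes have periods of 2 ms, 4 ms and 5 ms, the hyperperiod is LCM(2, 4, 5) = 20 ms, so a schedule only needs to be checked over a 20 ms window.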
A very simple approach is to run the process subroutines repeatedly in a loop, but a timer is a
much more reliable way to control execution of the loop. We would probably use the timer to
generate periodic interrupts. Let's assume for the moment that the pall() function is called by the
timer's interrupt handler. Then this code will execute each process once after a timer interrupt:
void pall()
{
    p1();
    p2();
}
But what happens when a process runs too long? The timer’s interrupt will cause the CPU’s
interrupt system to mask its interrupts, so the interrupt will not occur until after the pall() routine
returns. As a result, the next iteration will start late. This is a serious problem, but we will have
to wait for further refinements before we can fix it.
Our next problem is to execute different processes at different rates. If we have several timers,
we can set each timer to a different rate. We could then use a function to collect all the processes
that run at that rate:
void pA()
{
    /* processes that run at rate A */
    p1();
    p3();
}
void pB()
{
    /* processes that run at rate B */
}
This solution allows us to execute processes at rates that are simple multiples of each other, as the
sketch after this paragraph illustrates. However, when the rates aren't related by a simple ratio, the
counting process becomes more complex and more likely to contain bugs. We have developed
somewhat more reliable code, but this programming style is still limited in capability and prone to
bugs. To improve both the capabilities and reliability of our systems, we need to invent the RTOS.
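For illustration, a counter-based version of this simple-multiples approach might look like the following sketch (the names are hypothetical; it assumes rate A is four times rate B and that some periodic timer calls timer_tick()):

static unsigned char tick_count = 0;

void timer_tick(void)        /* imagine this being called from the timer's ISR */
{
    pA();                    /* rate-A processes run on every tick */
    if (++tick_count >= 4)
    {
        tick_count = 0;
        pB();                /* rate-B processes run on every fourth tick */
    }
}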
Context Switch
Context switching in an operating system involves saving the context or state of a running
process so that it can be restored later, and then loading the context or state of another process
and running it.
Context Switching refers to the process/method used by the system to change a process from
one state to another using the CPUs present in the system to perform its job.
Example: Suppose the OS has N processes stored in Process Control Blocks (PCBs). One process
is running on the CPU to do its job, and while it runs, other processes with higher priority queue
up to use the CPU to complete their jobs.
The operating system’s need for context switching is explained by the reasons listed below.
1. One process does not directly switch to another within the system. Context switching makes
it easier for the operating system to use the CPU’s resources to carry out its tasks and store its
context while switching between multiple processes.
2. Context switching enables all processes to share a single CPU to finish their execution and
store the status of the system's tasks. When a process is reloaded into the system, its execution
resumes at the same point where it was interrupted.
3. Context switching allows a single CPU to handle multiple process requests in parallel
without the need for any additional processors.
Interrupts: When the CPU requests that data be read from a disk and an interrupt occurs,
context switching automatically switches to the part of the system that can handle the
interrupt, so it is serviced quickly.
Multitasking: The ability for a process to be switched from the CPU so that another process can
run is known as context switching. When a process is switched, the previous state is retained so
that the process can continue running at the same spot in the system.
Kernel/User Switch: This trigger is used when the OS needs to switch between user mode
and kernel mode.
In priority scheduling, a number is assigned to each process that indicates its priority
level.
The lower the number, the higher the priority.
In this type of scheduling algorithm, if a newer process arrives that has a higher
priority than the currently running process, then the currently running process is
preempted.
Step 2) At time 2, no new process arrives, so you can continue with P1. P2 is in the waiting
queue.
Step 3) At time 3, no new process arrives, so you can continue with P1. The P2 process is still in
the waiting queue.
Step 6) At time=6, P3 arrives. P3 is at higher priority (1) compared to P2 having priority (2). P2
is preempted, and P3 begins its execution.
Step 7) At time 7, no new process arrives, so we continue with P3. P2 is in the waiting queue.
Step 10) At time interval 10, no new process arrives, so we continue with P3.
Step 11) At time=11, P4 arrives with priority 4. P3 has higher priority, so it continues its
execution.
Process Priority Burst time Arrival time
P1 1 4 0
P2 2 1 out of 3 pending 0
P3 1 2 out of 7 pending 6
P4 3 4 11
P5 2 2 12
Step 13) At time=13, P3 completes execution. We have P2,P4,P5 in ready queue. P2 and P5
have equal priority. Arrival time of P2 is before P5. So P2 starts execution.
Process Priority Burst time Arrival time
P1 1 4 0
P2 2 1 out of 3 pending 0
P3 1 7 6
P4 3 4 11
P5 2 2 12
Step 14) At time =14, the P2 process has finished its execution. P4 and P5 are in the waiting
state. P5 has the highest priority and starts execution.
Step 16) At time = 16, P5 finishes its execution. P4 is the only process left, and it starts
execution.
Step 17) At time = 20, P4 has completed execution and no process is left.
Step 18) Let's calculate the average waiting time for the above example.
Waiting Time = start time – arrival time + wait time for next burst
P1 = 0 - 0 = 0
P2 = 4 - 0 + 7 = 11
P3 = 6 - 6 = 0
P4 = 16 - 11 = 5
P5 = 14 - 12 = 2
Average Waiting Time = (0 + 11 + 0 + 5 + 2) / 5 = 18/5 = 3.6
This method provides a good mechanism whereby the relative importance of each process
may be precisely defined.
It is suitable for applications with fluctuating time and resource requirements.
Each thread in the system may run using any method. The methods are effective on a per-thread
basis, not on a global basis for all threads and processes on a node.
Remember that the FIFO and round-robin scheduling policies apply only when two or more
threads that share the same priority are READY (i.e., the threads are directly competing with
each other). The sporadic method, however, employs a “budget” for a thread's execution. In all
cases, if a higher-priority thread becomes READY, it immediately preempts all lower-priority
threads.
In the following diagram, three threads of equal priority are READY. If Thread A blocks,
Thread B will run. Although a thread inherits its scheduling policy from its parent process, the
thread can request to change the algorithm applied by the kernel.
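On a POSIX-style system, for example, a thread could request a different policy roughly as follows (a sketch, not from the original text; real-time policies typically require elevated privileges, and the priority value is illustrative):

#include <pthread.h>
#include <sched.h>

/* Ask the kernel to run the calling thread with the FIFO policy
   at priority 10. Returns 0 on success, an error number otherwise. */
int use_fifo_policy(void)
{
    struct sched_param param;
    param.sched_priority = 10;
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
}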