
UNIT II

Memory And I/O Devices Interfacing – Programming Embedded Systems in C – Need For RTOS – Multiple Tasks and Processes – Context Switching – Priority Based Scheduling Policies.

EMBEDDED C PROGRAMMING

1. Memory And I/O Devices Interfacing


Memory Device Organization
The most basic way to characterize a memory is by its capacity, such as 256 MB.
However, manufacturers usually make several versions of a memory of a given size, each
with a different data width. For example, a 256-MB memory may be available in two
versions:

● As a 64 M × 4-bit array, a single memory access obtains a 4-bit data item, with a
maximum of 2^26 different addresses.
● As a 32 M × 8-bit array, a single memory access obtains an 8-bit data item, with a
maximum of 2^25 different addresses.
The height/width ratio of a memory is known as its aspect ratio. The best aspect ratio depends
on the amount of memory required. Internally, the data are stored in a two-dimensional array
of memory cells as shown in Figure 4.15. Because the array is stored in two dimensions, the
n-bit address received by the chip is split into a row address and a column address (with n = r + c).
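For example, the 64 M × 4-bit organization above needs n = 26 address bits (64 M = 2^26); a square array would split these as r = c = 13, first selecting one of 2^13 = 8192 rows and then one of 8192 columns.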

SDRAMs use a separate refresh signal to control refreshing. DRAM has to be refreshed
roughly once per millisecond. Rather than refresh the entire memory at once, DRAMs refresh
part of the memory at a time. When a section of memory is being refreshed, it cannot be
accessed until the refresh is complete. The refreshes are spread out over the refresh interval,
with a different section being refreshed every few microseconds. SDRAMs include registers that
control the mode in which the SDRAM operates. SDRAMs support burst modes that allow
several sequential addresses to be accessed by sending only one address. SDRAMs generally
also support an interleaved mode that exchanges pairs of bytes.

Read-Only Memories
Read-only memories (ROMs) are preprogrammed with fixed data. They are very
useful in embedded systems since a great deal of the code, and perhaps some data, does not
change over time. Read-only memories are also less sensitive to radiation-induced errors.
There are several varieties of ROM available. The first-level distinction to be made is
between factory-programmed ROM (sometimes called mask-programmed ROM) and
field-programmable ROM. Factory-programmed ROMs are ordered from the factory with
particular programming. ROMs can typically be ordered in lots of a few thousand, but clearly
factory programming is useful only when the ROMs are to be installed in some quantity.
Field-programmable ROMs, on the other hand, can be programmed in the lab. Flash memory
is the dominant form of field-programmable ROM and is electrically erasable. Flash memory
uses standard system voltage for erasing and programming.

I/O DEVICES

Some of these devices are often found as on-chip devices in microcontrollers; others
are generally implemented separately but are still commonly used. Looking at a few
important devices now will help us understand the requirements of device interfacing.
Timers and Counters
Timers and counters are distinguished from one another largely by their use, not their
logic. Both are built from adder logic with registers to hold the current value, with an
increment input that adds one to the current register value. However, a timer has its count
connected to a periodic clock signal to measure time intervals, while a counter has its count
input connected to an aperiodic signal in order to count the number of occurrences of some
external event. Because the same logic can be used for either purpose, the device is often
called a counter/timer.
Figure shows enough of the internals of a counter/timer to illustrate its operation. An n-bit
counter/timer uses an n-bit register to store the current state of the count and an array of half
subtractors to decrement the count when the count signal is asserted. Combinational logic
checks when the count equals zero; the done output signals the zero count. It is often useful to
be able to control the time-out, rather than require exactly 2^n events to occur. For this
purpose, a reset register provides the value with which the count register is to be loaded. The
counter/timer provides logic to load the reset register. Most counters provide both cyclic and
acyclic modes of operation. In the cyclic mode, once the counter reaches the done state, it is
automatically reloaded and the counting process continues. In acyclic mode, the
counter/timer waits for an explicit signal from the microprocessor to resume counting.
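As an illustration of how such a counter/timer might appear to software, here is a minimal sketch in C; the register layout, base address, and control bits are hypothetical, not taken from any particular device:

/* Hypothetical memory-mapped counter/timer (illustrative only). */
typedef struct {
    volatile unsigned int count; /* current count, decremented on each count event */
    volatile unsigned int reset; /* reset register: value reloaded into count */
    volatile unsigned int ctrl;  /* bit 0 = enable, bit 1 = cyclic (auto-reload) mode */
    volatile unsigned int done;  /* bit 0 = set when the count reaches zero */
} timer_regs;

#define TIMER ((timer_regs *)0x40001000) /* assumed base address */

void timer_start_cyclic(unsigned int period)
{
    TIMER->reset = period; /* loaded into count each time it reaches zero */
    TIMER->ctrl = 0x3;     /* enable counting in cyclic mode */
}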
A watchdog timer is an I/O device that is used for internal operation of a system. As shown
in Figure 4.18, the watchdog timer is connected into the CPU bus and also to the CPU’s reset
line. The CPU’s software is designed to periodically reset the watchdog timer, before the
timer ever reaches its time-out limit. If the watchdog timer ever does reach that limit, its time-
out action is to reset the processor. In that case, the presumption is that either a software flaw
or hardware problem has caused the CPU to misbehave. Rather than diagnose the problem,
the system is reset to get it operational as quickly as possible.
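On the software side, servicing the watchdog is typically a single write placed in the main loop; a minimal sketch, assuming a hypothetical memory-mapped kick register and a do_work() routine:

#define WDOG_KICK (*(volatile unsigned int *)0x40002000) /* hypothetical register */

extern void do_work(void); /* application work; must finish well inside the time-out */

void main_loop(void)
{
    for (;;) {
        do_work();
        WDOG_KICK = 0xA5A5; /* periodically reset the watchdog before its limit */
    }
}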

Keyboards
A keyboard is basically an array of switches, but it may include some internal logic to
help simplify the interface to the microprocessor. In this section, we build our understanding
from a single switch to a microprocessor-controlled keyboard.

A switch uses a mechanical contact to make or break an electrical circuit. The major problem
with mechanical switches is that they bounce as shown in Figure 4.19. When the switch is
depressed by pressing on the button attached to the switch’s arm, the force of the depression
causes the contacts to bounce several times until they settle down. If this is not corrected, it
will appear that the switch has been pressed several times, giving false inputs. A hardware
debouncing circuit can be built using a one-shot timer. Software can also be used to debounce
switch inputs. A raw keyboard can be assembled from several switches. Each switch in a raw
keyboard has its own pair of terminals, making raw keyboards impractical when a large
number of keys is required.
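A software debounce routine of the kind mentioned above might look like the following sketch; the sampling scheme, threshold, and read_raw_switch() helper are assumptions made for illustration:

#define DEBOUNCE_COUNT 5 /* consecutive identical samples needed to accept a change */

extern int read_raw_switch(void); /* hypothetical: returns the raw contact state, 0 or 1 */

/* Call periodically, e.g., every few milliseconds from a timer tick. */
int debounced_switch(void)
{
    static int stable = 0;    /* last accepted (debounced) state */
    static int candidate = 0; /* state currently being watched */
    static int count = 0;     /* how long the candidate has been stable */

    int raw = read_raw_switch();
    if (raw != candidate) {
        candidate = raw; /* contact bounced or changed; restart the count */
        count = 0;
    } else if (count < DEBOUNCE_COUNT && ++count == DEBOUNCE_COUNT) {
        stable = candidate; /* input has settled; accept the new state */
    }
    return stable;
}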
More expensive keyboards, such as those used in PCs, actually contain a microprocessor to
preprocess button inputs. PC keyboards typically use a 4-bit microprocessor to provide the
interface between the keys and the computer. The microprocessor can provide debouncing,
but it also provides other functions as well.
Touchscreens
A touchscreen is an input device overlaid on an output device. The touchscreen
registers the position of a touch to its surface. By overlaying this on a display, the user can
react to information shown on the display.
The two most common types of touchscreens are resistive and capacitive. A resistive
touchscreen uses a two-dimensional voltmeter to sense position. As shown in Figure 4.23, the
touchscreen consists of two conductive sheets separated by spacer balls. The top conductive
sheet is flexible so that it can be pressed to touch the bottom sheet. A voltage is applied
across the sheet; its resistance causes a voltage gradient to appear across the sheet. The top
sheet samples the conductive sheet’s applied voltage at the contact point. An analog/digital
converter is used to measure the voltage and resulting position. The touchscreen alternates
between x and y position sensing by alternately applying horizontal and vertical voltage
gradients.
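Converting the measured voltage into a coordinate is a simple proportion; a sketch, where the 10-bit ADC resolution and 320-pixel screen width are assumptions:

#define ADC_MAX 1023u     /* assumed 10-bit converter */
#define SCREEN_WIDTH 320u /* assumed display width in pixels */

/* The sampled voltage is proportional to the touch position along the gradient. */
unsigned int touch_x(unsigned int adc_sample)
{
    return (adc_sample * SCREEN_WIDTH) / ADC_MAX;
}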
2. MULTIPLE TASKS AND PROCESSES:

Most embedded systems require functionality and timing that is too complex to
embody in a single program. We break the system into multiple tasks in order to manage
when things happen. In this section we will develop the basic abstractions that will be
manipulated by the RTOS to build multirate systems.
Tasks and Processes
Many (if not most) embedded computing systems do more than one thing—that is, the
environment can cause mode changes that in turn cause the embedded system to behave quite
differently. For example, when designing a telephone answering machine, we can define
recording a phone call and operating the user’s control panel as distinct tasks, because they
perform logically distinct operations and they must be performed at very different rates.
These different tasks are part of the system’s functionality, but that application-level
organization of functionality is often reflected in the structure of the program as well.
A process is a single execution of a program. If we run the same program two
different times, we have created two different processes. Each process has its own state that
includes not only its registers but all of its memory. In some OSs, the memory management
unit is used to keep each process in a separate address space. In others, particularly
lightweight RTOSs, the processes run in the same address space. Processes that share the
same address space are often called threads.
To understand why the separation of an application into tasks may be reflected in the
program structure, consider how we would build a stand-alone compression unit based on the
compression algorithm. As shown in Figure, this device is connected to serial ports on both
ends. The input to the box is an uncompressed stream of bytes. The box emits a compressed
string of bits on the output serial line, based on a predefined compression table. Such a box
may be used, for example, to compress data being sent to a modem. The program’s need to
receive and send data at different rates (for example, it may emit 2 bits for the first byte and
then 7 bits for the second) will obviously find itself reflected in the structure of the code. It
is easy to create irregular, ungainly code to solve this problem; a
more elegant solution is to create a queue of output bits, with those bits being removed from
the queue and sent to the serial port in 8-bit sets. But beyond the need to create a clean data
structure that simplifies the control structure of the code, we must also ensure that we process
the inputs and outputs at the proper rates. For example, if we spend too much time in
packaging and emitting output characters, we may drop an input character. Solving such
timing problems is more challenging.
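The queue of output bits described above could be sketched as follows; the fixed capacity and the single-producer, single-consumer structure are simplifying assumptions:

#define QSIZE 256 /* assumed capacity, in bits */

static unsigned char qbits[QSIZE]; /* one bit per entry, kept simple for clarity */
static int head = 0, tail = 0, nbits = 0;

/* Compressor side: append one bit to the queue (dropped if the queue is full). */
void push_bit(int b)
{
    if (nbits < QSIZE) {
        qbits[tail] = (unsigned char)(b & 1);
        tail = (tail + 1) % QSIZE;
        nbits++;
    }
}

/* Serial side: remove bits in 8-bit sets; returns -1 until a full byte is ready. */
int pop_byte(void)
{
    int i, byte = 0;
    if (nbits < 8)
        return -1;
    for (i = 0; i < 8; i++) {
        byte = (byte << 1) | qbits[head];
        head = (head + 1) % QSIZE;
        nbits--;
    }
    return byte;
}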
The text compression box provides a simple example of rate control problems. A
control panel on a machine provides an example of a different type of rate control
problem, the asynchronous input. The control panel of the compression box may, for
example, include a compression mode button that disables or enables compression, so that the
input text is passed through unchanged when compression is disabled. We certainly do not
know when the user will push the compression mode button—the button may be depressed
asynchronously relative to the arrival of characters for compression. We do know, however,
that the button will be depressed at a much lower rate than characters will be received, since
it is not physically possible for a person to repeatedly depress a button at even slow serial line
rates. Keeping up with the input and output data while checking on the button can introduce
some very complex control code into the program. Sampling the button’s state too slowly can
cause the machine to miss a button depression entirely, but sampling it too frequently and
duplicating a data value can cause the machine to incorrectly compress data. One solution is
to introduce a counter into the main compression loop, so that a subroutine to check the input
button is called once every n times the compression loop is executed. But this solution does
not work when either the compression loop or the button-handling routine has highly variable
execution times—if the execution time of either varies significantly, it will cause the other to
execute later than expected, possibly causing data to be lost. We need to be able to keep track
of these two different tasks separately, applying different timing requirements to each. This is
the sort of control that processes allow. The above two examples illustrate how requirements
on timing and execution rate can create major problems in programming. When code is
written to satisfy several different timing requirements at once, the control structures
necessary to get any sort of solution become very complex very quickly. Worse, such
complex control is usually quite difficult to verify for either functional or timing properties.

3. CONTEXT SWITCHING
In computing, a context switch is the process of storing and restoring the state
(context) of a process so that execution can be resumed from the same point at a later time.
This enables multiple processes to share a single CPU and is an essential feature of a
multitasking operating system. What constitutes the context is determined by the processor
and the operating system. Context switches are usually computationally intensive, and much
of the design of operating systems is to optimize the use of context switches. Switching from
one process to another requires a certain amount of time for doing the administration: saving
and loading registers and memory maps, updating various tables and lists, etc. A context
switch can mean a register context switch, a task context switch, a stack frame switch, a
thread context switch, or a process context switch.
Multitasking
Most commonly, within some scheduling scheme, one process needs to be switched
out of the CPU so another process can run. This context switch can be triggered by the
process making itself unrunnable, such as by waiting for an I/O or synchronization operation
to complete. On a pre-emptive multitasking system, the scheduler may also switch out
processes which are still runnable. To prevent other processes from being starved of CPU
time, preemptive schedulers often configure a timer interrupt to fire when a process exceeds
its time slice. This interrupt ensures that the scheduler will gain control to perform a context
switch.
Interrupt handling
Modern architectures are interrupt driven. This means that if the CPU requests data
from a disk, for example, it does not need to busy-wait until the read is over; it can issue the
request and continue with some other execution. When the read is over, the CPU can be
interrupted and presented with the read. For interrupts, a program called an interrupt handler
is installed, and it is the interrupt handler that handles the interrupt from the disk.
When an interrupt occurs, the hardware automatically switches a part of the context (at least
enough to allow the handler to return to the interrupted code). The handler may save
additional context, depending on details of the particular hardware and software designs.
Often only a minimal part of the context is changed in order to minimize the amount of time
spent handling the interrupt. The kernel does not spawn or schedule a special process to
handle interrupts, but instead the handler executes in the (often partial) context established at
the beginning of interrupt handling. Once interrupt servicing is complete, the context in effect
before the interrupt occurred is restored so that the interrupted process can resume execution
in its proper state.
User and kernel mode switching
When a transition between user mode and kernel mode is required in an operating
system, a context switch is not necessary; a mode transition is not by itself a context switch.
However, depending on the operating system, a context switch may also take place at this
time.
Steps
In a switch, the state of the first process must be saved somehow, so that, when the
scheduler gets back to the execution of the first process, it can restore this state and continue.
The state of the process includes all the registers that the process may be using, especially the
program counter, plus any other operating system specific data that may be necessary. This
data is usually stored in a data structure called a process control block (PCB), or switchframe.
In order to switch processes, the PCB for the first process must be created and saved. The
PCBs are sometimes stored upon a per-process stack in kernel memory (as opposed to the
user-mode call stack), or there may be some specific operating system defined data structure
for this information. Since the operating system has effectively suspended the execution of
the first process, it can now load the PCB and context of the second process. In doing so, the
program counter from the PCB is loaded, and thus execution can continue in the new process.
New processes are chosen from a queue or queues. Process and thread priority can influence
which process continues execution, with processes of the highest priority checked first for
ready threads to execute.
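A process control block might be declared along these lines; the fields shown are illustrative, since the exact contents are processor- and operating-system-specific:

/* Hypothetical PCB: the state saved and restored on a context switch. */
typedef struct pcb {
    unsigned int regs[16]; /* general-purpose registers */
    unsigned int pc;       /* program counter: where execution resumes */
    unsigned int sp;       /* stack pointer */
    unsigned int status;   /* processor status word */
    int priority;          /* used when choosing the next process to run */
    struct pcb *next;      /* link in the ready queue */
} pcb;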
Software vs hardware context switching
Context switching can be performed primarily by software or hardware. Some
processors, like the Intel 80386 and its successors, have hardware support for context
switches, by making use of a special data segment designated the Task State Segment or TSS.
A task switch can be explicitly triggered with a CALL or JMP instruction targeted at a TSS
descriptor in the global descriptor table. It can occur implicitly when an interrupt or exception
is triggered if there's a task gate in the interrupt descriptor table. When a task switch occurs
the CPU can automatically load the new state from the TSS. As with other tasks performed in
hardware, one would expect this to be rather fast; however, mainstream operating systems,
including Windows and Linux, do not use this feature.
This is mainly due to two reasons:
● hardware context switching does not save all the registers (only general purpose
registers, not floating point registers — although the TS bit is automatically turned on
in the CR0 control register, resulting in a fault when executing floating point
instructions and giving the OS the opportunity to save and restore the floating point
state as needed).
● associated performance issues, e.g., software context switching can be selective and
store only those registers that need storing, whereas hardware context switching stores
nearly all registers whether they are required or not.

4. PROGRAMMING EMBEDDED SYSTEMS IN ASSEMBLY AND C


C and Assembly
Many programmers are more comfortable writing in C, and for good reason: C is a
mid-level language (in comparison to Assembly, which is a low-level language), and spares
the programmers some of the details of the actual implementation. However, there are some
low-level tasks that either can be better implemented in assembly, or can only be
implemented in assembly language. Also, it is frequently useful for the programmer to look at
the assembly output of the C compiler, and hand-edit, or hand optimize the assembly code in
ways that the compiler cannot. Assembly is also useful for time-critical or real-time
processes, because unlike with high-level languages, there is no ambiguity about how the
code will be compiled. The timing can be strictly controlled, which is useful for writing
simple device drivers. This section will look at multiple techniques for mixing C and
Assembly program development.
Inline Assembly
One of the most common methods for using assembly code fragments in a C
programming project is to use a technique called inline assembly. Inline assembly is invoked
in different compilers in different ways. Also, the assembly language syntax used in the inline
assembly depends entirely on the assembly engine used by the C compiler. Microsoft C++,
for instance, only accepts inline assembly commands in MASM syntax, while GNU GCC
only accepts inline assembly in GAS syntax (also known as AT&T syntax). This section will
discuss some of the basics of mixed-language programming in some common compilers:
Microsoft C Compiler
Turbo C Compiler
GNU GCC Compiler
Borland C Compiler
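As a small illustration of the GNU GCC flavor, the following sketch uses extended inline assembly in GAS syntax for a 32-bit x86 target; the add_one() function itself is invented for illustration:

/* Increment x using an inline x86 instruction (GAS/AT&T syntax). */
int add_one(int x)
{
    int result;
    __asm__ ("addl $1, %0"   /* add immediate 1 to operand 0 */
             : "=r" (result) /* output: any general-purpose register */
             : "0" (x));     /* input: placed in the same register as the output */
    return result;
}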
Linked Assembly
When an assembly source file is assembled by an assembler, and a C source file is compiled
by a C compiler, those two object files can be linked together by a linker to form the final
executable. The beauty of this approach is that the assembly files can be written using any
syntax and assembler that the programmer is comfortable with. Also, if a change needs to be
made in the assembly code, all of that code exists in a separate file that the programmer can
easily access. The only disadvantages of mixing assembly and C in this way are that (a) both
the assembler and the compiler need to be run, and (b) those files need to be manually linked
together by the programmer. These extra steps are comparatively easy, although it does mean
that the programmer needs to learn the command-line syntax of the compiler, the assembler,
and the linker.
Inline Assembly vs. Linked Assembly
Advantages of inline assembly:
Short assembly routines can be embedded directly in a C function in a C code file. The
mixed-language file can then be completely compiled with a single command to the C
compiler (as opposed to compiling the assembly code with an assembler, compiling the C
code with the C compiler, and then linking them together). This method is fast and easy. If
the in-line assembly is embedded in a function, then the programmer doesn't need to worry
about calling conventions, even when changing compiler switches to a different calling
convention.
Advantages of linked assembly:
If a new microprocessor is selected, all the assembly commands are isolated in a
".asm" file. The programmer can update just that one file -- there is no need to change any of
the ".c" files (if they are portably written).
Calling Conventions
When writing separate C and Assembly modules, and linking them with your linker, it
is important to remember that a number of high-level C constructs are very precisely defined,
and need to be handled correctly by the assembly portions of your program. Perhaps the
biggest obstacle to mixed-language programming is the issue of function calling conventions.
C functions are all implemented according to a particular convention that is selected by the
programmer (if you have never "selected" a particular calling convention, it's because your
compiler has a default setting). This section will go through some of the common calling
conventions that the programmer might run into, and will describe how to implement these in
assembly language.
Code compiled with one compiler won't work right when linked to code compiled with a
different calling convention. If the code is in C or another high-level language (or assembly
language embedded in-line to a C function), it's a minor hassle -- the programmer needs to
pick which compiler / optimization switches she wants to use today, and recompile every part
of the program that way. Converting assembly language code to use a different calling
convention takes more manual effort and is more bug-prone.
Unfortunately, calling conventions are often different from one compiler to the next -- even
on the same CPU. Occasionally the calling convention changes from one version of a
compiler to the next, or even from the same compiler when given different "optimization"
switches.
Unfortunately, many times the calling convention used by a particular version of a particular
compiler is inadequately documented. So assembly-language programmers are forced to use
reverse engineering techniques to figure out the exact details they need to know in order to
call functions written in C, and in order to accept calls from functions written in C.
The typical process is: write a ".c" file with stubs ... details??? ... ... exactly the same number
and type of inputs and outputs that you want the assembly-language function to have.

∙ Compile that file with the appropriate switches to give a mixed assembly-language-with-C-
in-comments file (typically a ".cod" file). (If your compiler can't produce an assembly
language file, there is the tedious option of disassembling the binary ".obj" machine-code
file.)

∙ Copy that ".cod" file to a ".asm" file. (Sometimes you need to strip out the compiled hex
numbers and comment out other lines to turn it into something the assembler can handle).

∙ Test the calling convention -- compile the ".asm" file to an ".obj" file, and link it (instead
of the stub ".c" file) to the rest of the program. Test to see that "calls" work properly.

∙ Fill in your ".asm" file -- the ".asm" file should now include the appropriate header and
footer on each function to properly implement the calling convention. Comment out the stub
code in the middle of the function and fill out the function with your assembly language
implementation.

∙ Test. Typically a programmer single-steps through each instruction in the new code,
making sure it does what they wanted it to do.
Parameter Passing
Normally, parameters are passed between functions (either written in C or in
Assembly) via the stack. For example, if a function foo1() calls a function foo2() with 2
parameters (say characters x and y), then before control jumps to the start of foo2(), two
bytes (a character normally occupies one byte on most systems) are filled with the values that
need to be passed. Once control jumps to the new function foo2(), and you use the values
(passed as parameters) in the function, they are retrieved from the stack and used.
There are two parameter passing techniques in use,

● Pass by Value

● Pass by Reference

Parameter passing techniques can also push arguments in

● right-to-left (C-style) order, or
● left-to-right (Pascal-style) order.
On processors with lots of registers (such as the ARM and the SPARC), the standard calling
convention puts all the parameters (and even the return address) in registers.
On processors with inadequate numbers of registers (such as the 80x86 and the M8C), all
calling conventions are forced to put at least some parameters on the stack or elsewhere in
RAM.
Some calling conventions allow "re-entrant code".
Pass by Value
With pass-by-value, a copy of the actual value (the literal content) is passed. For
example, if you have a function that accepts two characters like
void foo(char x, char y)
{
    x = x + 1;
    y = y + 2;
    putchar(x);
    putchar(y);
}
and you invoke this function as follows
char a, b;
a = 'A';
b = 'B';
foo(a, b);
then the program pushes a copy of the ASCII values of 'A' and 'B' (65 and 66 respectively)
onto the stack before the function foo is called. You can see that there is no mention of
variables 'a' or 'b' in the function foo(). So, any changes that you make to those two values in
foo will not affect the values of a and b in the calling function.
Pass by Reference
Imagine a situation where you have to pass a large amount of data to a function and
apply the modifications, done in that function, to the original variables. An example of such a
situation might be a function that converts a string with lowercase letters to uppercase. It
would be an unwise decision to pass the entire string (particularly if it is a big one) to the
function, and when the conversion is complete, pass the entire result back to the calling
function. Here we pass the address of the variable to the function. This has two advantages,
one, you don't have to pass huge data, thereby saving execution time, and two, you can work
on the data right away so that by the end of the function, the data in the calling function is
already modified.
But remember, any change you make to the variable passed by reference will result in the
original variable getting modified. If that's not what you wanted, then you must manually
copy the variable before calling the function.
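A sketch of the upper-case conversion described above, passing the string by reference rather than by value:

#include <ctype.h>

/* Modifies the caller's buffer in place; only a pointer is pushed, not the string. */
void to_upper(char *s)
{
    for (; *s != '\0'; s++)
        *s = (char)toupper((unsigned char)*s);
}

Calling to_upper(buf) changes buf itself, so the caller must copy the string first if the original text is still needed.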
80x86 / Pentium
CDECL
In the CDECL calling convention the following holds:

● Arguments are passed on the stack in Right-to-Left order, and return values are passed in
eax.

∙ The calling function cleans the stack. This allows CDECL functions to have variable-length
argument lists (aka variadic functions). For this reason the number of arguments is not
appended to the name of the function by the compiler, and the assembler and the linker are
therefore unable to determine if an incorrect number of arguments is used.
Variadic functions usually have special entry code, generated by the va_start(), va_arg() C
pseudo-functions.
Consider the following C instructions:
_cdecl int MyFunction1(int a, int b)
{
    return a + b;
}
and the following function call:
x = MyFunction1(2, 3);
These would produce the following assembly listings, respectively:
_MyFunction1:
    push ebp
    mov ebp, esp
    mov eax, [ebp + 8]
    mov edx, [ebp + 12]
    add eax, edx
    pop ebp
    ret
and
    push 3
    push 2
    call _MyFunction1
    add esp, 8
When translated to assembly code, CDECL functions are almost always prepended with an
underscore (that's why all previous examples have used "_" in the assembly code).
STDCALL
STDCALL, also known as "WINAPI" (and a few other names, depending on where you
are reading it) is used almost exclusively by Microsoft as the standard calling convention for
the Win32 API. Since STDCALL is strictly defined by Microsoft, all compilers that
implement it do it the same way.

● STDCALL passes arguments right-to-left, and returns the value in eax. (The Microsoft
documentation erroneously claims that arguments are passed left-to-right, but this is not
the case.)

∙ The called function cleans the stack, unlike CDECL. This means that STDCALL doesn't
allow variable-length argument lists.
Consider the following C function:
_stdcall int MyFunction2(int a, int b)
{
    return a + b;
}
and the calling instruction:
x = MyFunction2(2, 3);
These will produce the following respective assembly code fragments:
_MyFunction2@8:
    push ebp
    mov ebp, esp
    mov eax, [ebp + 8]
    mov edx, [ebp + 12]
    add eax, edx
    pop ebp
    ret 8
and
    push 3
    push 2
    call _MyFunction2@8
There are a few important points to note here:
1. In the function body, the ret instruction has an (optional) argument that indicates how
many bytes to pop off the stack when the function returns.
2. STDCALL functions are name-decorated with a leading underscore, followed by an @,
and then the number (in bytes) of arguments passed on the stack. This number will always be
a multiple of 4, on a 32-bit aligned machine.
FASTCALL
The FASTCALL calling convention is not completely standard across all compilers,
so it should be used with caution. In FASTCALL, the first 2 or 3 32-bit (or smaller)
arguments are passed in registers, with the most commonly used registers being edx, eax, and
ecx. Additional arguments, or arguments larger than 4 bytes, are passed on the stack, often in
Right-to-Left order (similar to CDECL). The calling function is most frequently responsible
for cleaning the stack, if needed.
Because of the ambiguities, it is recommended that FASTCALL be used only in situations
with 1, 2, or 3 32-bit arguments, where speed is essential.
The following C function:
_fastcall int MyFunction3(int a, int b)
{
    return a + b;
}
and the following C function call:
x = MyFunction3(2, 3);
Will produce the following assembly code fragments for the called, and the calling functions,
respectively:
@MyFunction3@8:
    push ebp
    mov ebp, esp    ; many compilers create a stack frame even if it isn't used
    add eax, edx    ; a is in eax, b is in edx
    pop ebp
    ret
and
; the calling function
    mov eax, 2
    mov edx, 3
    call @MyFunction3@8
The name decoration for FASTCALL prepends an @ to the function name, and follows the
function name with @x, where x is the number (in bytes) of arguments passed to the
function.
Many compilers still produce a stack frame for FASTCALL functions, especially in
situations where the FASTCALL function itself calls another subroutine. However, if a
FASTCALL function doesn't need a stack frame, optimizing compilers are free to omit it.
5. Real-time operating system (RTOS): Components, Types, Examples

A real-time operating system (RTOS) is an operating system intended to serve real-time
applications that process data as it comes in, mostly without buffer delay. The full form of
RTOS is Real-time operating system.
In an RTOS, processing time requirements are measured in tenths-of-seconds increments of
time. It is a time-bound system that operates under fixed time constraints. In this type of
system, processing must be done inside the specified constraints; otherwise, the system will
fail.
Use of RTOS
Here are important reasons for using RTOS:

● It offers priority-based scheduling, which allows you to separate time-critical processing
from non-critical processing.
● The Real time OS provides API functions that allow cleaner and smaller application
code.
● Abstracting timing dependencies and the task-based design results in fewer
interdependencies between modules.
● RTOS offers modular task-based development, which allows modular task-based
testing.
● The task-based API encourages modular development, as a task will typically have a
clearly defined role. It allows designers/teams to work independently on their parts of
the project.
● An RTOS is event-driven: no processing time is wasted on events that have not
occurred.

Components of RTOS

The Scheduler: This component of an RTOS decides the order in which tasks are executed,
which is generally based on priority.

Symmetric Multiprocessing (SMP): The ability of the RTOS to handle a number of different
tasks on multiple processors so that parallel processing can be done.

Function Library: An important element of an RTOS that acts as an interface between kernel
and application code. The application sends requests to the kernel through this function
library so that it can deliver the desired results.

Memory Management: This element is needed in the system to allocate memory to every
program; it is one of the most important elements of an RTOS.

Fast dispatch latency: The interval between the time the OS recognizes that a task has
terminated and the time the next thread in the ready queue actually starts executing.

User-defined data objects and classes: An RTOS makes use of programming languages like C
or C++, whose data should be organized according to the operations performed on it.
Types of RTOS
Three types of RTOS systems are:

Hard Real Time:

In a hard RTOS, the deadline is handled very strictly: a given task must start executing
at its specified scheduled time and must be completed within the assigned time duration.

Example: Medical critical care system, Aircraft systems, etc.

Firm Real Time:

This type of RTOS also needs to meet deadlines. However, missing a deadline may
not have a big impact, but it could cause undesired effects, such as a huge reduction in the
quality of a product.

Example: Various types of Multimedia applications.

Soft Real Time:

A soft real-time RTOS accepts some delays by the operating system. In this type of
RTOS, a deadline is assigned to a specific job, but a small delay is acceptable. So, deadlines
are handled softly.

Example: Online transaction systems and live stock-price quotation systems.

Features of RTOS
Here are important features of RTOS:

● Occupies very little memory
● Consumes few resources
● Response times are highly predictable
● Copes with unpredictable environments
● The kernel saves the state of the interrupted task and then determines which task it
should run next.
● The kernel restores the state of the task and passes control of the CPU to that task.

Factors for selecting an RTOS


Here are the essential factors that you need to consider when selecting an RTOS:

● Performance: Performance is the most important factor to consider while selecting
an RTOS.
● Middleware: If there is no middleware support in the real-time operating system,
integrating additional processes can become time-consuming.
● Error-free: RTOS systems are error-free; therefore, there is no chance of getting an
error while performing a task.
● Embedded system usage: Programs of RTOS are of small size. So we widely use
RTOS for embedded systems.
● Maximum consumption: Maximum utilization of the system can be achieved with
the help of an RTOS.
● Task shifting: The time taken for shifting between tasks is very short.

● Unique features: A good RTOS should be capable and have some extra features,
such as how it operates to execute a command, efficient protection of the system's
memory, etc.
● 24/7 performance: An RTOS is ideal for applications that need to run 24/7.

Applications of Real Time Operating System


Real-time systems are used in:

● Airline reservation systems.
● Air traffic control systems.
● Systems that provide immediate updating.
● Any system that provides up-to-date, minute-by-minute information on stock prices.
● Defense application systems like RADAR.
● Networked Multimedia Systems
● Command Control Systems
● Internet Telephony
● Anti-lock Brake Systems
● Heart Pacemaker

Disadvantages of RTOS
Here are the drawbacks/cons of using an RTOS:

● An RTOS can run only a minimal number of tasks together; it concentrates on a few
applications at a time so that errors can be avoided.
● An RTOS is a system that concentrates on a few tasks. Therefore, it is really hard for
these systems to do multi-tasking.
● Specific drivers are required for the RTOS so that it can offer fast response time to
interrupt signals, which helps to maintain its speed.
● Plenty of resources are used by RTOS, which makes this system expensive.

● Tasks that have a low priority may need to wait for a long time, as the RTOS
maintains the accuracy of the programs currently under execution.
● Minimal switching between tasks is done in real-time operating systems.
● It uses complex algorithms which are difficult to understand.
● An RTOS uses a lot of resources, which is sometimes not suitable for the system.

6. SCHEDULING POLICIES

PRIORITY-BASED SCHEDULING
Now that we have a context switching mechanism, we have to determine an algorithm
by which to assign priorities to processes. After assigning priorities, the OS
takes care of the rest by choosing the highest-priority ready process. There are two major
ways to assign priorities: static priorities that do not change during execution and dynamic
priorities that do change. We will look at examples of each in this section.
Rate-Monotonic Scheduling
Rate-monotonic scheduling (RMS), introduced by Liu and Layland [Liu73], was one
of the first scheduling policies developed for real-time systems and is still very widely used.
RMS is a static scheduling policy: it assigns fixed priorities to processes. It turns out that
these fixed priorities are sufficient to efficiently schedule the processes in many situations.
The theory underlying RMS is known as rate-monotonic analysis (RMA). This theory, as
summarized below, uses a relatively simple model of the system.
■ All processes run periodically on a single CPU.
■ Context switching time is ignored.
■ There are no data dependencies between processes.
■ The execution time for a process is constant.
■ All deadlines are at the ends of their periods.
■ The highest-priority ready process is always selected for execution.
The major result of RMA is that a relatively simple scheduling policy is optimal under certain
conditions. Priorities are assigned by rank order of period, with the process with the shortest
period being assigned the highest priority. This fixed-priority scheduling policy is the
optimum assignment of static priorities to processes, in that it provides the highest CPU
utilization while ensuring that all processes meet their deadlines. Example 6.3 illustrates
RMS.
If we give higher priority to P1, critical-instant analysis tells us that we must be able to
execute all of P1 and all of P2 in one of P2's periods in the worst case: C1⌈T2/T1⌉ + C2 ≤ T2,
where Ci is the execution time and Ti the period of process Pi. If, on the other hand, we give
higher priority to P2, then critical-instant analysis tells us that we must execute all of P2 and
all of P1 in one of P1's periods in the worst case: C1 + C2 ≤ T1.
There are cases where the first relationship can be satisfied and the second cannot, but there
are no cases where the second relationship can be satisfied and the first cannot. We can
inductively show that the process with the shorter period should always be given higher
priority for process sets of arbitrary size. It is also possible to prove that RMS always
provides a feasible schedule if such a schedule exists.
The bad news is that, although RMS is the optimal static-priority schedule, it does not always
allow the system to use 100% of the available CPU cycles. In the RMS framework, the total
CPU utilization for a set of n tasks is U = Σ(i=1..n) Ci/Ti, where Ci/Ti is the fraction of time
that the CPU spends executing task i.
It is possible to show that for a set of two tasks under RMS scheduling, the CPU utilization U
will be no greater than 2(2^(1/2) − 1) ≈ 0.83. In other words, the CPU will be idle at least
17% of the time. This idle time is due to the fact that priorities are assigned statically; we will
see in the next section that a more aggressive, dynamic scheduling policy can do better.
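The two-task bound above is a special case of the general Liu–Layland bound n(2^(1/n) − 1), which a schedulability check can apply directly; a minimal sketch, with Ci and Ti following the notation above:

#include <math.h>

/* Sufficient (not necessary) RMS schedulability test:
   U = sum of Ci/Ti must not exceed n * (2^(1/n) - 1).
   For n = 2 the bound is 2 * (sqrt(2) - 1), roughly 0.83. */
int rms_schedulable(const double C[], const double T[], int n)
{
    double U = 0.0;
    int i;
    for (i = 0; i < n; i++)
        U += C[i] / T[i]; /* fraction of CPU time spent on task i */
    return U <= n * (pow(2.0, 1.0 / n) - 1.0);
}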
Earliest-Deadline-First Scheduling
Earliest deadline first (EDF) is another well-known scheduling policy that was also
studied by Liu and Layland [Liu73]. It is a dynamic priority scheme: it changes process
priorities during execution based on initiation times. As a result, it can achieve higher CPU
utilizations than RMS. The EDF policy is also very simple: It assigns priorities in order of
deadline. The highest-priority process is the one whose deadline is nearest in time, and the
lowest-priority process is the one whose deadline is farthest away. Clearly, priorities must be
recalculated at every completion of a process. However, the final step of the OS during the
scheduling procedure is the same as for RMS: the highest-priority ready process is chosen
for execution.
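The EDF selection step can be sketched as a scan for the nearest absolute deadline among the ready tasks; the task-table layout here is an assumption:

struct task {
    int ready;             /* nonzero if the task can run */
    unsigned int deadline; /* absolute deadline, in timer ticks */
};

/* Return the index of the ready task with the earliest deadline, or -1 if none.
   Unsigned subtraction keeps the comparison valid across tick-counter wraparound. */
int edf_pick(const struct task tasks[], int n, unsigned int now)
{
    int i, best = -1;
    for (i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best < 0 || tasks[i].deadline - now < tasks[best].deadline - now)
            best = i;
    }
    return best;
}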
Example 6.4 illustrates EDF scheduling in practice. In some applications, it may be acceptable
for some processes to occasionally miss deadlines. For example, a set-top box for video
decoding is not a safety-critical application, and the occasional display artifacts caused by
missing deadlines may be acceptable in some markets. What if your set of processes is
unschedulable and you need to guarantee that they meet their deadlines? There are several
possible ways to solve this problem:
■ Get a faster CPU. That will reduce execution times without changing the periods, giving
you lower utilization. This will require you to redesign the hardware, but this is often feasible
because you are rarely using the fastest CPU available.
■ Redesign the processes to take less execution time. This requires knowledge of the code
and may or may not be possible.
■ Rewrite the specification to change the deadlines. This is unlikely to be feasible, but it may
be in a few cases where some of the deadlines were initially made tighter than necessary.
