
UNIT 5

EMBEDDED SYSTEM DEVELOPMENT


The integrated development environment

 The most important characteristic of embedded system development is the cross-platform development technique.
 The primary components in the development environment are the host system, the target system and the connectivity solutions between the host and the target embedded system.
 The development tools offered by the host system are the cross-compiler, linker and source-level debugger.
 The target embedded system offers a dynamic loader, a link loader, a monitor and a debug agent.
 A set of connections is required between the host computer and the target system.
 These connections are used for transmitting debugger information between the host debugger and the target debug agent.
IDE consists of:

 Text editor or source code editor
 A compiler and an interpreter
 Build automation tools
 Debugger
 Simulators
 Emulators and logic analyzers
An example of an IDE is Turbo C/C++, which provides a platform on Windows for developing application programs with a command-line interface. The other category of IDE is the visual IDE, which provides a visual development environment, e.g. Microsoft Visual C++.
IDEs used for embedded firmware are slightly different from the generic IDEs used for high-level-language development of desktop applications.

In embedded applications, the IDE is supplied either by the target processor/controller manufacturer, by third-party vendors, or as open source.

Types of files generated on cross compilation

The hardware components within an embedded system can only directly transmit, store, and execute machine code, a basic language consisting of ones and zeros. Machine code was used in the early days to program computer systems, which made creating any complex application a long and tedious ordeal. To make programming more efficient, machine code was made visible to programmers through the creation of a hardware-specific set of instructions, where each instruction corresponded to one or more machine code operations. These hardware-specific sets of instructions were referred to as assembly language.

Over time, other programming languages, such as C, C++ and Java, evolved with instruction sets that were (among other things) more hardware-independent. These are commonly referred to as high-level languages because they are semantically further away from machine code, more closely resemble human languages, and are typically independent of the hardware. This is in contrast to a low-level language, such as assembly language, which closely resembles machine code. Unlike high-level languages, low-level languages are hardware dependent, meaning there is a unique instruction set for processors with different architectures. The table outlines this evolution of programming languages. Because machine code is the only language the hardware can directly execute, all other languages need some mechanism to generate the corresponding machine code. This mechanism usually includes one or some combination of preprocessing, translation, and interpretation. Depending on the language, these mechanisms exist on the programmer's host system (typically a non-embedded development system, such as a PC or Sparc station) or on the target system (the embedded system being developed). Preprocessing is an optional step that occurs before either the translation or interpretation of source code, and its functionality is commonly implemented by a preprocessor.

The preprocessor's role is to organize and restructure the source code to make translation or interpretation of this code easier. As an example, in languages like C and C++, it is the preprocessor that allows the use of named code fragments, such as macros, which simplify code development by allowing the macro's name to stand in for a fragment of code. The preprocessor then replaces the macro name with the contents of the macro during preprocessing.
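As a rough illustration of this mechanism, the minimal C sketch below shows a named code fragment being expanded by the preprocessor; the register address and macro names are hypothetical and not taken from the text above.

#include <stdint.h>

/* Hypothetical memory-mapped LED register, used only for illustration. */
#define LED_PORT  (*(volatile uint8_t *)0x4000u)
/* Named code fragment (macro): sets bit n of the LED register.         */
#define LED_ON(n) (LED_PORT |= (uint8_t)(1u << (n)))

void blink_once(void)
{
    LED_ON(3);  /* After preprocessing, this line has been replaced by:
                   (*(volatile uint8_t *)0x4000u |= (uint8_t)(1u << (3))); */
}

The compiler never sees the name LED_ON; it only sees the expanded text produced during preprocessing.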

The preprocessor can exist as a separate entity or can be integrated within the translation or interpretation unit. Many languages convert source code, either directly or after it has been preprocessed, through the use of a compiler, a program that generates a particular target language (such as machine code or Java byte code) from the source language. A compiler typically "translates" all of the source code to some target code at one time. As is usually the case in embedded systems, compilers are located on the programmer's host machine and generate target code for hardware platforms that differ from the platform the compiler is actually running on. These compilers are commonly referred to as cross-compilers.

In the case of assembly language, the compiler is simply a specialized cross-compiler referred to as an assembler, and it always generates machine code. Other high-level language compilers are commonly referred to by the language name plus the term "compiler", such as "Java compiler" and "C compiler". High-level language compilers vary widely in terms of what is generated. Some generate machine code, while others generate other high-level code, which then requires what is produced to be run through at least one more compiler or interpreter, as discussed later in this section. Other compilers generate assembly code, which then must be run through an assembler. After all the compilation on the programmer's host machine is completed, the remaining target code file is commonly referred to as an object file, and can contain anything from machine code to Java byte code (discussed later in this section), depending on the programming language used. As shown in the figure, after linking this object file to any required system libraries, the object file, now commonly referred to as an executable, is ready to be transferred to the target embedded system's memory.

Disassemblers and Decompilers

A decompiler represents executable binary files in a readable form. More precisely, it transforms binary code into text that software developers can read and modify. The software security industry relies on this transformation to analyze and validate programs. The analysis is performed on the binary code because the source code (the text form of the software) traditionally is not available, as it is considered a commercial secret.

Programs to transform binary code into text form have always existed. Simple one-to-one mapping of processor instruction codes into instruction mnemonics is performed by disassemblers. Many disassemblers are available on the market, both free and commercial. The most powerful disassembler is IDA Pro, published by Datarescue. It can handle binary code for a huge number of processors and has an open architecture that allows developers to write add-on analytic modules.

Decompilers are different from disassemblers in one very important aspect: while both generate human-readable text, decompilers generate much higher-level text, which is more concise and much easier to read. Compared to a low-level assembly language representation (disassembler output), a high-level language representation has several advantages:

 It is concise.
 It is structured.
 It doesn't require developers to know the assembly language.
 It recognizes and converts low level idioms into high level notions.
 It is less confusing and therefore easier to understand.
 It is less repetitive and less distracting.
 It uses data flow analysis.
Usually the decompiler's output is five to ten times shorter than the disassembler's output. For example, a typical modern program contains from 400 KB to 5 MB of binary code. The disassembler's output for such a program will run to around 5-100 MB of text, which can take anything from several weeks to several months to analyze completely. Analysts cannot spend this much time on a single program for economic reasons. The decompiler's output for a typical program will be from 400 KB to 10 MB. Although this is still a large volume to read and understand (about the size of a thick book), the analysis time is divided by 10 or more.

The second big difference is that the decompiler output is structured. Instead of a linear flow of instructions where each line is similar to all the others, the text is indented to make the program logic explicit. Control flow constructs such as conditional statements, loops, and switches are marked with the appropriate keywords.

The decompiler's output is easier to understand than the disassembler's output because it is high level. To be able to use a disassembler, an analyst must know the target processor's assembly language. Mainstream programmers do not use assembly languages for everyday tasks, but virtually everyone uses high-level languages today. Decompilers remove the gap between typical programming languages and the output language, so more analysts can use a decompiler than a disassembler.

Decompilers convert assembly-level idioms into high-level abstractions. Some idioms can be quite long and time-consuming to analyze. The one-line statement x = y / 2; can be transformed by the compiler into a series of 20-30 processor instructions, and it takes at least 15-30 seconds for an experienced analyst to recognize the pattern and mentally replace it with the original line. If the code includes many such idioms, an analyst is forced to take notes and mark each pattern with a short representation. All this slows down the analysis tremendously. Decompilers remove this burden from the analyst.

The amount of assembler instructions to analyze is huge. They look very similar to each other and their patterns are very repetitive; reading disassembler output is nothing like reading a captivating story. In a compiler-generated program, 95% of the code will be boring to read and analyze. It is extremely easy for an analyst to confuse two similar-looking snippets of code, or simply lose his way in the output. These two factors (the size and the boring nature of the text) lead to the following phenomenon: binary programs are never fully analyzed. Analysts try to locate suspicious parts by using heuristics and automation tools. Exceptions happen only when the program is extremely small or an analyst devotes a disproportionately huge amount of time to the analysis. Decompilers alleviate both problems: their output is shorter and less repetitive. The output still contains some repetition, but it is manageable by a human being. Besides, this repetition can be addressed by automating the analysis.

Repetitive patterns in the binary code call for a solution. One obvious solution is to employ the computer to find patterns and somehow reduce them into something shorter and easier for human analysts to grasp. Some disassemblers (including IDA Pro) provide a means to automate analysis. However, the number of available analytical modules stays low, so repetitive code continues to be a problem. The main reason is that recognizing binary patterns is a surprisingly difficult task. Any "simple" action, including basic arithmetic operations such as addition and subtraction, can be represented in an endless number of ways in binary form. The compiler might use the addition operator for subtraction and vice versa. It can store constant numbers somewhere in memory and load them when needed. It can use the fact that, after some operations, a register value can be proven to be a known constant, and just use the register without reinitializing it. The diversity of these methods explains the small number of available analytical modules.
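To make the x = y / 2; example above concrete, the sketch below shows the kind of instruction sequence an optimizing compiler may emit for a signed division by two. The exact instructions and their count vary with the compiler and target; the x86-style sequence in the comments is just one common form and is an illustrative assumption, not taken from the text.

/* One-line C idiom and a typical lowered form (shown as comments). */
int halve(int y)
{
    int x = y / 2;  /*  mov  eax, y      ; load y                        */
                    /*  mov  edx, eax                                     */
                    /*  shr  edx, 31     ; isolate the sign bit           */
                    /*  add  eax, edx    ; bias negative values so the    */
                    /*                   ; result truncates toward zero   */
                    /*  sar  eax, 1      ; arithmetic shift right by one  */
    return x;       /* A decompiler recognizes this pattern and folds it
                       back into the single expression y / 2.             */
}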

Decompilers

The situation is different with a decompiler. Automation becomes much easier because the decompiler provides the analyst with high-level notions. Many patterns are automatically recognized and replaced with abstract notions. The remaining patterns can be detected easily because of the formalisms the decompiler introduces. For example, the notions of function parameters and calling conventions are strictly formalized. Decompilers make it extremely easy to find the parameters of any function call, even if those parameters are initialized far away from the call instruction. With a disassembler, this is a daunting task which requires handling each case individually. Decompilers, in contrast with disassemblers, perform extensive data flow analysis on the input. This means that questions such as "Where is this variable initialized?" and "Is this variable used?" can be answered immediately, without any extensive search over the function. Analysts routinely pose and answer these questions, and having the answers immediately increases their productivity.

There are two reasons why decompilers are still not commonplace: 1) they are tough to build, because decompilation theory is in its infancy; and 2) decompilers have to make many assumptions about the input file, and some of these assumptions may be wrong. Wrong assumptions lead to incorrect output. In order to be practically useful, decompilers must provide a means to remove incorrect assumptions and be interactive in general. Building interactive applications is more difficult than building offline (batch) applications. In short, these two obstacles make creating a decompiler a difficult endeavor both in theory and in practice. Given all of the above, the Hex-Rays Decompiler was introduced as an analytical tool embodying almost 10 years of proprietary research and implementing many new approaches to the problems discussed above. The highlights of this decompiler are:

 It can handle real-world applications.
 It has both automatic (batch) and interactive modes.
 It is compiler-agnostic to the maximum possible degree.
 Its core does not depend on the processor.
 It has a type system powerful enough to express any C type.
 It has been tested on thousands of files, including huge applications consisting of tens of megabytes.
 It is interactive: analysts may change the output, rename the variables and specify their types.
 It is fast: it decompiles a typical function in under a second.
Simulators, Emulators and Debugging

Leading-edge silicon manufacturing processes make it possible to integrate hundreds of millions of transistors on a single digital signal processor (DSP) operating at frequencies measured in hundreds of megahertz (MHz). However, the vast performance unlocked by these integration levels and speeds comes with a price in visibility and access. Larger, faster DSPs with greater integration levels and large on-chip caches make it difficult for developers to see what is happening inside the chip during operation and to manipulate inputs for testing. Many of today's applications are growing to hundreds of thousands of lines of code; in fact, some developers are reaching one million lines of code today. Development teams are focused on shrinking the time-to-market window by refocusing their attention on increasing the efficiency of the development flow, re-using proven software modules and algorithms, and improving the performance of their application to take advantage of every MIPS on their processor. The lack of visibility and access into the code makes conventional methods of system debugging much more difficult, slowing down development and costing more in resources before a product is released to market. In fact, studies have demonstrated that bugs and bottlenecks found late in the design cycle are harder and more expensive to isolate and fix, and can be the reason for a product missing a critical market window. Both simulators and emulators are available to give developers increased visibility into their code and system performance, even with the most advanced DSPs and microprocessors.

Simulation versus emulation

The roles of simulation and emulation in the development of DSP-based designs can be
confusing, since at a glance they perform similar functions. In simplest terms, the main
difference between simulation and emulation is that simulation is done all in software and
emulation is done in hardware. Probe deeper, however, and the unique characteristics and
compelling benefits of each tool are clear. Together they complement each other to
deliver benefits that either one alone cannot provide.
Traditionally, the work of simulation begins in the very first stages of design, where the
designer uses it to evaluate initial code. Developers use simulators to model the
architecture of complex multi-core systems early in the design process, typically months
before hardware is available. This makes it possible to evaluate various design
configurations without the need for prototype devices. In addition, the simulation
software collects massive amounts of data as the designer runs their core code and makes
different variations to it. The simulation software also makes it possible to determine the most efficient code to use in the application's design by modeling the performance of the DSP and of any peripherals that affect the performance of the code.

However, in the past, the slowness of simulators prevented them from being used extensively. To be effective, simulators must be fast enough to allow the massive data collection needed for complex DSP applications. Because of slow simulators, designers resorted instead to conducting tuning and analysis later in the development cycle, when hardware prototypes are available, a process that results in considerable time and cost penalties. With the introduction of fast simulation technology and data collection tools, developers can gather huge amounts of data in minutes instead of the hours previous or competitive simulators required. Simulators are an important tool in the design and debug process because they can run a simulation identically over and over, which hardware-based evaluations cannot achieve because of changes caused by external events, such as interrupts. They are also extremely flexible: they provide insight into the CPU alone or can be used to model a full system, and they can easily be rescaled and integrated with different memories and peripherals. Since designers are modeling the hardware, they can build features into the model that allow them to extract far more data, enabling some of the advanced analysis capabilities.

Target hardware debugging

 During the development process, a host system is used.
 The code is then located and burned into the target board.
 The target board hardware and software are later copied to produce the final embedded system.
 The final system functions exactly like the one tested, debugged and finalized during the development process.
Host system: a PC, workstation or laptop, with:
 A high-performance processor with caches and large RAM
 ROM BIOS (read-only memory basic input-output system)
 Very large memory on disk
 Keyboard
 Display monitor
 Mouse
 Network connection
 A program development kit for a high-level language, or an IDE
 A host processor compiler and a cross-compiler
 A cross-assembler

Target and final systems

 The target system differs from the final system.
 The target system interfaces with the computer as well as working as a standalone system.
 In the target system there might be repeated downloading of the code during the development phase.
Boundary scan
Board-level boundary scan testing has been around for several years, typically in a lab with a PC-based tester and in manufacturing with ATEs. One main advantage of boundary scan tests is that they are structural in nature and have explicit coverage metrics. Also, using structural design information such as netlists and device pin attributes, test generation tools automatically generate most tests. All this leads to higher quality assurance and reduced test generation time. In an embedded environment, board-level tests are typically functional, targeting a specific system operation or particular hardware modules within that system. These tests' circuit coverage is often difficult to calculate. Moreover, such tests are typically handwritten, specialized cases designed to give a go/no-go status for one of the circuit's functional blocks. Because boundary scan tests are vector based, their application is independent of vector content. This decouples the test data from the control software used to apply the data. Thus, to apply any boundary scan test, engineers only need to write one software module. Using boundary scan in an embedded environment reduces the software development effort necessary to test the unit under test (UUT). Moreover, most manufacturing tests are reusable with little or no modification, thus reducing testing cost for the embedded environment. Boundary scan will never eliminate the need for functional test cases, but the test engineers writing such tests can focus on specific design areas that boundary scan tests do not explicitly cover. The coverage metrics that test generation tools provide for boundary scan tests clearly define these areas. Besides testing, boundary scan can also support in-system configuration (ISC). ISC provides the ability to update the contents of programmable logic devices (PLDs) and programmable memories. For many years, test engineers have used boundary scan during production to program these types of devices using external programming systems. Hence, it is natural to migrate boundary scan's programmability features to embedded systems as well. This capability can help reduce warranty costs for field updates of such devices.
System test architectures: Two primary architectures support embedded boundary scan testing. The first is the serialized chain. This architecture connects all of the system's boundary scan chains as one serialized scan chain. The second, more flexible architecture is the multidrop configuration [1], which Figure 1 shows. This architecture buses the boundary scan chain to each board and uses special logic to connect a board to the test bus at appropriate times. At Lucent Technologies, an extended IEEE Std. 1149.1 (Standard Test Access Port and Boundary-Scan Architecture) multidrop architecture is used for complex systems. This architecture is based on the Texas Instruments addressable scan port (ASP) [2]. ASP allows tests for a system's common circuit boards to be reused more easily than with other technologies, because the selection protocol is separate from the test data. With either the serialized-chain or the multidrop architecture, the system must have an IEEE test access port (TAP) controller. Multidrop systems require independent development of board tests for each board, using commercial tools for board test generation. Test operators then apply these tests to the board when the system selects that board. During board test development, engineers apply external tests to the board with the ASP in pass-through mode.

Embedded software development process and tools


Application programs are typically developed, compiled, and run on the host system. Embedded programs, in contrast, are targeted at a target processor (different from the development/host processor and operating environment) that drives or controls a device. What tools are needed to develop, test, and locate embedded software into the target processor and its operating environment?

Host: where the embedded software is developed, compiled, tested, debugged, and optimized prior to its translation into the target device. (The host has keyboards, editors, monitors, printers, more memory, etc. for development, while the target may have none of these capabilities for developing the software.)

Target: after development, the code is cross-compiled, translated (cross-assembled and linked into the target processor's instruction set) and located into the target.

Cross-compilers: native tools are good for the host, but to port/locate embedded code to the target, the host must have a tool chain that includes a cross-compiler, one which runs on the host but produces code for the target processor. Cross-compiling doesn't guarantee correct target code, due, for example, to differences in word sizes, instruction sizes, variable declarations and library functions.

Cross-assemblers and tool chain: the host uses a cross-assembler to assemble code in the target's instruction syntax for the target. A tool chain is a collection of compatible translation tools which are 'pipelined' to produce a complete binary/machine code that can be linked and located into the target processor.
EMBEDDED SOFTWARE DEVELOPMENT TOOLS
Linker/locators for embedded software: native linkers are different from cross-linkers (or locators), which perform additional tasks to locate embedded binary code into target processors.

Address resolution: a native linker produces host machine code on the hard drive (in a named file), which the loader loads into RAM, and then schedules (under OS control) the program to go to the CPU. In RAM, the application program/code's logical addresses, e.g. for variables/operands and function calls, are ordered or organized by the linker. The loader then maps the logical addresses into physical addresses, a process called address resolution, and loads the code accordingly into RAM. In the process the loader also resolves the addresses for calls to the native OS routines.

A locator produces target machine code (which the locator glues into the RTOS), and the combined code (called a map) gets copied into the target ROM. The locator doesn't stay in the target environment, hence all addresses are resolved, guided by locating tools and directives, prior to running the code.
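A small C sketch of what address resolution means in practice is given below. The symbol names are hypothetical, and the comments describe the typical division of work between a cross-compiler, linker and locator under common assumptions; the sketch is one module of a larger hypothetical program and is not linkable on its own, which is exactly the point being illustrated.

/* module_a.c: this module references symbols it does not define.        */
extern volatile int tick_count;   /* defined in another module; its RAM   */
                                  /* address is fixed only when the       */
                                  /* locator places the data segment      */
void log_event(int code);         /* code address fixed when the code     */
                                  /* segment is located in ROM            */

int seconds_elapsed(void)
{
    log_event(1);                 /* the cross-compiler leaves this call   */
                                  /* target unresolved in the object file; */
                                  /* the linker patches it and the locator */
                                  /* records the final address in the map  */
    return tick_count / 100;      /* hypothetical 100 ticks per second     */
}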
EMBEDDED SOFTWARE DEVELOPMENT TOOLS
Locating program components – segments: the unchanging parts of an embedded program (binary code) and constants must be kept in ROM so that they are remembered even on power-off. Changing program segments (e.g., variables) must be kept in RAM. The tool chain separates program parts using the segments concept. Tool chains for embedded systems also require the 'start-up' code to be in a separate segment, 'located' at a microprocessor-defined location where the program starts execution. Some cross-compilers have defaults, or allow the programmer to specify segments for program parts, but cross-assemblers have no default behavior and the programmer must specify segments for program parts.

Telling/directing the locator where (into which segments) to place parts: the -Z directive tells which segments (a list of segments) to use and the start address of the first segment. The first line tells which segments to use for the code parts, starting at address 0; the second line tells which segments to use for the data parts, starting at 0x8000. The proper names and address information for directing the locator are usually found in the cross-compiler documentation. Other directives specify the range of RAM and ROM addresses and the end-of-stack address (the stack segment is placed below this address so the stack can grow towards its end). Segments/parts can also be grouped, and a group is located as a unit.

Initialized data and constant strings: segments with initialized values in ROM are shadowed (copied into RAM) for correct reset of the initialized variables in RAM each time the system comes up (especially for initial values that take #define constants and which can be changed). In C programs, a host compiler may set all uninitialized variables to zero or null, but this is not generally the case for embedded software cross-compilers (unless the startup code in ROM does so). If part of a constant string is expected to be changed during run time, the cross-compiler must generate code to allow 'shadowing' of the string from ROM.

The output files of locators are maps, which list the addresses of all segments; maps are useful for debugging. An 'advanced' locator is capable of running (albeit slowly) startup code in ROM, which could decompress and load the embedded code from ROM into RAM so that it executes quickly, since RAM is faster, especially for RISC microprocessors.
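The following C sketch shows how program parts typically map onto segments and how initialized data is shadowed. The segment names follow common GCC/ELF conventions (.text, .rodata, .data, .bss) and are an assumption for illustration; the actual segment names and placement directives are toolchain specific.

const char banner[] = "Boot v1.0"; /* constant string  -> ROM (.rodata)           */
int sample_rate = 8000;            /* initialized variable -> initial value held  */
                                   /* in ROM and shadowed (copied) into RAM       */
                                   /* (.data) by the start-up code at each reset  */
int rx_count;                      /* uninitialized variable -> RAM (.bss); set   */
                                   /* to zero only if the start-up code does so   */

void main_loop(void)               /* unchanging code -> ROM (.text)              */
{
    rx_count++;                    /* runtime changes happen only in RAM          */
}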
EMBEDDED SYSTEM IMPLEMENTATION AND TESTING
The Main Software Utility Tool: Writing Code in an Editor or IDE

Source code is typically written with a tool such as a standard ASCII text editor or an Integrated Development Environment (IDE) located on the host (development) platform, as shown in the figure. An IDE is a collection of tools, including an ASCII text editor, integrated into one application user interface. While any ASCII text editor can be used to write any type of code, independent of language and platform, an IDE is specific to the platform and is typically provided by the IDE's vendor, which may be a hardware manufacturer (in a starter kit that bundles the hardware board with tools such as an IDE or text editor), an OS vendor, or a language vendor (Java, C, etc.).

Computer-Aided Design (CAD) and the Hardware:

Computer-Aided Design (CAD) tools are commonly used by hardware engineers to simulate circuits at the electrical level in order to study a circuit's behavior under various conditions before they actually build the circuit.
The figure is a snapshot of a popular standard circuit simulator called PSpice. This circuit simulation software is a variation of another circuit simulator, originally developed at the University of California, Berkeley, called SPICE (Simulation Program with Integrated Circuit Emphasis). PSpice is the PC version of SPICE and is an example of a simulator that can do several types of circuit analysis, such as nonlinear transient, nonlinear dc, linear ac, noise, and distortion, to name a few. As shown in the figure, circuits created in this simulator can be made up of a variety of active and/or passive elements. Many commercially available electrical circuit simulator tools are generally similar to PSpice in terms of their overall purpose, and mainly differ in what analysis can be done, what circuit components can be simulated, or the look and feel of the user interface of the tool.

Because of the importance of and costs associated with designing hardware, there are many industry techniques in which CAD tools are utilized to simulate a circuit. Given a complex set of circuits in a processor or on a board, it is very difficult, if not impossible, to perform a simulation on the whole design, so a hierarchy of simulators and models is typically used. In fact, the use of models is one of the most critical factors in hardware design, regardless of the efficiency or accuracy of the simulator. At the highest level, a behavioral model of the entire circuit is created for both analog and digital circuits, and is used to study the behavior of the entire circuit. This behavioral model can be created with a CAD tool that offers this feature, or can be written in a standard programming language. Then, depending on the type and make-up of the circuit, additional models are created, down to the individual active and passive components of the circuit, as well as for any environmental dependencies (temperature, for example) that the circuit may have. Aside from using a particular method for writing the circuit equations for a specific simulator, such as the tableau approach or the modified nodal method, there are simulation techniques for handling complex circuits that include one or some combination of: dividing more complex circuits into smaller circuits and then combining the results; utilizing special characteristics of certain types of circuits; and utilizing vector, high-speed, and/or parallel computers.

Translation Tools—Preprocessors, Interpreters, Compilers, and Linkers

Translating code was discussed earlier, along with a brief introduction to some of the tools used in translating code, including preprocessors, interpreters, compilers, and linkers. As a review: after the source code has been written, it needs to be translated into machine code, since machine code is the only language the hardware can directly execute. All other languages need development tools that generate the corresponding machine code the hardware will understand. This mechanism usually includes one or some combination of preprocessing, translation, and/or interpretation machine code generation techniques. These mechanisms are implemented within a wide variety of translating development tools. Preprocessing is an optional step that occurs before the translation or interpretation of source code, and its functionality is commonly implemented by a preprocessor. The preprocessor's role is to organize and restructure the source code to make translation or interpretation of this code easier. The preprocessor can be a separate entity, or can be integrated within the translation or interpretation unit.

Many languages convert source code, either directly or after it has been preprocessed, to target code through the use of a compiler, a program that generates some target language, such as machine code or Java byte code, from the source language, such as assembly, C, or Java.

A compiler typically translates all of the source code to a target code at one time. As is usually the case in embedded systems, most compilers are located on the programmer's host machine and generate target code for hardware platforms that differ from the platform the compiler is actually running on. These compilers are commonly referred to as cross-compilers. In the case of assembly, an assembly compiler is a specialized cross-compiler referred to as an assembler, and it always generates machine code. Other high-level language compilers are commonly referred to by the language name plus "compiler" (e.g., Java compiler, C compiler). High-level language compilers can vary widely in terms of what is generated. Some generate machine code, while others generate other high-level languages, which then require what is produced to be run through at least one more compiler. Still other compilers generate assembly code, which then must be run through an assembler. After all the compilation on the programmer's host machine is completed, the remaining target code file is commonly referred to as an object file, and can contain anything from machine code to Java byte code, depending on the programming language used. As shown in the figure, a linker integrates this object file with any other required system libraries, creating what is commonly referred to as an executable binary file, which is either placed directly into the board's memory or is ready to be transferred to the target embedded system's memory by a loader.

Debugging Tools

Aside from creating the architecture, debugging code is probably the most difficult task
of the development cycle. Debugging is primarily the task of locating and fixing errors
within the system. This task is made simpler when the programmer is familiar with the
various types of debugging tools available and how they can be used (the type of
information shown in Table). As seen from some of the descriptions in Table, debugging
tools reside and interconnect in some combination of standalone devices, on the host,
and/or on the target board.
Some of these tools are active debugging tools and are intrusive to the running of the
embedded system, while other debug tools passively capture the operation of the system
with no intrusion as the system is running. Debugging an embedded system usually
requires a combination of these tools in order to address all of the different types of
problems that can arise during the development process.
Quality Assurance and Testing of the Design

Among the goals of testing and assuring the quality of a system are finding bugs within a
design and tracking whether the bugs are fixed. Quality assurance and testing is similar to
debugging, discussed earlier in this chapter, except that the goals of debugging are to
actually fix discovered bugs. Another main difference between debugging and testing the
system is that debugging typically occurs when the developer encounters a problem in
trying to complete a portion of the design, and then typically tests-to-pass the bug fix
(meaning tests only to ensure the system minimally works under normal circumstances).
With testing, on the other hand, bugs are discovered as a result of trying to break the
system, including both testing-to-pass and testing-to-fail, where weaknesses in the system
are probed. Under testing, bugs usually stem from either the system not adhering to the architectural specifications (i.e., behaving in a way it shouldn't according to the documentation, not behaving in a way it should according to the documentation, or behaving in a way not mentioned in the documentation) or the inability to test the system.
The types of bugs encountered in testing depend on the type of testing being done. In
general, testing techniques fall under one of four models: static black box testing, static
white box testing, dynamic black box testing, or dynamic white box testing (see the matrix
in Figure 12-9). Black box testing occurs with a tester that has no visibility into the
internal workings of the system (no schematics, no source code, etc.). Black box testing is
based on general product requirements documentation, as opposed to white box testing
(also referred to as clear box or glass box testing) in which the tester has access to source
code, schematics, and so on. Static testing is done while the system is not running,
whereas dynamic testing is done when the system is running.

Within each of the models, testing can be further broken down to include unit/module testing (incremental testing of individual elements within the system), compatibility testing (testing that an element doesn't cause problems with other elements in the system), integration testing (incremental testing of integrated elements), system testing (testing the entire embedded system with all elements integrated), regression testing (rerunning previously passed tests after system modification), and manufacturing testing (testing to ensure that manufacturing of the system didn't introduce bugs), just to name a few. From these types of tests, an effective set of test cases can be derived that verify that an element and/or system meets the architectural specifications, as well as validate that the element and/or system meets the actual requirements, which may or may not have been reflected correctly, or at all, in the documentation. Once the test cases have been completed and the tests are run, how the results are handled varies by organization, typically ranging from informal approaches, where information is exchanged without any specific process being followed, to formal design reviews: peer reviews, where fellow developers exchange elements to test; walkthroughs, where the responsible engineer formally walks through the schematics and source code; inspections, where someone other than the responsible engineer does the walkthrough; and so on. Specific testing methodologies and templates for test cases, as well as the entire testing process, have been defined in several popular industry quality assurance and testing standards, including the ISO 9000 quality assurance standards, the Capability Maturity Model (CMM), and the ANSI/IEEE 829 Preparation, Running, and Completion of Testing standards.
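As a minimal sketch of the unit/module testing and test-to-pass/test-to-fail ideas above, the host-side C test below exercises a hypothetical checksum module with one normal case and one boundary case; the function, its name, and the expected values are illustrative assumptions, not part of the original text.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical module under test: 8-bit additive checksum. */
static uint8_t checksum8(const uint8_t *buf, int len)
{
    uint8_t sum = 0;
    for (int i = 0; i < len; i++)
        sum = (uint8_t)(sum + buf[i]);
    return sum;
}

int main(void)
{
    const uint8_t msg[] = {1, 2, 3};

    /* Test-to-pass: normal input, expected result. */
    if (checksum8(msg, 3) != 6)
        printf("FAIL: normal message\n");

    /* Test-to-fail: probe a boundary; an empty buffer must not crash
       and should yield a checksum of zero.                           */
    if (checksum8(msg, 0) != 0)
        printf("FAIL: empty buffer\n");

    printf("unit tests finished\n");
    return 0;
}

Such tests run on the host first (unit/module level) and are rerun unchanged as regression tests after system modification.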
Embedded Hardware Tests

 Target hardware debugging
 Testing of the processor
 Testing of external peripherals
 Testing of memory and flash memory
Embedded Software Tests

 Testing of code for the GUIs and HCIs
 Testing of code for the tasks
 Testing of code for decision blocks in the tasks
 Testing of code for the loops
 Testing of code for display
 Testing of code for communication with other computing systems
Testing Steps at Host Machine

1. Initial tests: each module or segment is tested at the initial stage itself, on the host itself.

2. Test data: all possible combinations of data are designed and taken as test data.

3. Exception condition tests: all possible exceptions are exercised by the tests.

4. Tests-1: hardware-independent code.

5. Tests-2: scaffold software – software that runs on the host in place of the target-dependent code, using the same start of code and the same port and device addresses as on the hardware. Instructions are given from a file or from keyboard inputs; outputs go to the host's display and are saved to a file.

6. Tests of the hardware-independent part of interrupt service routines: the sections of the interrupt service routines that are hardware independent are called and tested.

7. Tests of the hardware-dependent part of interrupt service routines.

8. Timer tests: hardware-dependent code timing functions – clock tick set, counts get, counts put, delay.

9. Assert-macro tests: insert code in the program that checks whether a condition or a parameter actually turns out true or false. If it turns out false, the program stops. Use the assert macro at different critical places in the application program (a sketch follows this list).
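A brief sketch of the assert-macro test in step 9 is shown below, using the standard C assert macro; the channel-reading helper and its valid range are hypothetical and only illustrate where such checks might be placed.

#include <assert.h>

/* Hypothetical helper that must only be called with a valid channel. */
static int read_channel(int channel)
{
    assert(channel >= 0 && channel < 8); /* program stops here if false */
    /* ... read and return the sampled value ... */
    return 0;
}

int main(void)
{
    int v = read_channel(3);  /* satisfies the assertion            */
    assert(v >= 0);           /* post-condition at a critical place */
    return 0;
}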

Laboratory Tools

Hardware Diagnostic Laboratory Tools

Volt-Ohm meter

Useful for checking the power supply voltage at the source and the voltage levels at chip power-input pins and port pins (initial levels at start and final levels after the software run); also used for checking broken connections, improper ground connections, and burned-out resistors and diodes.

 Logic Probe
 Oscilloscope
 Logic Analyser
 Bit Rate meter
Use of Logic Probe

 The simplest hardware test device.
 A handheld, pen-like device with LEDs that glow green for '1' and red for '0'.
 An important tool when studying long port-delay effects (>1 s).
 A delay program tests for the presence of system clock ticks.
Uses of Oscilloscope

 A screen displays two signal voltages as a function of time.
 Displays analog as well as digital signals as a function of time.
 A noise detection tool.
 Detects malfunctions such as sudden transitions between '0' and '1' states during a period.
Uses of Logic Analyser

 A powerful hardware tool for checking multiple lines carrying address, data and control bits, I/O buses, ports, peripherals and clocks.
 Recognizes only discrete voltage conditions, '1' and '0'.
 Collects, stores and tracks multiple signals and bus transactions simultaneously and successively.
 Reads multiple input lines (24 or 48) and later displays each transaction observed on each of these lines on a computer monitor (screen).
Use of Bit rate meter

 A measuring device that counts the number of '1's and '0's in preselected time spans.
 Measures throughput.
 One can estimate the '1's and '0's in a test message and then use the bit rate meter to check whether the measurement matches the message.
