Design and Verification of Digital Systems
[Figure: the digital design flow. Specifications are refined by the design team into a Register Transfer Level description (illustrated in the figure by a Verilog adder module), which is verified against the specifications and synthesized into a gate-level description. Technology mapping and place & route produce the IC layout, fabrication yields the silicon die, and testing and packaging deliver the packaged die.]
The next design phase consists of the Synthesis and Optimization of the
RTL design. The overall result of this phase is a detailed model of the circuit,
optimized based on the design constraints. For instance, a design could be
optimized for power consumption, for the size of its final realization (IC area),
or for the testability of the final product. The detailed model produced at this
point describes the design in terms of its basic logic components, such as AND,
OR, NOT or XOR gates, in addition to memory elements. Optimizing this
netlist, or gate-level description, for constraints such as timing and power
requirements is an increasingly challenging aspect of current developments,
and it usually involves multiple trial-and-error iterations before reaching a
solution that satisfies the requirements. Such optimizations may, in turn,
introduce functional errors that require additional RTL verification.
All the design phases, up to this point, have minimal support from Computer-
Aided Design (CAD) software tools and are almost entirely hand-crafted by the
design and verification team. Consequently, they absorb a preponderant frac-
tion of the time and cost involved in developing a digital system. Starting with
synthesis and optimization, most of the activities are semi-automatic or at least
heavily supported by CAD tools. Automating the RTL verification phase is
the next challenge that the CAD industry is facing in providing full support for
digital systems development.
The synthesized model needs to be verified. The objective of RTL versus
gates verification, or equivalence checking, is to guarantee that no errors have
been introduced during the synthesis phase. It is an automatic activity, re-
quiring minimal human interaction, that compares the pre-synthesis RTL de-
scription to the post-synthesis gate-level description in order to guarantee the
functional equivalence of the two models.
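In spirit, an equivalence checker must establish that the pre- and post-synthesis models compute the same function for every possible input. The following Python sketch conveys the idea on a deliberately tiny example; the two adder models and their bit widths are hypothetical stand-ins, and a real tool would compare the models symbolically rather than by enumeration:

from itertools import product

# Hypothetical behavioral ("RTL-level") model: a 2-bit adder.
def rtl_adder(a, b):
    return (a + b) & 0b11               # 2-bit sum, carry-out discarded

# Hypothetical gate-level model of the same adder, written as
# explicit XOR/AND operations on individual bits.
def gate_adder(a, b):
    a0, a1 = a & 1, (a >> 1) & 1
    b0, b1 = b & 1, (b >> 1) & 1
    s0 = a0 ^ b0                        # half adder on bit 0
    c0 = a0 & b0
    s1 = a1 ^ b1 ^ c0                   # sum bit 1 with carry-in
    return (s1 << 1) | s0

# Exhaustive comparison: only feasible for tiny input spaces, but it
# states exactly what equivalence checking must guarantee.
for a, b in product(range(4), repeat=2):
    assert rtl_adder(a, b) == gate_adder(a, b), (a, b)
print("models agree on all inputs")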
At this point, it is possible to proceed to technology mapping and placement
and routing. The result is a description of the circuit in terms of the geometric
layout used for the fabrication process. Finally, the design is fabricated, and
the microchips are tested and packaged.
This design flow is, of course, an idealized, conceptual case. For instance,
usually there are many iterations of synthesis, due to changes in the specifi-
cation or to the discovery of flaws during RTL verification. Each of the new
synthesized versions of the design needs to be put again through all of the
subsequent phases. One of the main challenges faced by design teams, for
instance, is satisfying the ever-increasing market pressure to produce digital
systems with better and better performance. These challenging specifications
force engineering teams to push the limits of their designs by optimizing them
at every level: architectural, component (optimizing library choice and sizing),
placement and routing. Achieving timing closure, that is, developing a design
that satisfies the timing constraints set in the specifications while still operating
correctly and reliably, most often requires optimizations that go beyond the
abilities of automatic synthesis tools and pushes engineers to intervene man-
ually, at least in critical portions of the design. Often, it is only possible to
check if a design has met the specification requirements after the final layout
has been produced. If these requirements are not met, the engineering team
must devise alternative optimizations or architectural changes and create a new
design model that must be put through the complete design flow all over again.
During module-level validation, each module is simulated with dedicated test
patterns and its outputs are checked against the behavior that is expected. The
design of these tests is generally very time consuming, since each of them has
to be handcrafted by the verification engineering team. Moreover, their
reusability is very limited because they are specific to each module.
Recently, a few CAD tools have become available to support functional val-
idation, meaning that they mainly provide more powerful and compact language
primitives to describe the test patterns and to check the outputs of the module,
thereby saving some test development time [HKM01, HMN01, Ber03a].
During chip-level validation, the design is verified as a whole. Often, this is
done after sufficient confidence is obtained regarding the correctness of each
single module. The focus is mainly in verifying the proper interaction between
modules. This phase, while more computationally intensive, has the advantage
of being carried on in a semi-automatic fashion. In fact, input test patterns
are often randomly generated, with the only constraint being that they must
be compatible with what the specification document defines to be the proper
input format for the design. During chip-level validation, it is usually possible
to use a golden model for verification. That is, run in parallel the simulation
of both the RTL and a high-level description of the design, and check that the
outputs of the two systems and the values stored in their memory elements
match one-to-one at the end of each clock cycle (this is called lock-step).
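As an illustration of lock-step checking, the sketch below simulates a hypothetical 3-bit counter twice, once as a high-level golden model and once as a bit-level model standing in for the RTL, under the same randomly generated stimulus; all names are invented for the example:

import random

# Golden high-level model of a hypothetical 3-bit counter.
def golden_step(value, count, reset):
    return 0 if reset else (value + count) % 8

# Bit-level model standing in for the RTL description.
def rtl_step(bits, count, reset):
    value = bits[0] * 4 + bits[1] * 2 + bits[2]
    value = 0 if reset else (value + count) % 8
    return [(value >> 2) & 1, (value >> 1) & 1, value & 1]

golden, rtl = 0, [0, 0, 0]
for cycle in range(1000):
    count = random.randint(0, 1)                  # random but legal inputs
    reset = int(random.random() < 0.1)
    golden = golden_step(golden, count, reset)
    rtl = rtl_step(rtl, count, reset)
    # lock-step check: the states must match at the end of every cycle
    assert golden == rtl[0] * 4 + rtl[1] * 2 + rtl[2], cycle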
The quality of all these verification efforts is usually evaluated analytically
in terms of coverage: a measure of the fraction of the design that has been
verified [KN96, LMUZ02]. Functional validation can provide only partial
coverage because of its inherently incomplete approach. The objective,
therefore, is to maximize coverage for the design under test.
Various measures of coverage are in use: for instance, line coverage counts
the lines of the RTL description that have been activated during simulation.
Another common metric is state coverage, which measures the number of all
the possible configurations of a design that have been simulated (i.e., validated).
This measure is particularly valuable when an estimate of the total state space
of the design is available. In this situation the designer can use state coverage
to quantify the fraction of the design that has been verified.
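A minimal sketch of how state coverage might be computed for the hypothetical 3-bit counter used above, assuming the total state space is estimated as all configurations of the three state bits:

import random

visited = set()
state = 0
for _ in range(50):                    # random simulation run
    visited.add(state)                 # record each state reached
    state = (state + random.randint(0, 1)) % 8

total = 2 ** 3                         # estimated total state space
print("state coverage: %d/%d = %.0f%%"
      % (len(visited), total, 100.0 * len(visited) / total))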
With the increasing complexity of industrial designs, the fraction of design
space that the functional validation approach can explore is becoming vanish-
ingly small, indicating more and more that it is an inadequate solution to the
verification problem. Since only one state and one input combination of the
design under test are visited during each step of simulation, it is obvious that
neither of the above approaches can keep up with the exponential growth in
circuit complexity¹.
l ~ h state
e space of a system doubles for each additional state bit added. Since, as we discussed earlier,
the area available doubles every 18 months, and assuming that a fixed fraction of this area is dedicated to
memory elements, the overall complexity growth is exponential.
[Figure: simulation-based validation. A testbench (illustrated by a Verilog clockgen module that initializes clk to 0 and toggles it every 5 time units) drives the design under test; the resulting executions produce output traces to be checked.]
By computing the cofactors w.r.t. (with respect to) c for the function used in
the previous example, we can easily find that F_c = F_{c̄} = a + b. Here is a formal
definition of support:

Definition 2.2. Let F : Bⁿ → B denote a non-constant Boolean function of n
variables x₁, ..., xₙ. We say that F depends on xᵢ if F_{xᵢ} ≠ F_{x̄ᵢ}. We call the support
of F, indicated by S(F), the set of Boolean variables F depends on. In the most
general case a function is a vector function F : Bⁿ → Bᵐ, whose range R(F) ⊆ Bᵐ
is the set of output values it can produce.
For scalar functions the range reduces to R(F) = B for all except the two
constant functions 0 and 1.
A special class of functions that will be used frequently is that of charac-
teristic functions. Characteristic functions are scalar functions that represent
sets implicitly - they are asserted if and only if their input value belongs to the
set represented. Characteristic functions can be used, for instance, to describe
implicitly all the states of a system that have been explored by a symbolic
technique.
Definition 2.4. Given a set V ⊆ Bⁿ, whose elements are Boolean vectors, its
characteristic function χ_V(x) : Bⁿ → B is defined as:

χ_V(x) = 1 when x ∈ V, and χ_V(x) = 0 otherwise.
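For instance, the characteristic function of a small explicit set can be sketched as below; a symbolic technique would represent χ_V as a Boolean formula (typically a BDD) rather than enumerating V:

# The set of one-hot configurations of three bits, and its
# characteristic function (here backed by an explicit set).
V = {(0, 0, 1), (0, 1, 0), (1, 0, 0)}

def chi_V(x):
    return 1 if tuple(x) in V else 0

assert chi_V((0, 1, 0)) == 1           # member of V
assert chi_V((1, 1, 0)) == 0           # not a member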
Definition 2.5. Given two functions F(x₁, ..., xₙ) and Xᵢ(y₁, ..., yₙ), the func-
tion composition F ∘ Xᵢ is the function obtained by replacing each occurrence
of the variable xᵢ in F with the function Xᵢ.
with a Boolean variable xᵢ and has two out-edges labeled 0 and 1. Each non-
sink node represents the Boolean function x̄ᵢ·F₀ + xᵢ·F₁, where F₀ and F₁ are the
cofactors w.r.t. xᵢ and are represented by the BDDs rooted at the 0 and 1 edges,
respectively.
Moreover, a BDD satisfies two additional constraints:
1 There is a complete (but otherwise arbitrary) ordering of the input vari-
ables. Every path from source to sink in the BDD visits the input variables
according to this ordering.
2 Each node represents a distinct logic function, that is, there is no duplicate
representation of the same function.
A common optimization in implementing BDDs is the use of complement
edges [BRB90]. A complement edge indicates that the connected function is to
be interpreted as the complement of the ordinary function. When using com-
plement edges, BDDs have only one sink node "1", whereas the sink node "0"
is represented as the complement of "1". Boolean operations can be easily im-
plemented as graph algorithms on the BDD data structure by simple recursive
routines, making Boolean function manipulation straightforward when using a
BDD representation.
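The sketch below shows the flavor of such recursive routines on a bare-bones reduced, ordered BDD (without the complement edges or memoization a real package such as [BRB90] would use); the representation and helper names are invented for the example:

# Nodes are (var, low, high) triples; the sinks are the integers 0 and 1.
unique = {}                             # unique table: one node per function

def mk(var, low, high):
    if low == high:                     # redundant test: collapse the node
        return low
    return unique.setdefault((var, low, high), (var, low, high))

def bdd_var(i):                         # BDD for the single variable x_i
    return mk(i, 0, 1)

def bdd_apply(op, f, g):
    """Combine two BDDs under a Boolean operator by recursing on the
    topmost variable (variables are ordered by their index)."""
    if f in (0, 1) and g in (0, 1):
        return op(f, g)
    fv = f[0] if f not in (0, 1) else float("inf")
    gv = g[0] if g not in (0, 1) else float("inf")
    v = min(fv, gv)
    f0, f1 = (f[1], f[2]) if fv == v else (f, f)    # cofactors w.r.t. x_v
    g0, g1 = (g[1], g[2]) if gv == v else (g, g)
    return mk(v, bdd_apply(op, f0, g0), bdd_apply(op, f1, g1))

# Example: build the BDD for x0 AND (x1 OR x2).
f = bdd_apply(lambda a, b: a & b, bdd_var(0),
              bdd_apply(lambda a, b: a | b, bdd_var(1), bdd_var(2)))

Without a computed table memoizing intermediate results, this recursion can revisit the same subproblem many times; real packages add one to keep each operation polynomial in the sizes of the operand BDDs.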
A critical aspect that contributes to the wide acceptance of BDDs for repre-
senting Boolean functions is that, in most applications, the amount of memory
required for BDDs remains manageable. The number of nodes that are part
of a BDD, also called the BDD size, is proportional to the amount of mem-
ory required, and thus the peak BDD size is a commonly used measure to
estimate the amount of memory required by a specific computation involving
Boolean expressions. However, the variable order chosen may affect the size
of a BDD. It has been shown that, for some types of functions, the size of a BDD
can vary from linear to exponential depending on the variable order. Because of
its impact, much research has been devoted to finding algorithms that can pro-
vide a good variable order. While finding the optimal order is an intractable
problem, many heuristics have been suggested that find sufficiently good or-
ders, from static approaches based on the underlying logic network structure
in [MWBSV88, FFK88], to dynamic techniques that change the variable order
whenever the size of the BDD grows beyond a threshold [Rud93, BLW95].
Moreover, much research work has been dedicated to the investigation of al-
ternative representations to BDDs. For instance, BMDs [BC01] target the repre-
sentation of circuits with multiplicative cores, Zero-Suppressed BDDs [Min93]
are suitable for the representation of sets, and MTBDDs [FMY97] can repre-
sent multi-valued functions. An example application which uses MTBDDs is
presented in Chapter 5.
Binary decision diagrams are used extensively in symbolic simulation. The
most critical drawback of this method is its high demand on memory resources,
which are mostly used for BDD representation and manipulation. This book
discusses recent techniques that transform the Boolean functions involved in
symbolic simulations through parametrization and approximation. The objec-
tive of parametrization is to generate new functions that have a more compact
BDD representation, while preserving the results of the original sym-
bolic exploration. The reduced size of the BDDs involved translates into a lower
demand for memory resources, and thus increases the size of IC designs that
can be effectively tackled by this formal verification approach.
[Figure: waveforms for the counter's inputs (reset, count) and its state value.]
Example 2.4. Figure 2.8 represents the finite state machine for a 3-bit counter,
one-hot encoded. Notice that even if the state is encoded using three bits, only
the three configurations 001, 010, 100 are possible for the circuit. Such con-
figurations are said to be reachable from the initial state. The remaining five
configurations 000, 011, 101, 110, 111 are said to be unreachable, since the cir-
cuit will never be in any of these states during normal operation.
The definition above is for a Mealy-type FSM. For a Moore-type FSM, the
output function h simplifies to h : S → O, where S ⊆ Bⁿ and O ⊆ Bᵖ.
Example 2.5. The mathematical description of the FSM of Example 2.4 is the
following:

I = {count, reset},
O = {x0, x1, x2},
S = {001, 010, 100},
While the state diagram representation is often much more intuitive, the
mathematical model gives us a means of building a formal description of an
FSM or, equivalently, of the behavior of a sequential system. The formal math-
ematical description is also much more compact, making it possible to describe
even very complex systems for which a state diagram would be unmanageable.
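As a sketch, the mathematical model of Example 2.5 can be written out directly in code, with the next-state function assumed to advance the one-hot state on count and return to 001 on reset:

# States of the one-hot counter and an assumed next-state function.
S0, S1, S2 = "001", "010", "100"

def delta(state, count, reset):
    if reset:
        return S0                      # reset dominates
    if not count:
        return state                   # hold the current value
    return {S0: S1, S1: S2, S2: S0}[state]

state = S0
for count, reset in [(1, 0), (1, 0), (1, 0), (0, 1)]:
    state = delta(state, count, reset)
print(state)                           # "001": back to the initial state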
[Figure: a logic network with inputs reset and count, annotated with level and gate numbers.]
Each net has been assigned a level number (italicized in the graphic) based on
its distance from the inputs of the design. Subsequently, gates have been
numbered sequentially (gate numbers are in boldface), in a way compatible
with this partial order. From this diagram it is possible to write the
corresponding assembly block:
Note that there is a one-to-one correspondence between each instruction in
the assembly block and each gate in the logic network.
The assembly compiler can then take care of mapping the virtual registers
of the source code to the set of physical registers available on the specific sim-
ulating host. Gates with more than two inputs can be easily handled by composing
their functionality through multiple operations. For instance, with reference to Ex-
ample 2.6, the 3-input XNOR of gate 7 can be translated as:
7.    r7tmp = XOR(x2, x1)
7bis. r7 = XNOR(r7tmp, x0)
At this point, simulation is performed by providing an input test vector, ex-
ecuting the assembly block, and reading the output values computed. Such
output values can be written to a separate file to be further inspected later to
verify the correctness of the results. Figure 2.10 shows an outline of the al-
gorithm, where the core loop includes the assembly code generated from the
network.
Logic_Simulator(network_model)
{
    assign(present_state_signals, reset_state_pattern);
    while (input_pattern != empty)
    {
        assign(input_signals, input_pattern);
        CIRCUIT_ASSEMBLY;
        output_values = read(output_signals);
        state_values = read(next_state_signals);
        write_simulation_output(output_values);
        assign(present_state_signals, state_values);
        next input_pattern;
    }
}

Figure 2.10: Pseudo-code for a cycle-based logic simulator
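A runnable Python rendering of this algorithm is sketched below for the 3-bit one-hot counter; the straight-line logic standing in for the compiled CIRCUIT_ASSEMBLY block is an assumed netlist, not one taken from the book:

def logic_simulator(input_patterns):
    ps = (0, 0, 1)                      # present state <- reset pattern
    trace = []
    for count, reset in input_patterns: # while input patterns remain
        s2, s1, s0 = ps
        nrst, ncnt = reset ^ 1, count ^ 1
        # CIRCUIT_ASSEMBLY: next-state logic as straight-line gate ops
        n0 = reset | (nrst & ((count & s2) | (ncnt & s0)))
        n1 = nrst & ((count & s0) | (ncnt & s1))
        n2 = nrst & ((count & s1) | (ncnt & s2))
        trace.append((s2, s1, s0))      # write simulation output
        ps = (n2, n1, n0)               # present state <- next state
    return trace

# Three counts then a reset: 001 -> 010 -> 100 -> 001.
print(logic_simulator([(1, 0), (1, 0), (1, 0), (0, 1)]))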
Functional validation based on logic simulation exercises only the behaviors
triggered by the supplied test patterns and can, therefore, potentially miss subtle
design errors that might only surface under particular sets of rare conditions.
where δ_k represents the transition function for the k-th bit. As can be
imagined, the transition relation can be represented by a corresponding char-
acteristic function (see Definition 2.4) χ_TR, which equals 1 when TR(s, s′)
holds true.
Finally, the image of a pair (M, R) can be defined using characteristic func-
tions. Given a set of states R with characteristic function χ_R, its image under
the transition relation TR is the set Img having the characteristic function:

χ_Img(s′) = ∃s (χ_TR(s, s′) · χ_R(s))
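In explicit form, and reusing the one-hot counter for concreteness, the image computation amounts to the set comprehension below; a symbolic implementation would instead perform the conjunction and the existential quantification directly on the BDDs of χ_TR and χ_R:

# One-hot counting transition relation: 001 -> 010 -> 100 -> 001.
NEXT = {0b001: 0b010, 0b010: 0b100, 0b100: 0b001}

def chi_TR(s, s_next):
    return s in NEXT and s_next == NEXT[s]

def image(R):
    # chi_Img(s') = exists s . chi_TR(s, s') AND chi_R(s)
    return {sp for sp in range(8) for s in range(8)
            if s in R and chi_TR(s, sp)}

print(image({0b001, 0b010}))           # {2, 4}: the states 010 and 100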
2.8 Summary
This chapter presented an overview of the design and verification flow in-
volved in the development of a digital integrated circuit. It discussed the main
techniques used to verify such circuits, namely functional validation (by means
of logic simulation) and formal verification, using a range of techniques.
We reviewed basic concepts and representations for Boolean functions and
for sequential systems, and described how a logic simulator works. The mod-
els discussed in the earlier sections will be needed to present all the main tech-
niques in the later chapters. The last part of the chapter skimmed over a range
of formal verification techniques and gave a sense of this methodology through
the presentation of symbolic FSM traversal. The next chapter covers another
technique, symbolic simulation, in greater detail and draws the similarities
between it and reachability analysis.
References
[AGL+95] Aharon Aharon, Dave Goodman, Moshe Levinger, Yossi Lichtenstein, Yossi
Malka, Charlotte Metzger, Moshe Molcho, and Gil Shurek. Test program
generation for functional verification of PowerPC processors in IBM. In DAC,
Proceedings of Design Automation Conference, pages 279-285, June 1995.
[BC01] Randal E. Bryant and Yirng-An Chen. Verification of arithmetic circuits us-
ing binary moment diagrams. International Journal on Software Tools for
Technology Transfer, 3(2):137-155, 2001.
[BCL+94] Jerry R. Burch, Edward M. Clarke, David E. Long, Ken L. McMillan, and
David L. Dill. Symbolic model checking for sequential circuit verification.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and
Systems, 13(4):401-424, 1994.
[BCRR87] Zeev Barzilai, J. Lawrence Carter, Barry K. Rosen, and Joseph D. Rutledge.
HSS - a high-speed simulator. IEEE Transactions on Computer-Aided Design
of Integrated Circuits and Systems, pages 601-617, July 1987.
[BL92] Jerry R. Burch and David E. Long. Efficient Boolean function matching.
In ICCAD, Proceedings of the International Conference on Computer Aided
Design, pages 408-411, November 1992.
[BLW95] Beate Bollig, Martin Lobbing, and Ingo Wegener. Simulated annealing to
improve variable orderings for OBDDs. In International Workshop on Logic
Synthesis, pages 5.1-5.10, May 1995.
[BRB90] Karl Brace, Richard Rudell, and Randal E. Bryant. Efficient implementation
of a BDD package. In DAC, Proceedings of Design Automation Conference,
pages 40-45, 1990.
[FMY97] Masahiro Fujita, Patrick McGeer, and Jerry Yang. Multi-terminal binary de-
cision diagrams: An efficient data structure for matrix representation. Formal
Methods in System Design, 10(2-3):149-169, 1997.
[HKM01] Faisal I. Haque, Khizar A. Khan, and Jonathan Michelson. The Art of Verifi-
cation with Vera. Verification Central, 2001.
[HMN01] Yoav Hollander, Matthew Morley, and Amos Noy. The e language: A fresh
separation of concerns. In Technology of Object-Oriented Languages and
Systems, volume TOOLS-38, pages 41-50, March 2001.