
ASIC

Floorplanning

1
Concept MAP

2
ASIC Design Process

• S-1 Design Entry: Schematic entry or HDL description.
• S-2 Logic Synthesis: Using Verilog HDL or VHDL and a synthesis tool, produce a netlist - logic cells and their interconnect detail.
• S-3 System Partitioning: Divide a large system into ASIC-sized pieces.
• S-4 Pre-Layout Simulation: Check design functionality.
• S-5 Floorplanning: Arrange netlist blocks on the chip.
• S-6 Placement: Fix cell locations in a block.
• S-7 Routing: Make the cell and block interconnections.
• S-8 Extraction: Measure the interconnect R/C cost.
• S-9 Post-Layout Simulation.
3
Introduction
• The input to the floorplanning step is the output of system partitioning and design entry - a netlist.
• The netlist describes the circuit blocks, the logic cells within the blocks, and their connections.

4
The starting point of the floorplanning and placement steps for the Viterbi decoder

• A collection of standard cells with no room set aside yet for routing.

• Small boxes that look like bricks are the outlines of the standard cells.

• The largest standard cells, at the bottom of the display (labeled dfctnb), are 188 D flip-flops.

• The '+' symbols are the drawing origins of the standard cells - for the D flip-flops they are shifted to the left of and below the logic cell's bottom left-hand corner.

• The large box surrounding all the logic cells is the estimated chip size.

• (This is a screen shot from Cadence Cell Ensemble.)

6
The Viterbi decoder after floorplanning and placement
7
The Viterbi decoder after floorplanning and placement

• 18 rows of standard cells separated by 17 horizontal channels (labeled 2–18).

• Channels are routed as numbered.

• In this example, the I/O pads are omitted to show the cell placement more clearly.

8
Floorplanning Goals and Objectives
• The input to a floorplanning tool is a hierarchical netlist that describes
– the interconnection of the blocks (RAM, ROM, ALU, cache controller, and so on)
– the logic cells (NAND, NOR, D flip-flop, and so on) within the blocks
– the logic cell connectors (terminals, pins, or ports)

• The netlist is a logical description of the ASIC.
• The floorplan is a physical description of an ASIC.
• Floorplanning is a mapping between the logical description (the
netlist) and the physical description (the floorplan).

The Goals of Floorplanning are to:


• Arrange the blocks on a chip,
• Decide the location of the I/O pads,
• Decide the location and number of the power pads,
• Decide the type of power distribution, and
• Decide the location and type of clock distribution.

Objectives of Floorplanning:
• To minimize the chip area
• To minimize delay.

Measuring area is straightforward, but measuring delay is more complicated.
Measurement of Delay in Floorplanning

• Floorplanning predicts interconnect delay by estimating interconnect length.
10
Measurement of Delay in Floorplanning

11
Measurement of Delay in Floorplanning (contd.,)
• A floorplanning tool can use predicted-capacitance tables (also
known as interconnect-load tables or wire-load tables ).
• Typically between 60 and 70 percent of nets have a FO = 1.

• The distribution for a FO = 1 has a very long tail, stretching to interconnects that run from corner to corner of the chip.
• The distribution for a FO = 1 often has two peaks,
corresponding to a distribution for close neighbors in subgroups
within a block, superimposed on a distribution corresponding to
routing between subgroups.

12
Measurement of Delay in Floorplanning (contd.,)
• We often see a twin-peaked distribution at the chip level also, corresponding to separate distributions for intrablock routing (inside blocks) and interblock routing (between blocks).

• The distributions for FO > 1 are more symmetrical and flatter than for
FO = 1.

• The wire-load tables can only contain one number, for example the
average net capacitance, for any one distribution.
• Many tools take a worst-case approach and use the 80- or 90-percentile point instead of the average. Thus a tool may use a predicted capacitance for which we know 90 percent of the nets will have less than the estimated capacitance.
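
As a rough illustration of how such a wire-load table is used (a minimal sketch; the table values, block sizes, and the simple RC delay model below are invented for illustration, not taken from any real library):

# Minimal sketch of wire-load-table-based delay estimation.
# All table values and the delay model are illustrative assumptions.

# WIRE_LOAD[block_size_gates][fanout] = predicted net capacitance in pF,
# e.g. the 90-percentile value of the measured distribution.
WIRE_LOAD = {
    5_000:  {1: 0.02, 2: 0.03, 3: 0.04, 4: 0.05},
    25_000: {1: 0.05, 2: 0.07, 3: 0.09, 4: 0.11},
}

def predicted_net_cap(block_size, fanout):
    """Look up the predicted net capacitance (pF) for a net in a block."""
    table = WIRE_LOAD[block_size]
    return table.get(fanout, table[max(table)])   # clamp very large fanouts

def predicted_net_delay(drive_resistance_kohm, block_size, fanout,
                        pin_cap_pf=0.01):
    """Crude RC estimate: driver resistance times (wire + pin) load."""
    c_load = predicted_net_cap(block_size, fanout) + fanout * pin_cap_pf
    return drive_resistance_kohm * c_load   # kohm * pF = ns

# Example: a fanout-2 net inside a 25 k-gate block driven by a 2 kohm cell.
print(predicted_net_delay(2.0, 25_000, 2))   # 0.18 ns with these numbers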
Measurement of Delay in Floorplanning (contd.,)
• Repeat the statistical analysis for blocks with different sizes.
  For example, a net with a FO = 1 in a 25 k-gate block will have a different (larger) average length than if the net were in a 5 k-gate block.

• The statistics depend on the shape (aspect ratio) of the block (usually the statistics are only calculated for square blocks).

• The statistics will also depend on the type of netlist.
  For example, the distributions will be different for a netlist generated by setting a constraint for minimum logic delay during synthesis (which tends to generate large numbers of two-input NAND gates) than for netlists generated using minimum-area constraints.

14
15
Floorplanning - Optimization

Optimize Performance

• Chip area.
• Total wire length.
• Critical path delay.
• Routability.
• Others, e.g. noise, heat dissipation.

Cost = αA + βL,
where
A = total area,
L = total wire length,
α and β are constants.
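
A minimal sketch of this weighted cost, under the simplifying assumptions that a floorplan is just a list of placed rectangular blocks and that wire length is estimated from block centers (block data, net list, and weights below are invented for illustration):

# Sketch of the weighted floorplan cost  Cost = alpha*A + beta*L.
# Blocks are (x, y, w, h) rectangles; nets are lists of block indices.

def chip_area(blocks):
    """Area of the bounding box enclosing all placed blocks."""
    x_max = max(x + w for x, y, w, h in blocks)
    y_max = max(y + h for x, y, w, h in blocks)
    return x_max * y_max

def total_wire_length(blocks, nets):
    """Half-perimeter wire length, using block centers as pin positions."""
    total = 0.0
    for net in nets:
        xs = [blocks[i][0] + blocks[i][2] / 2 for i in net]
        ys = [blocks[i][1] + blocks[i][3] / 2 for i in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def floorplan_cost(blocks, nets, alpha=1.0, beta=0.5):
    return alpha * chip_area(blocks) + beta * total_wire_length(blocks, nets)

# Example: two 10x10 blocks side by side connected by one net.
blocks = [(0, 0, 10, 10), (10, 0, 10, 10)]
print(floorplan_cost(blocks, [[0, 1]]))   # 1.0*200 + 0.5*10 = 205.0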

16
Floorplanning Area

• Deadspace
– Minimizing area = minimizing deadspace.

• Wire length estimation
– Exact wire length is not known until after routing.
– Pin positions are not known.
– How to estimate? Center-to-center estimation (see the sketch below).
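
A small sketch of center-to-center estimation (a simplification; the block coordinates below are an invented example): since pin positions are not known yet, the length of a connection is approximated by the Manhattan distance between the centers of the two blocks it joins.

# Center-to-center wire length estimate. Blocks are (x, y, w, h) rectangles;
# pin positions are unknown at this stage, so block centers stand in for pins.

def center(block):
    x, y, w, h = block
    return (x + w / 2, y + h / 2)

def center_to_center_length(block_a, block_b):
    (xa, ya), (xb, yb) = center(block_a), center(block_b)
    return abs(xa - xb) + abs(ya - yb)   # Manhattan distance

# Example: a 4x4 block at the origin and an 8x2 block placed to its right.
print(center_to_center_length((0, 0, 4, 4), (10, 0, 8, 2)))   # 12 + 1 = 13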
Floorplanning Tools
• Flexible blocks (or variable blocks ) :
– Their total area is fixed,
– Their shape (aspect ratio) and connector locations may be adjusted during the placement.

• Fixed blocks:
– The dimensions and connector locations of fixed blocks (perhaps RAM, ROM, compiled cells, or megacells) can only be modified when they are created.

• Seeding:
– Force logic cells to be in selected flexible blocks by seeding. We choose seed cells by name.
– Seeding may be hard or soft.
• A hard seed is fixed and not allowed to move during the remaining floorplanning and placement steps.
• A soft seed is an initial suggestion only and can be altered if necessary by the floorplanner.

• Seed connectors within flexible blocks, forcing certain nets to appear in a specified order, or location, at the boundary of a flexible block.

• Rat's nest: a display of the connections between the blocks.
– Connections are shown as bundles between the centers of blocks or as flight lines between connectors.
171
Aspect Ratio Bounds

• No bounds: NOT GOOD!!
(Figure: Blocks 1–4 drawn with unconstrained, extreme aspect ratios.)

• With bounds: lower bound ≤ height/width ≤ upper bound (see the sketch below)

• Soft Blocks
– Flexible shape
– I/O positions not yet determined

• Hard Blocks
– Fixed shape
– Fixed I/O pin positions
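
As a small illustration of the bound (the area, bounds, and width sweep below are invented values), a soft block of fixed area can only take shapes whose height/width ratio stays inside the bounds:

# For a soft (flexible) block of fixed area, keep only candidate shapes whose
# aspect ratio satisfies  lower <= height/width <= upper.
# Area and bounds are illustrative assumptions.

def feasible_shapes(area, lower=0.5, upper=2.0, steps=9):
    shapes = []
    for i in range(1, steps + 1):
        width = (area ** 0.5) * (0.25 * i)    # sweep widths around sqrt(area)
        height = area / width
        if lower <= height / width <= upper:
            shapes.append((round(width, 2), round(height, 2)))
    return shapes

# Example: a block of area 100 with aspect ratio bounded between 0.5 and 2.
print(feasible_shapes(100.0))   # [(7.5, 13.33), (10.0, 10.0), (12.5, 8.0)]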
Sizing
example*

173
Floorplanning Tools

Floorplanning a cell-based ASIC.


(a) Initial floorplan generated by the floorplanning tool. Two of the blocks are flexible (A and C)
and contain rows of standard cells (unplaced). A pop-up window shows the status of block A.
(b) An estimated placement for flexible blocks A and C. The connector positions are known and a
rat’s nest display shows the heavy congestion below block B.
(c) Moving blocks to improve the floorplan.
(d) The updated display shows the reduced congestion after the changes.

174
Aspect Ratio and Congestion Analysis

(a) The initial floorplan with a 2:1.5 die aspect ratio.
(b) Altering the floorplan to give a 1:1 chip aspect ratio.
Congestion analysis: one measure of congestion is the difference between the number of interconnects that we actually need, called the channel density, and the channel capacity.
(c) A trial floorplan with a congestion map. Blocks A and C have been placed so that we know the terminal positions in the channels. Shading indicates the ratio of channel density to the channel capacity. Dark areas show regions that cannot be routed because the channel congestion exceeds the estimated capacity.
(d) Resizing flexible blocks A and C alleviates congestion.
175
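
A toy sketch of this congestion measure (channel names and numbers are invented): for each routing channel, compare the channel density (interconnects that must pass through it) against the channel capacity (tracks available).

# Channel congestion check: density (nets to be routed through the channel)
# versus capacity (routing tracks available). All values are invented.

channels = {
    "ch_A": {"density": 12, "capacity": 16},
    "ch_B": {"density": 22, "capacity": 18},   # over capacity -> "dark" region
    "ch_C": {"density": 9,  "capacity": 10},
}

for name, ch in channels.items():
    ratio = ch["density"] / ch["capacity"]
    status = "UNROUTABLE" if ratio > 1.0 else "ok"
    print(f"{name}: density/capacity = {ratio:.2f}  {status}")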
Channel Definition

• Channel definition or channel allocation:
– During the floorplanning step, assign the areas between blocks that are to be used for interconnect.

• Routing a T-junction between two channels in two-level metal. The dots represent logic cell pins.
– (a) Routing channel A (the stem of the T) first allows us to adjust the width of channel B.
– (b) If we route channel B first (the top of the T), this fixes the width of channel A.

• Route the stem of a T-junction before routing the top.
176
Channel Routing

• Defining the channel routing order for a slicing floorplan using a slicing tree.
• (a) Make a cut all the way across the chip between circuit blocks. Continue slicing until
each piece contains just one circuit block. Each cut divides a piece into two without cutting
through a circuit block.
• (b) A sequence of cuts: 1, 2, 3, and 4 that successively slices the chip until only circuit
blocks are left.
• (c) The slicing tree corresponding to the sequence of cuts gives the order in which to route
the channels: 4, 3, 2, and finally 1.
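
A minimal sketch of this ordering rule (the tree below is an invented example, not the exact tree in the figure): record the cut numbers while slicing, then route the channels in reverse order, so the channel created by the last cut is routed first.

# Channel routing order from a slicing tree: channels created by later cuts
# are routed first, so the routing order is the reverse of the cutting order.
# The tree below is an invented example. Internal nodes are
# (cut_number, left_subtree, right_subtree); leaves are circuit block names.

slicing_tree = (1,
                (2, "A", (4, "B", "C")),
                (3, "D", "E"))

def cut_numbers(node):
    """Collect all cut numbers appearing in a slicing tree."""
    if isinstance(node, str):          # leaf: a circuit block
        return []
    cut, left, right = node
    return [cut] + cut_numbers(left) + cut_numbers(right)

routing_order = sorted(cut_numbers(slicing_tree), reverse=True)
print(routing_order)   # [4, 3, 2, 1]: route channel 4 first, channel 1 last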
24
Slicing Floorplan and General Floorplan
(Figure: a slicing floorplan of blocks 1–7, the corresponding slicing tree with vertical (v) and horizontal (h) cut nodes, and a non-slicing floorplan for comparison.)

25
Area Utilization
• Area utilization
– Depends on how nicely the rigid modules’ shapes are matched
– Soft modules can take different shapes to “fill in” empty slots
– Floorplan sizing

(Figure: two floorplans of the same modules m1–m7; better shape matching reduces the bounding area, e.g. from 20 x 22 = 440 to a smaller total.)
Slicing Floorplan Sizing
• Bottom-up process
– Has to be done per floorplan perturbation
– Requires O(n) time (n is the total number of shapes of all modules)

• Combining the child shapes (a_i x b_i and x_j x y_j) at a slicing-tree node:
– V node (vertical cut, children L and R): width = a_i + x_j, height = max(b_i, y_j)
– H node (horizontal cut, children T and B): width = max(a_i, x_j), height = b_i + y_j
180
Slicing Floorplan Sizing
• Simple case: all modules are hard macros
– No rotation allowed, one shape only
(Figure: the slicing floorplan of hard modules m1–m7 and its slicing tree, sized bottom-up; e.g. subtrees of size 9x15 and 8x16 combine at the root vertical cut into an overall chip size of 17x16.)
Slicing Floorplan Sizing
General case: all modules are soft macros
❖ Stockmeyer’s work (1983) for optimal module orientation
❖ Non-slicing = NP-complete
❖ Slicing = polynomial-time solvable with dynamic programming
Phase 1: bottom-up
❖ Input: floorplan tree, module shapes
❖ Start with sorted shape lists of the modules
❖ Perform Vertical_Node_Sizing & Horizontal_Node_Sizing
❖ When we get to the root node, we have a list of shapes. Select the one that is best in terms of area.
Phase 2: top-down
❖ Traverse the floorplan tree and set module locations

182
Sizing Example
Module A shapes: a1 = 4x6, a2 = 5x5, a3 = 6x4
Module B shapes: b1 = 2x7, b2 = 3x4, b3 = 4x2

Combining A and B at one cut node (first dimensions add, second dimensions take the max):

           a1 (4x6)   a2 (5x5)   a3 (6x4)
b1 (2x7)   6x7        7x7        8x7
b2 (3x4)   7x6        8x5        9x4
b3 (4x2)   8x6        9x5        10x4
183
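
A small sketch of the node-sizing step implied by this table (first dimensions add, second dimensions take the max, i.e. Vertical_Node_Sizing in a width x height convention); the pruning of dominated shapes at the end is the usual refinement and is my addition, not shown on the slide:

# Node sizing for a slicing tree: combine the shape lists of two children.
# Here widths add and heights take the max (a vertical cut).
# The shape lists are the a1-a3 / b1-b3 values from the example above.

def vertical_node_sizing(left_shapes, right_shapes):
    """Return combined (width, height) shapes for a vertical cut node."""
    combined = {(wl + wr, max(hl, hr))
                for wl, hl in left_shapes
                for wr, hr in right_shapes}
    # Prune dominated shapes (both wider and taller than another candidate).
    pruned = []
    for w, h in sorted(combined):
        if not pruned or h < pruned[-1][1]:
            pruned.append((w, h))
    return pruned

A = [(4, 6), (5, 5), (6, 4)]   # a1, a2, a3
B = [(2, 7), (3, 4), (4, 2)]   # b1, b2, b3
print(vertical_node_sizing(A, B))
# [(6, 7), (7, 6), (8, 5), (9, 4)] once dominated shapes are dropped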
Cyclic Constraints

• Cyclic constraints.
• (a) A nonslicing floorplan with a cyclic constraint that prevents channel routing.
(b) In this case it is difficult to find a slicing floorplan without increasing the chip
area.
• (c) This floorplan may be sliced (with initial cuts 1 or 2) and has no cyclic
constraints, but it is inefficient in area use and will be very difficult to route.

31
Cyclic Constraints


•(a) We can eliminate the cyclic constraint by merging the blocks A and
C.
•(b) A slicing structure.

32
I/O and Power Planning (contd.,)
• Every chip communicates with the outside world.

• Signals flow onto and off the chip, and we need to supply power.

• We need to consider the I/O and power constraints early in the floorplanning process.

• A silicon chip or die (plural die, dies, or dice) is mounted on a chip carrier inside a chip package. Connections are made by bonding the chip pads to fingers on a metal lead frame that is part of the package.

• The metal lead-frame fingers connect to the package pins. A die consists of a logic core inside a pad ring.

• On a pad-limited die we use tall, thin pad-limited pads, which maximize the number of pads we can fit around the outside of the chip.

• On a core-limited die we use short, wide core-limited pads.
34
I/O and Power Planning

• FIGURE 16.12 Pad-limited and core-limited die. (a) A pad-limited die. The
number of pads determines the die size. (b) A core-limited die: The core logic
determines the die size. (c) Using both pad-limited pads and core-limited pads for a
square die.

35
I/O and Power Planning (contd.,)
• Special power pads are used for:
1. the positive supply, or VDD, power buses (or power rails), and
2. the ground or negative supply, VSS or GND.

– One set of VDD/VSS pads supplies power to the I/O pads only.
– Another set of VDD/VSS pads connects to a second power ring that supplies the logic core.

• I/O power is "dirty" power:
– It has to supply large transient currents to the output transistors.
– Keep dirty power separate to avoid injecting noise into the internal-logic power (the clean power).

• I/O pads also contain special circuits to protect against electrostatic discharge
( ESD ).
– These circuits can withstand very short high-voltage (several kilovolt) pulses that can be
generated during human or machine handling.

36
I/O and Power Planning (contd.,)
• If we make an electrical connection between the substrate and a chip pad, or to a
package pin, it must be to VDD ( n -type substrate) or VSS ( p -type substrate). This
substrate connection (for the whole chip) employs a down bond (or drop bond) to the
carrier. We have several options:

We can dedicate one (or more) chip pad(s) to down bond to the chip carrier.

We can make a connection from a chip pad to the lead frame and down bond
from the chip pad to the chip carrier.

We can make a connection from a chip pad to the lead frame and down bond from
the lead frame.

We can down bond from the lead frame without using a chip pad.

We can leave the substrate and/or chip carrier unconnected.

• Depending on the package design, the type and positioning of down bonds may be fixed. This means we need to fix the position of the chip pad for down bonding using a pad seed.

37
I/O and Power Planning (contd.,)
• A double bond connects two pads to one chip-carrier finger and one package
pin. We can do this to save package pins or reduce the series inductance of
bond wires (typically a few nanohenries) by parallel connection of the pads.

• To reduce the series resistive and inductive impedance of power supply networks, it is normal to use multiple VDD and VSS pads.

• This is particularly important with the simultaneously switching outputs (SSOs) that occur when driving buses.

– The output pads can easily consume most of the power on a CMOS ASIC, because the load on
a pad (usually tens of picofarads) is much larger than typical on-chip capacitive loads.

– Depending on the technology it may be necessary to provide dedicated VDD and VSS pads
for every few SSOs. Design rules set how many SSOs can be used per VDD/VSS pad pair. These
dedicated VDD/VSS pads must “follow” groups of output pads as they are seeded or planned
on the floorplan.
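
As a toy illustration of such a design rule (the "4 SSOs per pad pair" figure below is invented, not a real vendor rule), the number of dedicated VDD/VSS pad pairs simply scales with the SSO count:

# Toy calculation: dedicated VDD/VSS pad pairs needed for a group of
# simultaneously switching outputs (SSOs). The rule of 4 SSOs per pad pair
# is an invented example; real limits come from the vendor's design rules.
import math

def power_pad_pairs(num_ssos, ssos_per_pair=4):
    return math.ceil(num_ssos / ssos_per_pair)

# Example: a 32-bit output bus whose outputs switch simultaneously.
print(power_pad_pairs(32))   # 8 VDD/VSS pad pairs under this rule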

38
I/O and Power Planning (contd.,)
• Using a pad mapping, we translate the logical pad in a netlist to a physical
pad from a pad library. We might control pad seeding and mapping in the
floorplanner.

• There are several nonobvious factors that must be considered when generating a pad ring:

• Design library pad cells for one orientation.
– For example, an edge pad for the south side of the chip, and a corner pad for the southeast corner.
– Generate other orientations by rotation and flipping (mirroring).
– Some ASIC vendors will not allow rotation or mirroring of logic cells in the mask file. To avoid these problems we may need to have separate horizontal, vertical, left-handed, and right-handed pad cells in the library with appropriate logical-to-physical pad mappings.

• Mixing of pad-limited and core-limited edge pads in the same pad ring complicates the design of corner pads.
– In this case a corner pad also becomes a pad-format changer, or hybrid corner pad.

• In single-supply chips we have one VDD net and one VSS net, both global power nets. It is also possible to use mixed power supplies (for example, 3.3 V and 5 V) or multiple power supplies (digital VDD, analog VDD).
39
I/O and Power Planning (contd.,)

• FIGURE 16.13 Bonding pads. (a) This chip uses both pad-limited and core-limited pads. (b) A hybrid corner pad. (c) A chip with stagger-bonded pads. (d) An area-bump bonded chip (or flip-chip). The chip is turned upside down and solder bumps connect the pads to the lead frame.
193
I/O and Power Planning (contd.,)
• stagger-bond arrangement using two rows of I/O pads.
– In this case the design rules for bond wires (the spacing and the angle at which the
bond wires leave the pads) become very important.

• Area-bump bonding arrangement (also known as flip-chip, or solder-bump) is used, for example, with ball-grid array (BGA) packages.

– Even though the bonding pads are located in the center of the chip, the I/O circuits
are still often located at the edges of the chip because of difficulties in power
supply distribution and integrating I/O circuits together with logic in the center of
the die.

• In an MGA, the pad spacing and I/O-cell spacing is fixed: each pad occupies a fixed pad slot (or pad site). This means that the properties of the pad I/O are also fixed but, if we need to, we can parallel adjacent output cells to increase the drive. To increase flexibility further the I/O cells can use a separation, the I/O-cell pitch, that is smaller than the pad pitch.
194
I/O and Power Planning (contd.,)

• FIGURE 16.14 Gate-array I/O pads. (a) Cell-based ASICs may contain pad cells of different sizes and widths. (b) A corner of a gate-array base. (c) A gate-array base with different I/O-cell and pad pitches.
195
I/O and Power Planning (contd.,)

• The long direction of a rectangular channel is the channel spine .

• Some automatic routers may require that metal lines parallel to a channel
spine use a preferred layer (either m1, m2, or m3). Alternatively we say that
a particular metal layer runs in a preferred direction .

196
I/O and Power Planning (contd.,)

• FIGURE 16.15 Power distribution.
• (a) Power distributed using m1 for VSS and m2 for VDD.
– This helps minimize the number of vias and layer crossings needed
– but causes problems in the routing channels.

• (b) In this floorplan m1 is run parallel to the longest side of all channels, the channel spine.
– This can make automatic routing easier
– but may increase the number of vias and layer crossings.

• (c) An expanded view of part of a channel (interconnect is shown as lines). If power runs on different layers along the spine of a channel, this forces signals to change layers.

• (d) A closeup of VDD and VSS buses as they cross. Changing layers
requires a large number of via contacts to reduce resistance.
45
Clock Planning
• Clock spine routing scheme: all clock pins are driven directly from the clock driver. MGAs and FPGAs often use this fishbone type of clock distribution scheme.
• Clock skew and clock latency.

• FIGURE 16.16 Clock distribution.
•(a) A clock spine for a gate array.
• (b) A clock spine for a cell-based ASIC
(typical chips have thousands of clock
nets).
• (c) A clock spine is usually driven
from one or more clock-driver cells.
Delay in the driver cell is a function of
the number of stages and the ratio of
output to input capacitance for each
stage (taper).
• (d) Clock latency and clock skew. We
would like to minimize both latency and
skew.

46
Clock Planning (cont.,)
• FIGURE 16.17 A clock tree. (a) Minimum delay is achieved when the taper of
successive stages is about 3. (b) Using a fanout of three at successive nodes.
(c) A clock tree for the cell-based ASIC of Figure 16.16 b. We have to balance
the clock arrival times at all of the leaf nodes to minimize clock skew.
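
A rough sketch of the taper idea (the capacitance values are invented; this is the standard buffer-chain sizing argument, not a calculation from the slides): with a fixed taper per stage, the number of clock-driver stages grows with the logarithm of the total load ratio.

# Toy clock-driver (buffer chain) sizing: each stage drives `taper` times the
# capacitance of the previous stage. Capacitance values are invented.
import math

def driver_stages(c_load_pf, c_in_pf, taper=3.0):
    """Stages needed so that c_in * taper**n covers c_load."""
    return max(1, math.ceil(math.log(c_load_pf / c_in_pf, taper)))

# Example: driving 200 pF of clock spine load from a 0.05 pF gate input,
# with a taper of about 3 per stage (near the minimum-delay taper).
print(driver_stages(200.0, 0.05))   # 8 stages with these numbers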

47
