02.EECE 345 Computer Architecture ISA Design

The document discusses the principles and trade-offs of computer architecture, emphasizing the importance of adapting designs to meet evolving marketplace demands and technological advancements. It covers key concepts such as the Von Neumann model, dataflow model, and the distinctions between Instruction Set Architecture (ISA) and microarchitecture. The document also highlights the significance of design considerations in achieving optimal performance, cost, and energy efficiency in computing systems.


EECE345 Computer Architecture
Lecture II

Spring 2025 (Jan 2025)
Vinod Pangracious
School of Engineering
American University in Dubai
Chapter II: Computer Architecture

ISA Principles, Design and Tradeoffs

3
Computer Architecture

• Computer architecture is concerned with how best to exploit fabrication technology to meet marketplace demands.
• e.g., how best might we use five billion transistors and a power budget of two watts to design the chip at the heart of a mobile phone?
• Computer architecture builds on a few simple concepts, but is challenging because we must constantly seek new solutions.
• What constitutes the “best” design changes over time and with the use case; it involves considering many different trade-offs.

4
Computer Architecture

[Levels of design, top to bottom: Markets; Applications; Operating Systems; Programming languages and compilers; Architecture; Microarchitecture; Hardware; Fabrication Technology.]

• Each level of design imposes different requirements and constraints, which change over time.
• History and economics: there is commercial pressure to evolve in a way that minimizes disruption and possible costs to the ecosystem (e.g., software).
• There is also a need to look forward and not design for yesterday’s technology and workloads!
• Design decisions should be carefully justified through experimentation.

5
Ex: The Smartphone
[Block diagram of a smartphone SoC: Cortex-A cores serve as the applications processor; Cortex-M cores run the camera, sensor hub, touchscreen and sensor-hub controllers, power management, flash controller, GPS, and Bluetooth; Cortex-R cores handle the 2G/3G/4G/5G modem and Wi-Fi.]

6
What is A Computer?

• Three key components

• Computation
• Communication
• Storage (memory)

7
What is A Computer?
• We will cover all three components

[Diagram: Processing (control/sequencing and datapath) connected to Memory (program and data) and I/O.]
8
The Von Neumann
Model/Architecture
• Also called stored program computer (instructions in memory). Two key properties:
• Stored program
• Instructions stored in a linear memory array
• Memory is unified between instructions and data
• When is a value interpreted as an instruction? The interpretation of a stored value depends on the control signals

• Sequential instruction processing
• One instruction processed (fetched, executed, and completed) at a time
• Program counter (instruction pointer) identifies the current instruction
• Program counter is advanced sequentially except for control transfer instructions

9
The Von Neumann
Model/Architecture
• Recommended reading
• Burks, Goldstine, von Neumann, “Preliminary discussion of the logical design of an electronic computing instrument,” 1946.
• Patt and Patel book, Chapter 4, “The von Neumann Model”

• Stored program

• Sequential instruction processing

10
The Von Neumann Model (of a Computer)

[Diagram: MEMORY (memory address register, memory data register); PROCESSING UNIT (ALU, temporary registers); CONTROL UNIT (instruction pointer, instruction register); INPUT; OUTPUT.]
11
The Von Neumann Model (of a
Computer)

• Q: Is this the only way that a computer can operate?

• A: No.
• Qualified Answer: But, it has been the dominant way
• i.e., the dominant paradigm for computing
• for N decades

12
The Dataflow Model (of a Computer)
• Von Neumann model: An instruction is fetched and executed in control flow order
• As specified by the instruction pointer
• Sequential unless explicit control flow instruction

• Dataflow model: An instruction is fetched and executed in data flow order


• i.e., when its operands are ready
• i.e., there is no instruction pointer
• Instruction ordering specified by data flow dependence
• Each instruction specifies “who” should receive the result
• An instruction can “fire” whenever all operands are received
• Potentially many instructions can execute at the same time
• Inherently more parallel
13
Von Neumann vs Dataflow

 Consider a Von Neumann program
 What is the significance of the program order?
 What is the significance of the storage locations?

v <= a + b;
w <= b * 2;
x <= v - w;
y <= v + w;
z <= x * y;

[Figure: the same program drawn as a dataflow graph — inputs a and b feed “+” and “*2” nodes producing v and w; v and w feed “-” and “+” nodes producing x and y; x and y feed a final “*” node producing z.]

 Which model is more natural to you as a programmer? (A small simulation of the dataflow version follows below.)
14
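As a rough illustration of “firing when operands are ready,” here is a minimal Python sketch that executes the five-statement program above in dataflow order. The node table, token environment, and scheduling loop are illustrative inventions, not any real dataflow ISA.

# Minimal dataflow interpreter: a node fires when all of its inputs
# have tokens; there is no instruction pointer.
nodes = [
    ("v", lambda e: e["a"] + e["b"], ("a", "b")),
    ("w", lambda e: e["b"] * 2,      ("b",)),
    ("x", lambda e: e["v"] - e["w"], ("v", "w")),
    ("y", lambda e: e["v"] + e["w"], ("v", "w")),
    ("z", lambda e: e["x"] * e["y"], ("x", "y")),
]

env = {"a": 3, "b": 4}                      # initial input tokens
pending = list(nodes)
while pending:
    # every node whose operands have all arrived may fire,
    # conceptually in parallel
    ready = [n for n in pending if all(i in env for i in n[2])]
    for out, op, _ in ready:
        env[out] = op(env)                  # produce the output token
    pending = [n for n in pending if n not in ready]

print(env["z"])                             # (7 - 8) * (7 + 8) = -15

Note that v and w fire together in the first step, and x and y together in the second: the parallelism falls out of operand availability rather than program order.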
More on Data Flow

• In a data flow machine, a program consists of data flow nodes
• A data flow node fires (is fetched and executed) when all its inputs are ready
• i.e., when all inputs have tokens

• Data flow node and its ISA representation

15
Data Flow Nodes

16
An Example Data Flow Program

[Figure: an example dataflow program drawn as a graph; the final node delivers its result token to OUT.]

17
ISA-level Tradeoff: Instruction Pointer

 Do we need an instruction pointer in the ISA?
 Yes: Control-driven, sequential execution
 An instruction is executed when the IP points to it
 IP automatically changes sequentially (except for control flow instructions)
 No: Data-driven, parallel execution
 An instruction is executed when all its operand values are available (data flow)

 Tradeoffs: MANY high-level ones
 Ease of programming (for average programmers)?
 Ease of compilation?
 Performance: extraction of parallelism?
 Hardware complexity
18
ISA vs. Microarchitecture Level
Tradeoff
• A similar tradeoff (control vs. data-driven execution) can be made at the
microarchitecture level

• ISA: Specifies how the programmer sees instructions to be executed


• Programmer sees a sequential, control-flow execution order vs.
• Programmer sees a data-flow execution order

• Microarchitecture: How the underlying implementation actually executes instructions


• Microarchitecture can execute instructions in any order as long as it obeys the semantics
specified by the ISA when making the instruction results visible to software
• Programmer should see the order specified by the ISA

19
Let’s Get Back to the Von Neumann Model

• But, if you want to learn more about dataflow…

• Dennis and Misunas, “A preliminary architecture for a basic data-flow


processor,” ISCA 1974.
• Gurd et al., “The Manchester prototype dataflow computer,” CACM 1985.

20
The Von-Neumann Model
• All major instruction set architectures today use this model
• x86, ARM, MIPS, SPARC, Alpha, POWER

• Underneath (at the microarchitecture level), the execution model of almost all
implementations (or, microarchitectures) is very different
• Pipelined instruction execution: Intel 80486 uarch
• Multiple instructions at a time: Intel Pentium uarch
• Out-of-order execution: Intel Pentium Pro uarch
• Separate instruction and data caches

• But, what happens underneath that is not consistent with the von Neumann model is not
exposed to software
• Difference between ISA and microarchitecture

21
What is Computer Architecture?

• ISA+implementation definition: The science and art of designing, selecting, and interconnecting hardware components and designing the hardware/software interface to create a computing system that meets functional, performance, energy consumption, cost, and other specific goals.

• Traditional (ISA-only) definition: “The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logic design, and the physical implementation.” — Gene Amdahl, IBM Journal of R&D, April 1964
22
Agenda for Today

• Deep dive into ISA and its tradeoffs

23
Last Lecture Recap
• Levels of Transformation
• Algorithm, ISA, Microarchitecture
• Moore’s Law
• What is Computer Architecture
• Why Study Computer Architecture
• Fundamental Concepts
• Von Neumann Model
• Dataflow Model
• ISA vs. Microarchitecture
• Digital system Design (ALU Architecture)
• Assignments: 1, 2 and 3
24
Review: ISA vs. Microarchitecture
[Levels of transformation: Problem, Algorithm, Program, ISA, Microarchitecture, Circuits, Electrons.]

• ISA
• Agreed-upon interface between software and hardware
• SW/compiler assumes, HW promises
• What the software writer needs to know to write and debug system/user programs
• Microarchitecture
• Specific implementation of an ISA
• Not visible to the software
• Microprocessor
• ISA, uarch, circuits
• “Architecture” = ISA + microarchitecture
25
Review: ISA
• Instructions
• Opcodes, Addressing Modes, Data Types
• Instruction Types and Formats
• Registers, Condition Codes

• Memory
• Address space, Addressability, Alignment
• Virtual memory management

• Call, Interrupt/Exception Handling


• Access Control, Priority/Privilege
• I/O: memory-mapped vs. instr.
• Task/thread Management
• Power and Thermal Management
• Multi-threading support, Multiprocessor support
26
Microarchitecture

• Implementation of the ISA under specific design constraints and goals


• Anything done in hardware without exposure to software
• Pipelining
• In-order versus out-of-order instruction execution
• Memory access scheduling policy
• Speculative execution
• Superscalar processing (multiple instruction issue?)
• Clock gating
• Caching? Levels, size, associativity, replacement policy
• Prefetching?
• Voltage/frequency scaling?
• Error correction?

27
Property of ISA vs. Uarch?

• ADD instruction’s opcode


• Number of general purpose registers
• Number of ports to the register file
• Number of cycles to execute the MUL instruction
• Whether or not the machine employs pipelined instruction execution

• Remember
• Microarchitecture: Implementation of the ISA under specific design constraints and goals

28
Design Point
[Levels of transformation: Problem, Algorithm, Program, ISA, Microarchitecture, Circuits, Electrons.]

• A set of design considerations and their importance
• leads to tradeoffs in both ISA and uarch
• Considerations
• Cost
• Performance
• Maximum power consumption
• Energy consumption (battery life)
• Availability
• Reliability and correctness
• Time to market

• Design point determined by the “Problem” space (application space), the intended users/market

29
Application Space

• Dream, and they will appear…

30
Tradeoffs: Soul of Computer
Architecture

• ISA-level tradeoffs

• Microarchitecture-level tradeoffs

• System and Task-level tradeoffs


• How to divide the labor between hardware and software

• Computer architecture is the science and art of making the appropriate trade-offs to meet a design point
• Why art?

31
Why Is It (Somewhat) Art?

[Levels of transformation: Problem, Algorithm, Program/Language, Runtime System (VM, OS, MM), ISA, Microarchitecture, Logic, Circuits, Electrons — with the User alongside.]

 New demands from the top (look up)
 New demands and personalities of users (look up)
 New issues and capabilities at the bottom (look down)

 We do not (fully) know the future (applications, users, market)

32
Why Is It (Somewhat) Art?

[Same levels of transformation as above.]

 Changing demands at the top (look up and forward)
 Changing demands and personalities of users (look up and forward)
 Changing issues and capabilities at the bottom (look down and forward)

 And, the future is not constant (it changes)!

33
How Can We Adapt to the Future

• This is part of the task of a good computer architect

• Many options (bag of tricks)


• Keen insight and good design
• Good use of fundamentals and principles
• Efficient design
• Heterogeneity
• Reconfigurability
• …
• Good use of the underlying technology
• …
34
Many Different ISAs Over Decades
• x86
• PDP-x: Programmed Data Processor (PDP-11)
• VAX
• IBM 360
• CDC 6600
• SIMD ISAs: CRAY-1, Connection Machine
• VLIW ISAs: Multiflow, Cydrome, IA-64 (EPIC)
• PowerPC, POWER
• RISC ISAs: Alpha, MIPS, SPARC, ARM

• What are the fundamental differences?
• E.g., how instructions are specified and what they do
• E.g., how complex the instructions are

35
Instruction
• Basic element of the HW/SW interface
• Consists of
• opcode: what the instruction does
• operands: who it is to do it to

• Example from the Alpha ISA:

36
MIPS

R-type: opcode = 0 (6-bit) | rs (5-bit) | rt (5-bit) | rd (5-bit) | shamt (5-bit) | funct (6-bit)

I-type: opcode (6-bit) | rs (5-bit) | rt (5-bit) | immediate (16-bit)

J-type: opcode (6-bit) | immediate (26-bit)

37
ARM

38
Set of Instructions, Encoding, and Spec

• Example from LC-3b ISA
• x86 Manual

• Why unused instructions?

• Aside: concept of “bit steering”
• A bit in the instruction determines the interpretation of other bits

39
40
Bit Steering in Alpha

41
What Are the Elements of An ISA?
• Instruction sequencing model
• Control flow vs. data flow
• Tradeoffs?

• Instruction processing style
• Specifies the number of “operands” an instruction “operates” on and how it does so
• 0, 1, 2, 3 address machines
• 0-address: stack machine (op, push A, pop A)
• 1-address: accumulator machine (op ACC, ld A, st A)
• 2-address: 2-operand machine (op S,D; one operand is both source and destination)
• 3-address: 3-operand machine (op S1,S2,D; sources and destination separate)
• Tradeoffs?
• Larger operate instructions vs. more executed operations
• Code size vs. execution time vs. on-chip memory space

42
An Example: Stack Machine
+ Small instruction size (no operands needed for operate instructions)
• Simpler logic
• Compact code

+ Efficient procedure calls: all parameters on stack
• No additional cycles for parameter passing

-- Computations that are not easily expressible with “postfix notation” are difficult to map to stack machines
• Cannot perform operations on many values at the same time (only the top N values on the stack are accessible at once)
• Not flexible

(A small stack machine sketch follows below.)
43
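These points can be made concrete with a tiny interpreter. The following Python sketch (instruction names and memory layout are illustrative, not any real stack ISA) evaluates z = (v - w) * (v + w); note that the operate instructions carry no operand specifiers at all.

# 0-address (stack) machine: operate instructions name no operands.
# Postfix program for z = (v - w) * (v + w): v w - v w + *
def run(program, memory):
    stack = []
    for op, *arg in program:
        if op == "push":                  # push M[addr] onto the stack
            stack.append(memory[arg[0]])
        elif op == "pop":                 # pop top of stack into M[addr]
            memory[arg[0]] = stack.pop()
        else:                             # operate on the top two entries
            b, a = stack.pop(), stack.pop()
            stack.append({"add": a + b, "sub": a - b, "mul": a * b}[op])

mem = {"v": 7, "w": 8, "z": None}
run([("push", "v"), ("push", "w"), ("sub",),
     ("push", "v"), ("push", "w"), ("add",),
     ("mul",), ("pop", "z")], mem)
print(mem["z"])                           # (7 - 8) * (7 + 8) = -15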
An Example: Stack Machine (II)

Koopman, “Stack Computers: The New Wave,” 1989.
http://www.ece.cmu.edu/~koopman/stack_computers/sec3_2.html

44
An Example: Stack Machine Operation

Koopman, “Stack Computers: The New Wave,” 1989.
http://www.ece.cmu.edu/~koopman/stack_computers/sec3_2.html

45
Other Examples
• PDP-11: a 2-address machine
• PDP-11 ADD: 4-bit opcode, two 6-bit operand specifiers
• Why? Limited bits to specify an instruction
• Disadvantage: one source operand is always clobbered with the result of the instruction
• How do you ensure you preserve the old value of the source?

• x86: a 2-address (memory/memory) machine
• Alpha: a 3-address (load/store) machine
• MIPS?
• ARM?
46
What Are the Elements of An ISA?

• Instructions
• Opcode
• Operand specifiers (addressing modes)
• How to obtain the operand? (Why are there different addressing modes?)

• Data types
• Definition: representation of information for which there are instructions that operate on the representation
• Integer, floating point, character, binary, decimal, BCD
• Doubly linked list, queue, string, bit vector, stack
• VAX: INSQUE and REMQUE instructions on a doubly linked list or queue (a sketch follows below); FINDFIRST
• Digital Equipment Corp., “VAX11 780 Architecture Handbook,” 1977.
• x86: SCAS opcode operates on character strings; PUSH/POP

47
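To see why such instructions shrink code, here is a rough Python sketch of what a single VAX-style INSQUE does: insert an entry into a doubly linked list right after a predecessor. On a load/store RISC ISA the same work is several loads, stores, and address computations. The dictionary-based node layout is illustrative, not VAX's actual queue format.

# One "instruction's" worth of work on a complex ISA: insert 'entry'
# into a circular doubly linked list immediately after 'pred'.
def insque(entry, pred):
    entry["next"] = pred["next"]
    entry["prev"] = pred
    pred["next"]["prev"] = entry
    pred["next"] = entry

head = {"val": None}
head["next"] = head["prev"] = head        # empty circular queue header
insque({"val": 1}, head)                  # queue now holds one element
print(head["next"]["val"])                # 1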
Data Type Tradeoffs
• What is the benefit of having more or high-level data types in the ISA?
• What is the disadvantage?

• Think compiler/programmer vs. micro-architect

• Concept of semantic gap


• Data types coupled tightly to the semantic level, or complexity of instructions

• Example: Early RISC architectures vs. Intel 432


• Early RISC: Only integer data type
• Intel 432: Object data type, capability based machine
48
What Are the Elements of An ISA?
• Memory organization
• Address space: How many uniquely identifiable locations in memory
• Addressability: How much data does each uniquely identifiable location store
• Byte addressable: most ISAs, characters are 8 bits
• Bit addressable: Burroughs 1700. Why?
• 64-bit addressable: Some supercomputers. Why?
• 32-bit addressable: First Alpha
• Food for thought
• How do you add two 32-bit numbers with only byte addressability?
• How do you add two 8-bit numbers with only 32-bit addressability?
• Big endian vs. little endian? Is the MSB at the low or the high byte address? (See the sketch below.)

• Support for virtual memory


49
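A quick way to see the endianness distinction (a Python sketch; the 32-bit value is arbitrary):

# The same 32-bit value laid out in memory in the two byte orders.
x = 0x11223344
print(x.to_bytes(4, "big").hex(" "))      # 11 22 33 44 (MSB at the lowest address)
print(x.to_bytes(4, "little").hex(" "))   # 44 33 22 11 (LSB at the lowest address)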
Some Historical Readings

• If you want to dig deeper

• Wilner, “Design of the Burroughs 1700,” AFIPS 1972.

• Levy, “The Intel iAPX 432,” 1981.


• http://www.cs.washington.edu/homes/levy/capabook/Chapter9.pdf

50
What Are the Elements of An ISA?

• Registers
• How many
• Size of each register

• Why is having registers a good idea?


• Because programs exhibit a characteristic called data locality
• A recently produced/accessed value is likely to be used more than once (temporal locality)
• Storing that value in a register eliminates the need to go to memory each time that value is needed

51
Programmer Visible (Architectural) State

[Diagram: Memory — an array of storage locations M[0] … M[N-1], indexed by an address.]

Registers
- given special names in the ISA (as opposed to addresses)
- general vs. special purpose

Program Counter
- memory address of the current instruction

Instructions (and programs) specify how to transform the values of programmer-visible state.
52
Aside: Programmer Invisible State

• Microarchitectural state
• Programmer cannot access this directly

• E.g. cache state


• E.g. pipeline registers

53
Evolution of Register Architecture
• Accumulator
• a legacy from the “adding” machine days

• Accumulator + address registers


• need register indirection
• initially address registers were special-purpose, i.e., could only be loaded with an address for indirection
• eventually arithmetic on addresses became supported

• General purpose registers (GPR)
• all registers good for all purposes
• grew from a few registers, to 32 (common for RISC), to 128 in Intel IA-64

54
Instruction Classes

• Operate instructions
• Process data: arithmetic and logical operations
• Fetch operands, compute result, store result
• Implicit sequential control flow

• Data movement instructions


• Move data between memory, registers, I/O devices
• Implicit sequential control flow

• Control flow instructions


• Change the sequence of instructions that are executed
55
What Are the Elements of An ISA?
• Load/store vs. memory/memory architectures

• Load/store architecture: operate instructions operate only on registers


• E.g., MIPS, ARM and many RISC ISAs

• Memory/memory architecture: operate instructions can operate on


memory locations
• E.g., x86, VAX and many CISC ISAs

56
What Are the Elements of An ISA?
• Addressing modes specify how to obtain the operands
• Absolute: LW rt, 10000
  use immediate value as address
• Register indirect: LW rt, (rbase)
  use GPR[rbase] as address
• Displaced or based: LW rt, offset(rbase)
  use offset + GPR[rbase] as address
• Indexed: LW rt, (rbase, rindex)
  use GPR[rbase] + GPR[rindex] as address
• Memory indirect: LW rt, ((rbase))
  use value at M[GPR[rbase]] as address
• Auto increment/decrement: LW rt, (rbase)
  use GPR[rbase] as address, but increment or decrement GPR[rbase] each time
  (effective-address sketches follow below)

57
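Here is a sketch of the effective-address computation for each mode above, in Python. The register contents, offset, and memory values are made-up examples.

# Effective-address (EA) computation for the LW examples above.
GPR = {"rbase": 0x1000, "rindex": 0x20}
M = {0x1000: 0x2000}                      # memory word read by the indirect mode

ea_absolute  = 10000                      # LW rt, 10000
ea_reg_ind   = GPR["rbase"]               # LW rt, (rbase)
ea_displaced = 8 + GPR["rbase"]           # LW rt, 8(rbase), offset = 8
ea_indexed   = GPR["rbase"] + GPR["rindex"]  # LW rt, (rbase, rindex)
ea_mem_ind   = M[GPR["rbase"]]            # LW rt, ((rbase))
ea_autoinc   = GPR["rbase"]               # LW rt, (rbase), then:
GPR["rbase"] += 4                         # post-increment by the access size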
What Are the Benefits of Different
Addressing Modes?

• Another example of programmer vs. micro-architect tradeoff


• Advantage of more addressing modes:
• Enables better mapping of high-level constructs to the machine: some accesses are better expressed with a different mode  reduced number of instructions and code size
• Think array accesses (autoincrement mode)
• Think indirection (pointer chasing)
• Sparse matrix accesses
• Disadvantage:
• More work for the compiler
• More work for the microarchitect
58
ISA Orthogonality

• Orthogonal ISA:
• All addressing modes can be used with all instruction types
• Example: VAX
• (~13 addressing modes) x (>300 opcodes) x (integer and FP formats)

• Who is this good for?


• Who is this bad for?

59
Is the LC-3b ISA Orthogonal?

Orthogonal Architecture

60
LC-3b: Addressing Modes of ADD

61
LC-3b: Addressing Modes of JSR(R)

62
What Are the Elements of An ISA?
• How to interface with I/O devices
• Memory mapped I/O
• A region of memory is mapped to I/O devices
• I/O operations are loads and stores to those locations

• Special I/O instructions


• IN and OUT instructions in x86 deal with ports of the chip

• Tradeoffs?
• Which one is more general purpose?

63
What Are the Elements of An ISA?
• Privilege modes
• User vs supervisor
• Who can execute what instructions?

• Exception and interrupt handling


• What procedure is followed when something goes wrong with an instruction?
• What procedure is followed when an external device requests the processor?
• Vectored vs. non-vectored interrupts (early MIPS, ARM)

• Virtual memory
• Each program has the illusion of the entire memory space, which is greater than
physical memory

64
Another Question or Two

• Does the LC-3b ISA contain complex instructions?

• How complex can an instruction be?

65
Complex vs. Simple Instructions
• Complex instruction: An instruction does a lot of work, e.g. many
operations
• Insert in a doubly linked list
• Compute FFT
• String copy

• Simple instruction: An instruction does a small amount of work; it is a primitive using which complex operations can be built
• Add
• XOR
• Multiply
66
Complex vs. Simple Instructions
• Advantages of complex instructions
+ Denser encoding  smaller code size  better memory utilization, saves off-chip bandwidth, better cache hit rate (better packing of instructions)
+ Simpler compiler: no need to optimize small instructions as much

• Disadvantages of complex instructions
- Larger chunks of work  compiler has less opportunity to optimize (limited in the fine-grained optimizations it can do)
- More complex hardware  translation from a high level to control signals and optimization needs to be done by hardware

67
ISA-level Tradeoffs: Semantic Gap
• Where to place the ISA? Semantic gap
• Closer to high-level language (HLL)  Small semantic gap, complex
instructions
• Closer to hardware control signals?  Large semantic gap, simple instructions

• RISC vs. CISC machines


• RISC: Reduced instruction set computer
• CISC: Complex instruction set computer
• FFT, QUICKSORT, POLY, FP instructions?
• VAX INDEX instruction (array access with bounds checking)
• INDEX takes six operands
• CHK2 in Motorola 68000
68
ISA-level Tradeoffs: Semantic Gap
• Some tradeoffs (for you to think about)

• Simple compiler, complex hardware vs. complex compiler, simple hardware
• Caveat: Translation (indirection) can change the tradeoff!

• Burden of backward compatibility

• Performance? Energy Consumption?


• Optimization opportunity: example of the VAX INDEX instruction: who (compiler vs. hardware) puts more effort into optimization?
• Instruction size, code size
69
X86: Small Semantic Gap: String Operations

• An instruction operates on a string


• Move one string of arbitrary length to another location
• Compare two strings (part of Assignment 1)

• Enabled by the ability to specify repeated execution of an instruction (in the ISA)
• Using a “prefix” called REP prefix

• Example: REP MOVS instruction


• Only two bytes: REP prefix byte and MOVS opcode byte (F3 A4)
• Implicit source and destination registers pointing to the two strings (ESI, EDI)
• Implicit count register (ECX) specifies how long the string is
70
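The semantics of REP MOVS can be sketched in a few lines of Python. This models the byte variant (MOVSB), ignores the direction flag, and uses a flat bytearray as memory; all of that is a simplification of the real x86 behavior.

# REP MOVSB semantics: copy ECX bytes from [ESI] to [EDI],
# advancing both pointers and decrementing the count to zero.
def rep_movsb(mem, esi, edi, ecx):
    while ecx > 0:
        mem[edi] = mem[esi]               # one MOVSB step
        esi += 1
        edi += 1
        ecx -= 1

mem = bytearray(b"hello world" + bytes(16))
rep_movsb(mem, esi=0, edi=12, ecx=11)     # copy "hello world" to offset 12
print(mem[12:23])                         # bytearray(b'hello world')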
X86: Small Semantic Gap: String Operations
REP MOVS (DEST, SRC)

How many instructions does this take in ARM7? (Assignment 4)

71


Small Semantic Gap Examples in VAX

• FIND FIRST
• Find the first set bit in a bit field
• Helps OS resource allocation operations
• SAVE CONTEXT, LOAD CONTEXT
• Special context switching instructions
• INSQUE, REMQUE
• Operations on a doubly linked list
• INDEX
• Array access with bounds checking
• STRING operations
• Compare strings, find substrings, …
• Cyclic redundancy check instruction
• EDITPC
• Implements editing functions to display fixed-format output

• Digital Equipment Corp., “VAX11 780 Architecture Handbook,” 1977-78.

72


Small versus Large Semantic Gap
• CISC vs. RISC
• Complex instruction set computer  complex instructions
• Initially motivated by “not good enough” code generation
• Reduced instruction set computer  simple instructions
• John Cocke, mid 1970s, IBM 801
• Goal: enable better compiler control and optimization

• RISC motivated by
• Memory stalls (no work done in a complex instruction when there is a memory stall?)
• When is this correct?
• Simplifying the hardware  lower cost, higher frequency
• Enabling the compiler to optimize the code better
• Find fine-grained parallelism to reduce stalls
73
An Aside
• An Historical Perspective on RISC Development at IBM
• http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/risc/

74
How High or Low Can You Go?

• Very large semantic gap
• Each instruction specifies the complete set of control signals in the machine
• Compiler generates control signals
• Open microcode (John Cocke, circa 1970s)
• Gave way to optimizing compilers

• Very small semantic gap
• ISA is (almost) the same as a high-level language
• Java machines, LISP machines, object-oriented machines, capability-based machines
75
A Note on ISA Evolution

• ISAs have evolved to reflect/satisfy the concerns of the day

• Examples:
• Limited on-chip and off-chip memory size
• Limited compiler optimization technology
• Limited memory bandwidth
• Need for specialization in important applications (e.g., MMX)

• Use of translation (in HW and SW) enabled underlying implementations to be similar, regardless of the ISA
• Concept of dynamic/static interface: translation/interpretation
• Contrast it with hardware/software interface
76
Effect of Translation

• One can translate from one ISA to another ISA to change the semantic gap tradeoffs
• ISA (virtual ISA)  Implementation ISA

• Examples
• Intel’s and AMD’s x86 implementations translate x86 instructions into programmer-invisible microoperations (simple instructions) in hardware
• Transmeta’s x86 implementations translated x86 instructions into “secret” VLIW instructions in software (code morphing software)

• Think about the tradeoffs

77


Hardware-Based Translation

Klaiber, “The Technology Behind Crusoe Processors,” Transmeta White Paper 2000.
78
Software-Based Translation

Klaiber, “The Technology Behind Crusoe Processors,” Transmeta White Paper 2000.
79
ISA-level Tradeoffs: Instruction Length

• Fixed length: all instructions have the same length
+ Easier to decode a single instruction in hardware
+ Easier to decode multiple instructions concurrently
-- Wasted bits in instructions (Why is this bad?)
-- Harder-to-extend ISA (how to add new instructions?)

• Variable length: instruction lengths differ (determined by opcode and sub-opcode)
+ Compact encoding (Why is this good?)
  Intel 432: Huffman encoding (sort of); 6- to 321-bit instructions. How?
-- More logic to decode a single instruction
-- Harder to decode multiple instructions concurrently

• Tradeoffs
• Code size (memory space, bandwidth, latency) vs. hardware complexity
• ISA extensibility and expressiveness vs. hardware complexity
• Performance? Smaller code vs. ease of decode

80
ISA-level Tradeoffs: Uniform Decode
• Uniform decode: the same bits in each instruction correspond to the same meaning
• Opcode is always in the same location
• Ditto operand specifiers, immediate values, …
• Many “RISC” ISAs: Alpha, MIPS, SPARC
+ Easier decode, simpler hardware
+ Enables parallelism: generate target address before knowing the instruction is a branch
-- Restricts instruction format (fewer instructions?) or wastes space

• Non-uniform decode
• E.g., opcode can be the 1st-7th byte in x86
+ More compact and powerful instruction format
-- More complex decode logic
81
x86 vs. Alpha Instruction Formats

• x86:

• Alpha:

82
MIPS Instruction Format
• R-type, 3 register operands
  opcode = 0 (6-bit) | rs (5-bit) | rt (5-bit) | rd (5-bit) | shamt (5-bit) | funct (6-bit)

• I-type, 2 register operands and a 16-bit immediate operand
  opcode (6-bit) | rs (5-bit) | rt (5-bit) | immediate (16-bit)

• J-type, 26-bit immediate operand
  opcode (6-bit) | immediate (26-bit)

• Simple decoding
• 4 bytes per instruction, regardless of format
• must be 4-byte aligned (the 2 LSBs of the PC must be 2'b00); an encoding sketch follows below

83
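As a sketch, the R-type fields above can be packed into a 32-bit word with shifts and ORs. ADD's opcode (0) and funct value (0x20) are MIPS facts; the register numbers are just an example.

# Pack MIPS R-type fields into one 32-bit instruction word.
def encode_rtype(rs, rt, rd, shamt, funct, opcode=0):
    return (opcode << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add $8, $9, $10  (rd=8, rs=9, rt=10; ADD is opcode 0, funct 0x20)
word = encode_rtype(rs=9, rt=10, rd=8, shamt=0, funct=0x20)
print(hex(word))                          # 0x12a4020 (i.e., 0x012A4020)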
ARM

84
A Note on Length and Uniformity

• Uniform decode usually goes with fixed length

• In a variable-length ISA, uniform decode can be a property of instructions of the same length
• It is hard to think of it as a property of instructions of different lengths

85
A Note on RISC vs. CISC

• Usually, …

• RISC
• Simple instructions
• Fixed length
• Uniform decode
• Few addressing modes

• CISC
• Complex instructions
• Variable length
• Non-uniform decode
• Many addressing modes

86
ISA-level Tradeoffs: Number of Registers

• Affects:
• Number of bits used for encoding register address
• Number of values kept in fast storage (register file)
• (uarch) Size, access time, power consumption of register file

• Large number of registers:
+ Enables better register allocation (and optimizations) by the compiler  fewer saves/restores
-- Larger instruction size
-- Larger register file size

87
ISA-level Tradeoffs: Addressing Modes
• Addressing mode specifies how to obtain an operand of an instruction
• Register
• Immediate
• Memory (displacement, register indirect, indexed, absolute, memory indirect,
autoincrement, autodecrement, …)

• More modes:
+ help better support programming constructs (arrays, pointer-based accesses)
-- make it harder for the architect to design
-- too many choices for the compiler?
• Many ways to do the same thing complicates compiler design
• Wulf, “Compilers and Computer Architecture,” IEEE Computer 1981

88
x86 vs. Alpha Instruction Formats
• x86:

• Alpha:

89
x86
[Figure: x86 addressing forms — register, register indirect, memory absolute, register + displacement, and SIB (scale-index-base) + displacement.]

90
x86
[Figure: x86 indexed (base + index) and scaled (base + index*4) addressing forms.]

91
X86 SIB-D Addressing Mode

x86 Manual Vol. 1, page 3-22 -- see course resources on website


Also, see Section 3.7.3 and 3.7.5
92
X86 Manual: Suggested Uses of Addressing
Modes

Static address

Dynamic storage

Arrays

Records

x86 Manual Vol. 1, page 3-22 -- see course resources on website


Also, see Section 3.7.3 and 3.7.5
93
X86 Manual: Suggested Uses of
Addressing Modes

Static arrays w/ fixed-size elements

2D arrays

2D arrays

x86 Manual Vol. 1, page 3-22 -- see course resources on website


Also, see Section 3.7.3 and 3.7.5
94
Other Example ISA-level Tradeoffs

• Condition codes vs. not


• VLIW vs. single instruction
• Precise vs. imprecise exceptions
• Virtual memory vs. not
• Unaligned access vs. not
• Hardware interlocks vs. software-guaranteed interlocking
• Software vs. hardware managed page fault handling
• Cache coherence (hardware vs. software)
• …
95
Back to Programmer vs. (Micro)architect
• Many ISA features designed to aid programmers
• But, complicate the hardware designer’s job

• Virtual memory
• vs. overlay programming
• Should the programmer be concerned about the size of code blocks fitting physical memory?
• Addressing modes
• Unaligned memory access
• Compiler/programmer needs to align data

96
MIPS: Aligned Access
MSB byte-3 byte-2 byte-1 byte-0 LSB
    byte-7 byte-6 byte-5 byte-4

• LW/SW alignment restriction: 4-byte word alignment
• not designed to fetch memory bytes not within a word boundary
• not designed to rotate unaligned bytes into registers
• Provide separate opcodes for the “infrequent” case
• e.g., with rd initially holding A B C D:

LWL rd, 6(r0)  rd = byte-6 byte-5 byte-4 D
LWR rd, 3(r0)  rd = byte-6 byte-5 byte-4 byte-3

• LWL/LWR is slower
• Note LWL and LWR still fetch within a word boundary (a sketch follows below)
97
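The effect of serving an unaligned load with only aligned fetches can be sketched in Python: two aligned word reads, each staying within its own word boundary, are combined to produce the unaligned 4-byte value. This is a simplification; real LWL/LWR merge into the destination register and depend on endianness.

# Load 4 bytes at an unaligned address using only 4-byte-aligned fetches.
mem = bytearray(range(16))                # memory bytes 0x00 .. 0x0f

def load_word_aligned(addr):              # the only fetch the hardware allows
    assert addr % 4 == 0
    return mem[addr:addr + 4]

def load_unaligned(addr):                 # e.g. addr = 3 spans words 0 and 4
    lo = load_word_aligned(addr & ~3)     # aligned word containing the start
    hi = load_word_aligned((addr & ~3) + 4)  # the next aligned word
    return (lo + hi)[addr & 3:(addr & 3) + 4]

print(load_unaligned(3).hex(" "))         # 03 04 05 06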
X86: Unaligned Access
• LD/ST instructions automatically handle data that spans a “word” boundary
• The programmer/compiler does not need to worry about where data is stored (whether or not it is in a word-aligned location)

98
X86: Unaligned Access

99
What About ARM?

• https://www.scss.tcd.ie/~waldroj/3d1/arm_arm.pdf
• Section A2.8

100
Historical Performance Gains
• By 1985, it was possible to integrate a complete microprocessor onto
a single die or “chip.”
• As fabrication technology improved, and transistors got smaller, the
performance of a single core improved quickly.
• Performance improved at the rate of 52% per year for nearly 20 years
(measured using SPEC benchmark data).
• Note: the data are for desktop/server processors

101
Historical Performance Gains
[Figure: clock frequency (MHz) and clock period vs. year, Stanford CPU DB.]

• Clock frequency improved quickly between 1985 and 2002:
• ~10x from faster transistors, and
• ~10x from pipelining and circuit-level advances.
• So overall, ~100x of the total 800x gains came from reduced clock periods.

A. Danowitz, K. Kelley, J. Mao, J. P. Stevenson, and M. Horowitz. Clock Frequency, Stanford CPU DB. Accessed on Nov. 5, 2019. [Online]. Available: http://cpudb.stanford.edu/visualize/clock_frequency

102
Historical Performance Gains
• From 1985 to 2002, performance improved by ~800 times.
• Over time, technology scaling provided much greater numbers of faster
and lower power transistors.
• The “iron law” of processor performance:

  Time = instructions executed × clocks per instruction (CPI) × clock period

• Clocks per instruction (CPI)
• We will also refer to Instructions Per Cycle (IPC), i.e., 1/CPI.
103
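A worked instance of the iron law (all numbers are illustrative):

# time = instructions executed x CPI x clock period
insts = 2_000_000_000                     # instructions executed
cpi = 1.5                                 # clocks per instruction
period = 1 / 2.5e9                        # 2.5 GHz clock -> 0.4 ns period
print(insts * cpi * period, "seconds")    # 1.2 seconds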
Clocks Per Instruction (CPI)

• Early machines were limited by transistor count. As a result, they often required multiple clock cycles to execute each instruction (CPI >> 1).
• As transistor budgets improved, we could aim to get closer to a CPI of 1.
• This is easy if we don’t care at all about clock frequency.
• Designing a high-frequency design with a good CPI is much harder. We need to keep our high-performance processor busy and avoid it stalling, which would increase our CPI. This requires many different techniques and costs transistors (area) and power.

104
Clocks Per Instruction (CPI)
• Eventually, the industry was also able to fetch and execute multiple
instructions per clock cycle. This reduced CPI to below 1.
• When we fetch and execute multiple instructions together, we often
refer to Instructions Per Cycle (IPC), which is 1/CPI.
• For instructions to be executed at the same time, they must be
independent.
• Again, growing transistor budgets were exploited to help find and exploit
this Instruction-Level Parallelism (ILP).

105
What Is Pipelining?

[Figure: car manufacturing with three stations — chassis, engine, paint — and cars A, B, C. Left: each car passes through all stations before the next begins. Right: the stations are overlapped in pipeline fashion, so car B enters the chassis station while car A has its engine fitted. Order of manufacturing: car A, B, and then C.]
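The car analogy in numbers (a sketch assuming perfect overlap and equal stage delays):

# n jobs through s equal pipeline stages, one stage delay per step:
# unpipelined time = n * s; pipelined time = s + (n - 1).
n, s = 100, 3                             # 100 cars, 3 stations
print(n * s)                              # 300 steps built one at a time
print(s + (n - 1))                        # 102 steps pipelined (~2.9x speedup)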
IPC and Instruction Count

• Of the 800x improvement in performance (1985-2002), ~100x is from clock frequency improvements.
• The remaining gains (~8x) were from a reduction in instruction count, better compiler optimizations, and improvements in IPC.

[Figure: SpecInt2000 performance per MHz for Intel processors plotted against time, 1984-2004.]
A Shorter Critical Path

• We can also try to reduce the number of gates on our critical path.
• This can be done by inserting additional registers to break complex logic into different “pipeline” stages.
• Advances were also made that improved circuit-level design techniques.
• The length of our critical paths reduced by ~10x (1985-2002).

[Figure: critical path length (FO4 delays) vs. year, Stanford CPU DB.]

A. Danowitz, K. Kelley, J. Mao, J. P. Stevenson, and M. Horowitz. Stanford CPU DB. Accessed on Nov. 5, 2019. [Online]. Available: http://cpudb.stanford.edu
Thank you!

109
