Dr. Chao Tan, Carnegie Mellon University: Computer Organization & Computer Architecture

This document provides an overview of basic computer organization and design. It describes the key components of the basic computer model, including the processor, memory, instruction codes, registers, and addressing modes. The basic computer has a processor and 4096 words of memory. Instructions are 16 bits long and contain an opcode and an address. Direct addressing gives the operand's address directly, while indirect addressing gives the address of a memory location that holds the operand's address.


1

Dr. Chao Tan,
Carnegie Mellon University
Computer Organization Computer Architecture
2

Chap. 1: Digital Logic Circuits

• Logic Gates, • Boolean Algebra


• Map Simplification, • Combinational Circuits
• Flip-Flops, • Sequential Circuits

Chap. 2: Digital Components

• Integrated Circuits, • Decoders, • Multiplexers


• Registers, • Shift Registers, • Binary Counters
• Memory Unit

Chap. 3: Data Representation

• Data Types, • Complements


• Fixed Point Representation
• Floating Point Representation
• Other Binary Codes, • Error Detection Codes

Computer Organization Computer Architecture


3

Chap. 4: Register Transfer and Microoperations

• Register Transfer Language, • Register Transfer


• Bus and Memory Transfers
• Arithmetic Microoperations
• Logic Microoperations, • Shift Microoperations
• Arithmetic Logic Shift Unit

Chap. 5: Basic Computer Organization and Design

• Instruction Codes, • Computer Registers


• Computer Instructions, • Timing and Control
• Instruction Cycle,
• Memory Reference Instructions
• Input-Output and Interrupt
• Complete Computer Description
• Design of Basic Computer
• Design of Accumulator Logic
Computer Organization Computer Architecture
4

Chap. 6: Programming the Basic Computer

• Machine Language, • Assembly Language


• Assembler, • Program Loops
• Programming Arithmetic and Logic Operations
• Subroutines, • Input-Output Programming

Chap. 7: Microprogrammed Control

• Control Memory, • Sequencing Microinstructions


• Microprogram Example, • Design of Control Unit
• Microinstruction Format

Chap. 8: Central Processing Unit

• General Register Organization


• Stack Organization, • Instruction Formats
• Addressing Modes
• Data Transfer and Manipulation
• Program Control
• Reduced Instruction Set Computer
Computer Organization Computer Architecture
5

Chap. 9: Pipeline and Vector Processing

• Parallel Processing, • Pipelining


• Arithmetic Pipeline, • Instruction Pipeline
• RISC Pipeline, • Vector Processing

Chap. 10: Computer Arithmetic

• Arithmetic with Signed-2's Complement Numbers


• Multiplication and Division Algorithms
• Floating-Point Arithmetic Operations
• Decimal Arithmetic Unit
• Decimal Arithmetic Operations

Chap. 11: Input-Output Organization

• Peripheral Devices, • Input-Output Interface


• Asynchronous Data Transfer, • Modes of Transfer
• Priority Interrupt, • Direct Memory Access

Computer Organization Computer Architecture


6

Chap. 12: Memory Organization

• Memory Hierarchy, • Main Memory


• Auxiliary Memory, • Associative Memory
• Cache Memory, • Virtual Memory

Chap. 13: Multiprocessors (Δ)

• Characteristics of Multiprocessors
• Interconnection Structures
• Interprocessor Arbitration
• Interprocessor Communication/Synchronization
• Cache Coherence

Computer Organization Computer Architecture


Basic Computer Organization & Design 7

BASIC COMPUTER ORGANIZATION AND DESIGN


• Instruction Codes

• Computer Registers

• Computer Instructions

• Timing and Control

• Instruction Cycle

• Memory Reference Instructions

• Input-Output and Interrupt

• Complete Computer Description

• Design of Basic Computer

• Design of Accumulator Logic

Computer Organization Computer Architecture


Basic Computer Organization & Design 8

INTRODUCTION
• Every different processor type has its own design (different
registers, buses, microoperations, machine instructions, etc.)
• A modern processor is a very complex device
• It contains
– Many registers
– Multiple arithmetic units, for both integer and floating point calculations
– The ability to pipeline several consecutive instructions to speed execution
– Etc.
• However, to understand how processors work, we will start with
a simplified processor model
• This is similar to what real processors were like ~25 years ago
• M. Morris Mano introduces a simple processor model he calls
the Basic Computer
• We will use this to introduce processor organization and the
relationship of the register transfer level (RTL) model to the
higher-level view of the processor

Computer Organization Computer Architecture


Basic Computer Organization & Design 9

THE BASIC COMPUTER

• The Basic Computer has two components, a processor and


memory
• The memory has 4096 words in it
– 4096 = 2^12, so it takes 12 bits to select a word in memory
• Each word is 16 bits long

[Figure: the CPU connected to a RAM of 4096 sixteen-bit words; addresses run from 0 to 4095 and each word holds bits 15-0]

Computer Organization Computer Architecture


Basic Computer Organization & Design 10 Instruction codes

INSTRUCTIONS
• Program
– A sequence of (machine) instructions
• (Machine) Instruction
– A group of bits that tells the computer to perform a specific operation
(a sequence of micro-operations)
• The instructions of a program, along with any needed data
are stored in memory
• The CPU reads the next instruction from memory
• It is placed in an Instruction Register (IR)
• Control circuitry in control unit then translates the
instruction into the sequence of microoperations
necessary to implement it

Computer Organization Computer Architecture


Basic Computer Organization & Design 11 Instruction codes

INSTRUCTION FORMAT
• A computer instruction is often divided into two parts
– An opcode (Operation Code) that specifies the operation for that
instruction
– An address that specifies the registers and/or locations in memory to
use for that operation
• In the Basic Computer, since the memory contains 4096 (= 2^12)
words, we need 12 bits to specify which memory address this
instruction will use
• In the Basic Computer, bit 15 of the instruction specifies
the addressing mode (0: direct addressing, 1: indirect
addressing)
• Since the memory words, and hence the instructions, are
16 bits long, that leaves 3 bits for the instruction’s opcode

Instruction Format
bit 15: I (addressing mode)   bits 14-12: Opcode   bits 11-0: Address
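Added illustration (not from the original slides): a minimal Python sketch showing how the three fields of a 16-bit instruction word can be extracted with shifts and masks.

    def decode(word):
        """Split a 16-bit Basic Computer instruction word into its fields."""
        i = (word >> 15) & 0x1        # bit 15: addressing mode (0 direct, 1 indirect)
        opcode = (word >> 12) & 0x7   # bits 14-12: operation code
        address = word & 0x0FFF       # bits 11-0: memory address
        return i, opcode, address

    print(decode(0x1234))   # (0, 1, 564): direct ADD using address 0x234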

Computer Organization Computer Architecture


Basic Computer Organization & Design 12 Instruction codes

ADDRESSING MODES
• The address field of an instruction can represent either
– Direct address: the address in memory of the data to use (the address of the
operand), or
– Indirect address: the address in memory of the address in memory of the data to
use
[Figure: direct vs. indirect addressing. Direct: the instruction "0 ADD 457" at location 22 names location 457, which holds the operand added to AC. Indirect: the instruction "1 ADD 300" at location 35 names location 300, which holds 1350; location 1350 holds the operand added to AC.]

• Effective Address (EA)


– The address, that can be directly used without modification to access an
operand for a computation-type instruction, or as the target address for a
branch-type instruction
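A small Python sketch (an added illustration; the memory dictionary below is a made-up example) of how the effective address is obtained in the two modes:

    def effective_address(i_bit, address, memory):
        """Direct (I = 0): the address field is the EA.
        Indirect (I = 1): the word at the address field holds the EA."""
        return address if i_bit == 0 else memory[address]

    memory = {300: 1350}
    print(effective_address(0, 457, memory))   # direct   -> 457
    print(effective_address(1, 300, memory))   # indirect -> 1350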

Computer Organization Computer Architecture


Basic Computer Organization & Design 13 Instruction codes

PROCESSOR REGISTERS
• A processor has many registers to hold instructions,
addresses, data, etc
• The processor has a register, the Program Counter (PC) that
holds the memory address of the next instruction to get
– Since the memory in the Basic Computer only has 4096 locations, the PC
only needs 12 bits
• In direct or indirect addressing, the processor needs to keep
track of what location in memory it is addressing: the
Address Register (AR) is used for this
– The AR is a 12 bit register in the Basic Computer
• When an operand is found, using either direct or indirect
addressing, it is placed in the Data Register (DR). The
processor then uses this value as data for its operation
• The Basic Computer has a single general purpose register –
the Accumulator (AC)

Computer Organization Computer Architecture


Basic Computer Organization & Design 14 Instruction codes

PROCESSOR REGISTERS
• The significance of a general purpose register is that it can be
referred to in instructions
– e.g. load AC with the contents of a specific memory location; store the
contents of AC into a specified memory location
• Often a processor will need a scratch register to store
intermediate results or other temporary data; in the Basic
Computer this is the Temporary Register (TR)
• The Basic Computer uses a very simple model of input/output
(I/O) operations
– Input devices are considered to send 8 bits of character data to the processor
– The processor can send 8 bits of character data to output devices
• The Input Register (INPR) holds an 8 bit character gotten from an
input device
• The Output Register (OUTR) holds an 8 bit character to be sent
to an output device

Computer Organization Computer Architecture


Basic Computer Organization & Design 15 Registers

BASIC COMPUTER REGISTERS
Registers in the Basic Computer
[Figure: registers in the Basic Computer and their widths — PC(12), AR(12), IR(16), TR(16), DR(16), AC(16), INPR(8), OUTR(8) — alongside the 4096 x 16 memory]

List of BC Registers
DR 16 Data Register Holds memory operand
AR 12 Address Register Holds address for memory
AC 16 Accumulator Processor register
IR 16 Instruction Register Holds instruction code
PC 12 Program Counter Holds address of instruction
TR 16 Temporary Register Holds temporary data
INPR 8 Input Register Holds input character
OUTR 8 Output Register Holds output character
Computer Organization Computer Architecture
Basic Computer Organization & Design 16 Registers

COMMON BUS SYSTEM

• The registers in the Basic Computer are connected using a


bus
• This gives a savings in circuitry over complete connections
between registers

Computer Organization Computer Architecture


Basic Computer Organization & Design 17 Registers

COMMON BUS SYSTEM

[Figure: the 16-bit common bus. Selection inputs S2 S1 S0 choose which source drives the bus: memory (7), AR (1), PC (2), DR (3), AC (4), IR (5), or TR (6). AR, PC, DR, AC, and TR have LD, INR, and CLR control inputs; IR and OUTR have LD only; the ALU (with carry flip-flop E) and INPR feed AC; memory has Read/Write controls and takes its address from AR; all registers share a common clock.]

Computer Organization Computer Architecture


Basic Computer Organization & Design 18 Registers

COMMON BUS SYSTEM

[Figure: alternative drawing of the same 16-bit common bus connecting memory (select 7), AR (1), PC (2), DR (3), AC (4), IR (5), TR (6), OUTR, INPR, and the ALU with E flip-flop; the bus source is chosen by S2 S1 S0.]

Computer Organization Computer Architecture


Basic Computer Organization & Design 19 Registers

COMMON BUS SYSTEM
• Three control lines, S2, S1, and S0 control which register the
bus selects as its input
S2 S1 S0   Register
0 0 0 x
0 0 1 AR
0 1 0 PC
0 1 1 DR
1 0 0 AC
1 0 1 IR
1 1 0 TR
1 1 1 Memory

• Either one of the registers will have its load signal


activated, or the memory will have its read signal activated
– Will determine where the data from the bus gets loaded
• The 12-bit registers, AR and PC, have 0’s loaded onto the
bus in the high order 4 bit positions
• When the 8-bit register OUTR is loaded from the bus, the
data comes from the low order 8 bits on the bus
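Added illustration (not part of the slides; the register names and dictionary layout are assumptions made for the example): a short Python sketch of the bus-source selection, including the zero-filled high bits for the 12-bit registers.

    BUS_SOURCES = {1: "AR", 2: "PC", 3: "DR", 4: "AC", 5: "IR", 6: "TR", 7: "MEM"}

    def drive_bus(s, registers, memory, ar):
        """Return the 16-bit value the common bus carries for select code S2S1S0 = s."""
        if s == 0:
            return 0                        # no source selected
        if BUS_SOURCES[s] == "MEM":
            return memory[ar] & 0xFFFF      # memory word addressed by AR
        value = registers[BUS_SOURCES[s]]
        if BUS_SOURCES[s] in ("AR", "PC"):  # 12-bit registers: upper 4 bus bits are 0
            value &= 0x0FFF
        return value & 0xFFFF

    print(drive_bus(2, {"PC": 0x0FFF}, {}, 0))   # 4095: PC placed on the bus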

Computer Organization Computer Architecture


20

• Example: one-address instructions for X = (A + B) - (C + D)

    LOAD C     AC ← M[C]
    ADD D      AC ← AC + M[D]
    STORE T    M[T] ← AC
    LOAD A     AC ← M[A]
    ADD B      AC ← AC + M[B]
    SUB T      AC ← AC - M[T]
    STORE X    M[X] ← AC

Computer Organization Computer Architecture


Basic Computer Organization & Design 21 Instructions

BASIC COMPUTER INSTRUCTIONS
• Basic Computer Instruction Format

Memory-Reference Instructions (OP-code = 000 ~ 110)


15 14 12 11 0
I Opcode Address

Register-Reference Instructions (OP-code = 111, I = 0)


15 12 11 0
0 1 1 1 Register operation

Input-Output Instructions (OP-code =111, I = 1)


15 12 11 0
1 1 1 1 I/O operation

Computer Organization Computer Architecture




Basic Computer Organization & Design 24 Instructions

BASIC COMPUTER INSTRUCTIONS

Symbol   Hex Code (I=0 / I=1)   Description
AND 0xxx 8xxx AND memory word to AC
ADD 1xxx 9xxx Add memory word to AC
LDA 2xxx Axxx Load AC from memory
STA 3xxx Bxxx Store content of AC into
memory
BUN 4xxx Cxxx Branch unconditionally
BSA 5xxx Dxxx Branch and save return address
ISZ 6xxx Exxx Increment and skip if zero

CLA 7800 Clear AC


CLE 7400 Clear E
CMA 7200 Complement AC
CME 7100 Complement E
CIR 7080 Circulate right AC and E
CIL 7040 Circulate left AC and E
INC 7020 Increment AC
SPA 7010 Skip next instr. if AC is positive
SNA 7008 Skip next instr. if AC is negative
SZA 7004 Skip next instr. if AC is zero
SZE 7002 Skip next instr. if E is zero
HLT 7001 Halt computer

INP F800 Input character to AC


OUT F400 Output character from AC
SKI F200 Skip on input flag
SKO F100 Skip on output flag
ION F080 Interrupt on
IOF F040 Interrupt off
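The hex codes in the table follow directly from the instruction format; the short Python sketch below (added for illustration) shows why a direct LDA appears as 2xxx and an indirect LDA as Axxx:

    def encode_mri(opcode, address, indirect=False):
        """Assemble a 16-bit memory-reference instruction word."""
        word = (opcode << 12) | (address & 0x0FFF)
        if indirect:
            word |= 0x8000                 # set bit 15 (I)
        return word

    print(hex(encode_mri(0b010, 0x234)))                 # 0x2234 (LDA, direct)
    print(hex(encode_mri(0b010, 0x234, indirect=True)))  # 0xa234 (LDA, indirect)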
Computer Organization Computer Architecture
Basic Computer Organization & Design 25 Instructions

INSTRUCTION SET COMPLETENESS
A computer should have a set of instructions so that the user can
construct machine language programs to evaluate any function
that is known to be computable.

• Instruction Types
Functional Instructions
- Arithmetic, logic, and shift instructions
- ADD, CMA, INC, CIR, CIL, AND, CLA
Transfer Instructions
- Data transfers between the main memory
and the processor registers
- LDA, STA
Control Instructions
- Program sequencing and control
- BUN, BSA, ISZ
Input/Output Instructions
- Input and output
- INP, OUT

Computer Organization Computer Architecture


Basic Computer Organization & Design 26 Instruction codes

CONTROL UNIT
• Control unit (CU) of a processor translates from machine
instructions to the control signals for the microoperations
that implement them

• Control units are implemented in one of two ways


• Hardwired Control
– CU is made up of sequential and combinational circuits to generate the
control signals
• Microprogrammed Control
– A control memory on the processor contains microprograms that
activate the necessary control signals

• We will consider a hardwired implementation of the control


unit for the Basic Computer

Computer Organization Computer Architecture


Basic Computer Organization & Design 27 Timing and control

TIMING AND CONTROL
Control unit of Basic Computer

[Figure: control unit of the Basic Computer. IR bits 14-12 drive a 3x8 opcode decoder producing D0-D7; IR bit 15 supplies I; IR bits 11-0 are further inputs to the combinational control logic, which generates the control signals. A 4-bit sequence counter (SC), with increment (INR), clear (CLR), and clock inputs, drives a 4x16 decoder producing timing signals T0-T15.]
Computer Organization Computer Architecture




Basic Computer Organization & Design 29 Timing and control

TIMING SIGNALS
- Generated by the 4-bit sequence counter (SC) and the 4x16 decoder
- The SC can be incremented or cleared

- Example: T0, T1, T2, T3, T4, T0, T1, . . .
  Assume: at time T4, SC is cleared to 0 if decoder output D3 is active.
  D3T4: SC ← 0
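Added illustration (not from the slides): a few lines of Python mimicking how the sequence counter produces the timing signals and how clearing SC restarts the sequence.

    def timing_signals(clear_at=None, cycles=8):
        """Yield the active timing signal each clock; clearing SC restarts at T0."""
        sc = 0
        for _ in range(cycles):
            yield "T%d" % sc
            sc = 0 if sc == clear_at else (sc + 1) % 16

    print(list(timing_signals(clear_at=4)))   # ['T0', 'T1', 'T2', 'T3', 'T4', 'T0', 'T1', 'T2']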

Computer Organization Computer Architecture


Basic Computer Organization & Design 30

INSTRUCTION CYCLE

• In Basic Computer, a machine instruction is executed in the


following cycle:
1. Fetch an instruction from memory
2. Decode the instruction
3. Read the effective address from memory if the instruction has an
indirect address
4. Execute the instruction

• After an instruction is executed, the cycle starts again at


step 1, for the next instruction

• Note: Every different processor has its own (different)


instruction cycle

Computer Organization Computer Architecture


Basic Computer Organization & Design 31 Instruction Cycle

FETCH and DECODE
• Fetch and Decode
  T0: AR ← PC                              (S2S1S0 = 010, T0 = 1)
  T1: IR ← M[AR], PC ← PC + 1              (S2S1S0 = 111, T1 = 1)
  T2: D0, ..., D7 ← Decode IR(12-14), AR ← IR(0-11), I ← IR(15)
[Figure: the fetch phase on the common bus — at T0 the bus selects PC (S2S1S0 = 010) and AR is loaded; at T1 the bus selects memory (read, S2S1S0 = 111), IR is loaded, and PC is incremented.]
Computer Organization Computer Architecture


Basic Computer Organization & Design 32 Instruction Cycle

DETERMINE THE TYPE OF INSTRUCTION

[Flowchart: Start (SC ← 0) → T0: AR ← PC → T1: IR ← M[AR], PC ← PC + 1 → T2: decode opcode in IR(12-14), AR ← IR(0-11), I ← IR(15). If D7 = 1 (register or I/O): at T3 execute a register-reference instruction (I = 0) or an input-output instruction (I = 1), then SC ← 0. If D7 = 0 (memory-reference): at T3 either read the indirect address, AR ← M[AR], when I = 1, or do nothing when I = 0; execute the memory-reference instruction starting at T4, then SC ← 0.]
D'7IT3: AR ← M[AR]
D'7I'T3: Nothing
D7I'T3: Execute a register-reference instr.
D7IT3: Execute an input-output instr.
Computer Organization Computer Architecture
Basic Computer Organization & Design 33 Instruction Cycle

REGISTER REFERENCE INSTRUCTIONS
Register Reference Instructions are identified when
- D7 = 1, I = 0
- Register Ref. Instr. is specified in b0 ~ b11 of IR
- Execution starts with timing signal T3

r = D7 I′T3 => Register Reference Instruction


Bi = IR(i) , i=0,1,2,...,11
r: SC ← 0
CLA rB11: AC ← 0
CLE rB10: E←0
CMA rB9: AC ← AC’
CME rB8: E ← E’
CIR rB7: AC ← shr AC, AC(15) ← E, E ← AC(0)
CIL rB6: AC ← shl AC, AC(0) ← E, E ← AC(15)
INC rB5: AC ← AC + 1
SPA rB4: if (AC(15) = 0) then (PC ← PC+1)
SNA rB3: if (AC(15) = 1) then (PC ← PC+1)
SZA rB2: if (AC = 0) then (PC ← PC+1)
SZE rB1: if (E = 0) then (PC ← PC+1)
HLT rB0: S ← 0 (S is a start-stop flip-flop)
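A minimal Python sketch (added illustration only; it covers just a subset of the table above) of how the Bi bits of IR drive these microoperations on AC and E:

    MASK16 = 0xFFFF

    def register_reference(ir, ac, e):
        """Execute CLA, CLE, CMA, and CIR from bits B11-B7 of IR (subset shown)."""
        if ir & (1 << 11): ac = 0                                               # CLA
        if ir & (1 << 10): e = 0                                                # CLE
        if ir & (1 << 9):  ac = ~ac & MASK16                                    # CMA
        if ir & (1 << 7):  ac, e = ((ac >> 1) | (e << 15)) & MASK16, ac & 1     # CIR
        return ac, e

    print(register_reference(0x7200, 0x00FF, 0))   # CMA -> (65280, 0), i.e. AC = 0xFF00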

Computer Organization Computer Architecture


34

REGISTER REFERENCE INSTRUCTIONS
• Bits B11 ... B0 of IR select the operation; examples:
• 0111 1000 0000 0000 (7800) - B11 set: CLA
• 0111 0100 0000 0000 (7400) - B10 set: CLE
• 0111 0010 0000 0000 (7200) - B9 set: CMA

Computer Organization Computer Architecture


Basic Computer Organization & Design 35 MR Instructions

MEMORY REFERENCE INSTRUCTIONS

Symbol   Operation Decoder   Symbolic Description
AND D0 AC ← AC ∧ M[AR]
ADD D1 AC ← AC + M[AR], E ← Cout
LDA D2 AC ← M[AR]
STA D3 M[AR] ← AC
BUN D4 PC ← AR
BSA D5 M[AR] ← PC, PC ← AR + 1
ISZ D6 M[AR] ← M[AR] + 1, if M[AR] + 1 = 0 then PC ← PC+1

- The effective address of the instruction is in AR and was placed there during
timing signal T2 when I = 0, or during timing signal T3 when I = 1
- Memory cycle is assumed to be short enough to complete in a CPU cycle
- The execution of MR instruction starts with T4
AND to AC
D 0T 4: DR ← M[AR] Read operand
D 0T 5: AC ← AC ∧ DR, SC ← 0 AND with AC
ADD to AC
D 1T 4: DR ← M[AR] Read operand
D 1T 5: AC ← AC + DR, E ← Cout, SC ← 0 Add to AC and store carry in E
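The two-step T4/T5 sequences can be mirrored in Python (an added sketch; the dictionary-backed memory is an assumption made for the example):

    MASK16 = 0xFFFF

    def exec_and(memory, ar, ac):
        """AND: D0T4: DR <- M[AR]; D0T5: AC <- AC AND DR."""
        dr = memory[ar]                # T4: read operand
        return ac & dr                 # T5: AND with AC

    def exec_add(memory, ar, ac):
        """ADD: D1T4: DR <- M[AR]; D1T5: AC <- AC + DR, E <- carry out."""
        dr = memory[ar]                # T4: read operand
        total = ac + dr
        return total & MASK16, total >> 16    # new AC and E

    print(exec_add({0x100: 0xFFFF}, 0x100, 0x0001))   # (0, 1): the carry sets E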

Computer Organization Computer Architecture


Basic Computer Organization & Design 36

MEMORY REFERENCE INSTRUCTIONS


LDA: Load to AC
D 2T 4: DR ← M[AR]
D 2T 5: AC ← DR, SC ← 0
STA: Store AC
D 3T 4: M[AR] ← AC, SC ← 0
BUN: Branch Unconditionally
D 4T 4: PC ← AR, SC ← 0
BSA: Branch and Save Return Address
M[AR] ← PC, PC ← AR + 1
[Figure: BSA 135 executed from location 20. Before execution (at T4): PC = 21, AR = 135; location 135 starts the subroutine area, which ends with the indirect return 1 BUN 135. After execution: the return address 21 is stored in M[135] and PC = 136, the first instruction of the subroutine.]
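The same call/return pattern in a few lines of Python (added illustration; a dictionary stands in for memory):

    def bsa(memory, pc, ar):
        """BSA: M[AR] <- PC (save return address), PC <- AR + 1 (enter subroutine)."""
        memory[ar] = pc
        return ar + 1

    def bun_indirect(memory, ar):
        """Indirect BUN used as the return: PC <- M[AR]."""
        return memory[ar]

    memory = {}
    pc = bsa(memory, 21, 135)            # BSA 135 executed from location 20
    print(pc, memory[135])               # 136 21
    print(bun_indirect(memory, 135))     # 21: control returns after the call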

Computer Organization Computer Architecture


Basic Computer Organization & Design 37 MR Instructions

MEMORY REFERENCE INSTRUCTIONS

BSA:
D 5T 4: M[AR] ← PC, AR ← AR + 1
D 5T 5: PC ← AR, SC ← 0

ISZ: Increment and Skip-if-Zero


D6T4: DR ← M[AR]
D6T5: DR ← DR + 1
D6T6: M[AR] ← DR, if (DR = 0) then (PC ← PC + 1), SC ← 0
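ISZ is usually paired with a negative counter; the Python sketch below (added for illustration) shows the increment-and-skip behaviour:

    def isz(memory, ar, pc):
        """ISZ: increment M[AR]; skip the next instruction if the result is zero."""
        memory[ar] = (memory[ar] + 1) & 0xFFFF
        return pc + 1 if memory[ar] == 0 else pc

    memory = {0x050: 0xFFFF}                          # counter holding -1
    print(isz(memory, 0x050, 0x105), memory[0x050])   # 262 0  (PC skips to 0x106)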

Computer Organization Computer Architecture


Basic Computer Organization & Design 38 MR Instructions

FLOWCHART FOR MEMORY REFERENCE


INSTRUCTIONS
Memory-reference instruction

AND ADD LDA STA

D T 4 D 1T 4 D 2T 4 D 3T 4
0
DR ← M[AR] DR ← M[AR] DR ← M[AR] M[AR] ← AC
SC ← 0

D 0T 5 D 1T 5 D 2T 5
AC ← AC ∧ DR AC ← AC + DR AC ← DR
SC ← 0 E ← Cout SC ← 0
SC ← 0

BUN BSA ISZ

D 4T 4 D 5T 4 D 6T 4

PC ← AR M[AR] ← PC DR ← M[AR]
SC ← 0 AR ← AR + 1

D 5T 5 D 6T 5

PC ← AR DR ← DR + 1
SC ← 0

D 6T 6
M[AR] ← DR
If (DR = 0)
then (PC ← PC + 1)
SC ← 0

Computer Organization Computer Architecture


Basic Computer Organization & Design 39 I/O and Interrupt

INPUT-OUTPUT AND INTERRUPT


A Terminal with a keyboard and a Printer
• Input-Output Configuration
[Figure: a serial terminal (keyboard and printer) connected to the computer through receiver and transmitter interfaces. The keyboard interface fills INPR and sets flag FGI; the printer interface is fed from OUTR and uses flag FGO; INPR and OUTR exchange data with AC in parallel, and with the terminal serially.]
INPR Input register - 8 bits
OUTR Output register - 8 bits Serial Communications Path
FGI Input flag - 1 bit Parallel Communications Path
FGO Output flag - 1 bit
IEN Interrupt enable - 1 bit

- The terminal sends and receives serial information


- The serial info. from the keyboard is shifted into INPR
- The serial info. for the printer is stored in the OUTR
- INPR and OUTR communicate with the terminal
serially and with the AC in parallel.
- The flags are needed to synchronize the timing
difference between I/O device and the computer
Computer Organization Computer Architecture
Basic Computer Organization & Design 40 I/O and Interrupt

PROGRAM CONTROLLED DATA TRANSFER

-- CPU side --                             -- I/O device side --
/* Input (initially FGI = 0) */
loop: if FGI = 0 goto loop                 loop: if FGI = 1 goto loop
      AC ← INPR, FGI ← 0                         INPR ← new data, FGI ← 1

/* Output (initially FGO = 1) */
loop: if FGO = 0 goto loop                 loop: if FGO = 1 goto loop
      OUTR ← AC, FGO ← 0                         consume OUTR, FGO ← 1
[Flowchart: program-controlled input — wait until FGI = 1, then AC ← INPR, FGI ← 0; repeat while more characters remain. Program-controlled output — AC ← data, wait until FGO = 1, then OUTR ← AC, FGO ← 0; repeat while more characters remain.]
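A Python sketch of the same polling idea (added illustration; the Device class and its attribute names are invented for the example):

    class Device:
        def __init__(self):
            self.fgo = True          # output flag: 1 means ready for a character
            self.outr = 0            # output register

    def program_controlled_output(dev, char):
        """Busy-wait until FGO = 1, then OUTR <- AC, FGO <- 0."""
        while not dev.fgo:           # loop: if FGO = 0 goto loop
            pass
        dev.outr = char
        dev.fgo = False              # the device sets FGO back to 1 when done

    dev = Device()
    program_controlled_output(dev, ord('W'))
    print(hex(dev.outr))             # 0x57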
Computer Organization Computer Architecture
Basic Computer Organization & Design 41

INPUT-OUTPUT INSTRUCTIONS

D7IT3 = p
IR(i) = Bi, i = 6, …, 11

p: SC ← 0 Clear SC
INP pB11: AC(0-7) ← INPR, FGI ← 0 Input char. to AC
OUT pB10: OUTR ← AC(0-7), FGO ← 0 Output char. from
AC
SKI pB9: if(FGI = 1) then (PC ← PC + 1) Skip on input flag
SKO pB8: if(FGO = 1) then (PC ← PC + 1) Skip on output flag
ION pB7: IEN ← 1 Interrupt enable on
IOF pB6: IEN ← 0 Interrupt enable off

Computer Organization Computer Architecture


Basic Computer Organization & Design 42 I/O and Interrupt

PROGRAM-CONTROLLED INPUT/OUTPUT
• Program-controlled I/O
- Continuous CPU involvement
I/O takes valuable CPU time
- CPU slowed down to I/O speed
- Simple
- Least hardware

Input

LOOP, SKI DEV


BUN LOOP
INP DEV

Output
LOOP, LDA DATA
LOP, SKO DEV
BUN LOP
OUT DEV

Computer Organization Computer Architecture


Basic Computer Organization & Design 43

INTERRUPT INITIATED INPUT/OUTPUT


- Open communication only when some data has to be passed --> interrupt.

- The I/O interface, instead of the CPU, monitors the I/O device.

- When the interface finds that the I/O device is ready for data transfer,
it generates an interrupt request to the CPU

- Upon detecting an interrupt, the CPU stops momentarily the task


it is doing, branches to the service routine to process the data
transfer, and then returns to the task it was performing.

* IEN (Interrupt-enable flip-flop)


- can be set and cleared by instructions
- when cleared, the computer cannot be interrupted

Computer Organization Computer Architecture


Basic Computer Organization & Design 44 I/O and Interrupt

FLOWCHART FOR INTERRUPT CYCLE

[Flowchart: R is the interrupt flip-flop. While R = 0 the normal instruction cycle runs (fetch and decode, then execute); at the end of each instruction, if IEN = 1 and FGI or FGO is set, R ← 1. When R = 1 the interrupt cycle runs instead: the return address is stored in location 0 (M[0] ← PC), the program branches to location 1 (PC ← 1), and IEN ← 0, R ← 0.]

- The interrupt cycle is a HW implementation of a branch


and save return address operation.
- At the beginning of the next instruction cycle, the
instruction that is read from memory is in address 1.
- At memory address 1, the programmer must store a branch instruction
that sends the control to an interrupt service routine
- The instruction that returns the control to the original
program is "indirect BUN 0"
Computer Organization Computer Architecture
Basic Computer Organization & Design 45 I/O and Interrupt

REGISTER TRANSFER OPERATIONS IN INTERRUPT CYCLE


[Figure: memory before and after the interrupt cycle. Before: PC = 256 in the main program (locations 1-255); location 1 holds 0 BUN 1120, the branch to the I/O service routine at 1120, which ends with 1 BUN 0 (indirect return). After the interrupt cycle: location 0 holds the return address 256 and PC = 1, so the next instruction executed is the branch to the service routine.]

Register Transfer Statements for Interrupt Cycle


- R F/F ← 1 if IEN (FGI + FGO)T0′T1′T2′
⇔ T0′T1′T2′ (IEN)(FGI + FGO): R ← 1

- The fetch and decode phases of the instruction cycle
must be modified: replace T0, T1, T2 with R'T0, R'T1, R'T2
- The interrupt cycle :
RT0: AR ← 0, TR ← PC
RT1: M[AR] ← TR, PC ← 0
RT2: PC ← PC + 1, IEN ← 0, R ← 0, SC ← 0
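Those three register-transfer steps amount to the following Python sketch (added for illustration):

    def interrupt_cycle(memory, pc):
        """RT0: AR <- 0, TR <- PC;  RT1: M[AR] <- TR, PC <- 0;
        RT2: PC <- PC + 1, IEN <- 0, R <- 0, SC <- 0."""
        ar, tr = 0, pc               # RT0
        memory[ar], pc = tr, 0       # RT1
        return pc + 1                # RT2: next fetch comes from location 1

    memory = {}
    print(interrupt_cycle(memory, 256), memory[0])   # 1 256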

Computer Organization Computer Architecture


Basic Computer Organization & Design 46 I/O and Interrupt

FURTHER QUESTIONS ON INTERRUPT

How can the CPU recognize the device


requesting an interrupt ?

Since different devices are likely to require


different interrupt service routines, how can
the CPU obtain the starting address of the
appropriate routine in each case ?

Should any device be allowed to interrupt the


CPU while another interrupt is being serviced ?

How can the situation be handled when two or


more interrupt requests occur simultaneously ?

Computer Organization Computer Architecture


Basic Computer Organization & Design 47 Description
COMPLETE COMPUTER DESCRIPTION - Flowchart of Operations

[Flowchart: start with SC ← 0, IEN ← 0, R ← 0. If R = 0 (instruction cycle): R'T0: AR ← PC; R'T1: IR ← M[AR], PC ← PC + 1; R'T2: AR ← IR(0-11), I ← IR(15), D0...D7 ← decode IR(12-14). If R = 1 (interrupt cycle): RT0: AR ← 0, TR ← PC; RT1: M[AR] ← TR, PC ← 0; RT2: PC ← PC + 1, IEN ← 0, R ← 0, SC ← 0. After decoding, D7 and I select: D7'IT3: AR ← M[AR] (indirect); D7'I'T3: nothing (direct); D7I'T3: execute a register-reference instruction; D7IT3: execute an I/O instruction; memory-reference execution starts at D7'T4.]
Computer Organization Computer Architecture


Basic Computer Organization & Design 48 Description
COMPLETE COMPUTER DESCRIPTION
Microoperations
Fetch R′T0: AR ← PC
R′T1: IR ← M[AR], PC ← PC + 1
Decode R′T2: D0, ..., D7 ← Decode IR(12 ~ 14),
AR ← IR(0 ~ 11), I ← IR(15)
Indirect D7′IT3: AR ← M[AR]
Interrupt
T0′T1′T2′(IEN)(FGI + FGO): R ←1
RT0: AR ← 0, TR ← PC
RT1: M[AR] ← TR, PC ← 0
RT2: PC ← PC + 1, IEN ← 0, R ← 0, SC ← 0
Memory-Reference
AND D0T4: DR ← M[AR]
D0T5: AC ← AC ∧ DR, SC ← 0
ADD D1T4: DR ← M[AR]
D1T5: AC ← AC + DR, E ← Cout, SC ← 0
LDA D2T4: DR ← M[AR]
D2T5: AC ← DR, SC ← 0
STA D3T4: M[AR] ← AC, SC ← 0
BUN D4T4: PC ← AR, SC ← 0
BSA D5T4: M[AR] ← PC, AR ← AR + 1
D5T5: PC ← AR, SC ← 0
ISZ D6T4: DR ← M[AR]
D6T5: DR ← DR + 1
D6T6: M[AR] ← DR, if(DR=0) then (PC ← PC + 1),
SC ← 0

Computer Organization Computer Architecture


Basic Computer Organization & Design 49 Description
COMPLETE COMPUTER DESCRIPTION
Microoperations

Register-Reference
D7I′T3 = r (Common to all register-reference instr)
IR(i) = Bi (i = 0,1,2, ..., 11)
r: SC ← 0
CLA rB11: AC ← 0
CLE rB10: E←0
CMA rB9: AC ← AC′
CME rB8: E ← E′
CIR rB7: AC ← shr AC, AC(15) ← E, E ← AC(0)
CIL rB6: AC ← shl AC, AC(0) ← E, E ← AC(15)
INC rB5: AC ← AC + 1
SPA rB4: If(AC(15) =0) then (PC ← PC + 1)
SNA rB3: If(AC(15) =1) then (PC ← PC + 1)
SZA rB2: If(AC = 0) then (PC ← PC + 1)
SZE rB1: If(E=0) then (PC ← PC + 1)
HLT rB0: S←0

Input-Output   D7IT3 = p (Common to all input-output instructions)
               IR(i) = Bi (i = 6, 7, ..., 11)
       p:    SC ← 0
INP    pB11: AC(0-7) ← INPR, FGI ← 0
OUT    pB10: OUTR ← AC(0-7), FGO ← 0
SKI    pB9:  If(FGI = 1) then (PC ← PC + 1)
SKO    pB8:  If(FGO = 1) then (PC ← PC + 1)
ION    pB7:  IEN ← 1
IOF    pB6:  IEN ← 0

Computer Organization Computer Architecture


Basic Computer Organization & Design 50 Design of Basic Computer

DESIGN OF BASIC COMPUTER (BC)
Hardware Components of BC
A memory unit: 4096 x 16.
Registers:
AR, PC, DR, AC, IR, TR, OUTR, INPR, and SC
Flip-Flops(Status):
I, S, E, R, IEN, FGI, and FGO
Decoders: a 3x8 Opcode decoder
a 4x16 timing decoder
Common bus: 16 bits
Control logic gates:
Adder and Logic circuit: Connected to AC

Control Logic Gates


- Input Controls of the nine registers
- Read and Write Controls of memory
- Set, Clear, or Complement Controls of the flip-flops
- S2, S1, S0 Controls to select a register for the bus
- AC, and Adder and Logic circuit

Computer Organization Computer Architecture


Basic Computer Organization & Design 51 Design of Basic Computer

CONTROL OF REGISTERS AND MEMORY

Address Register: AR
Scan all of the register transfer statements that change the content of AR:
R’T0: AR ← PC LD(AR)
R’T2: AR ← IR(0-11) LD(AR)
D’7IT3: AR ← M[AR] LD(AR)
RT0: AR ← 0 CLR(AR)
D5T4: AR ← AR + 1 INR(AR)

LD(AR) = R'T0 + R'T2 + D'7IT3


CLR(AR) = RT0
INR(AR) = D5T4
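Evaluated as Boolean functions (an added sketch; the argument encoding is an assumption made for the example):

    def ar_controls(r, t, d7, i, d5):
        """LD(AR) = R'T0 + R'T2 + D7'IT3, CLR(AR) = RT0, INR(AR) = D5T4."""
        ld  = (not r and t == 0) or (not r and t == 2) or (not d7 and i and t == 3)
        clr = r and t == 0
        inr = d5 and t == 4
        return ld, clr, inr

    print(ar_controls(r=False, t=0, d7=False, i=False, d5=False))   # (True, False, False)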

[Figure: gate structure for AR control — LD(AR) is the OR of R'T0, R'T2, and D7'IT3; CLR(AR) = RT0; INR(AR) = D5T4. AR receives 12 bits from the bus and drives 12 bits onto the bus, clocked by the common clock.]

Computer Organization Computer Architecture


Basic Computer Organization & Design 52 Design of Basic Computer

CONTROL OF FLAGS
IEN: Interrupt Enable Flag
pB7: IEN ← 1 (I/O Instruction)
pB6: IEN ← 0 (I/O Instruction)
RT2: IEN ← 0 (Interrupt)

p = D7IT3 (Input/Output Instruction)

[Figure: IEN is implemented with a JK flip-flop. J = p·B7 (ION sets IEN), K = p·B6 + R·T2 (IOF or the interrupt cycle clears IEN), where p = D7·I·T3.]

Computer Organization Computer Architecture


Basic Computer Organization & Design 53 Design of Basic Computer

CONTROL OF COMMON BUS

[Figure: an encoder converts the seven register/memory select signals x1 ... x7 into the bus select inputs S2, S1, S0 of the multiplexers.]

selected
x1 x2 x3 x4 x5 x6 x7 S2 S1 S0 register
0 0 0 0 0 0 0 0 0 0 none
1 0 0 0 0 0 0 0 0 1 AR
0 1 0 0 0 0 0 0 1 0 PC
0 0 1 0 0 0 0 0 1 1 DR
0 0 0 1 0 0 0 1 0 0 AC
0 0 0 0 1 0 0 1 0 1 IR
0 0 0 0 0 1 0 1 1 0 TR
0 0 0 0 0 0 1 1 1 1 Memory

For x1 (AR selected onto the bus):
  D4T4: PC ← AR
  D5T5: PC ← AR
  x1 = D4T4 + D5T5
Computer Organization Computer Architecture
Basic Computer Organization & Design 54 Design of AC Logic

DESIGN OF ACCUMULATOR LOGIC
Circuits associated with AC
[Figure: the Adder and Logic circuit takes 16 bits from DR and 8 bits from INPR and feeds the 16-bit AC, which drives the bus; the control gates generate AC's LD, INR, and CLR signals from the clocked control conditions.]

All the statements that change the content of AC


D0T5: AC ← AC ∧ DR AND with DR
D1T5: AC ← AC + DR Add with DR
D2T5: AC ← DR Transfer from DR
pB11: AC(0-7) ← INPR Transfer from
INPR
rB9: AC ← AC′ Complement
rB7 : AC ← shr AC, AC(15) ← E Shift right
rB6 : AC ← shl AC, AC(0) ← E Shift left
rB11 : AC ← 0 Clear
rB5 : AC ← AC + 1 Increment
Computer Organization Computer Architecture
Basic Computer Organization & Design 55 Design of AC Logic

CONTROL OF AC REGISTER
Gate structures for controlling
the LD, INR, and CLR of AC

[Figure: LD(AC) is the OR of D0T5 (AND), D1T5 (ADD), D2T5 (DR transfer), pB11 (INPR), rB9 (COM), rB7 (SHR), and rB6 (SHL); INR(AC) = rB5 (INC); CLR(AC) = rB11 (CLA). The Adder and Logic circuit feeds 16 bits into AC, which drives the bus.]
Computer Organization Computer Architecture
Basic Computer Organization & Design 56 Design of AC Logic

ALU (ADDER AND LOGIC CIRCUIT)

One stage of Adder and Logic circuit


DR(i)
AC(i)

AND

C LD
i ADD
FA I J Q
i
AC(i)
DR
C
i+1
K
INPR
From
INPR
bit(i)
COM

SHR

AC(i+1)
SHL

AC(i-1)

Computer Organization Computer Architecture


Programming the Basic Computer 57

PROGRAMMING THE BASIC COMPUTER

Introduction

Machine Language

Assembly Language

Assembler

Program Loops

Programming Arithmetic and Logic Operations

Subroutines

Input-Output Programming

Computer Organization Computer Architecture


Programming the Basic Computer 58 Introduction

INTRODUCTION
Those concerned with computer architecture should
have a knowledge of both hardware and software
because the two branches influence each other.

Instruction Set of the Basic Computer


Symbol Hexa code Description
AND 0 or 8   AND M to AC                                  (m: effective address;
ADD 1 or 9   Add M to AC, carry to E                       M: memory word (operand)
LDA 2 or A   Load AC from M                                found at m)
STA 3 or B   Store AC in M
BUN 4 or C   Branch unconditionally to m
BSA 5 or D   Save return address in m and branch to m+1
ISZ 6 or E Increment M and skip if zero
CLA 7800 Clear AC
CLE 7400 Clear E
CMA 7200 Complement AC
CME 7100 Complement E
CIR 7080 Circulate right E and AC
CIL 7040 Circulate left E and AC
INC 7020 Increment AC, carry to E
SPA 7010 Skip if AC is positive
SNA 7008 Skip if AC is negative
SZA 7004 Skip if AC is zero
SZE 7002 Skip if E is zero
HLT 7001 Halt computer
INP F800 Input information and clear flag
OUT F400 Output information and clear flag
SKI F200 Skip if input flag is on
SKO F100 Skip if output flag is on
ION F080 Turn interrupt on
IOF F040 Turn interrupt off
Computer Organization Computer Architecture
Programming the Basic Computer 59 Machine Language

MACHINE LANGUAGE
• Program
A list of instructions or statements for directing
the computer to perform a required data
processing task

• Various types of programming languages


- Hierarchy of programming languages

• Machine-language
- Binary code
- Octal or hexadecimal code

• Assembly-language (Assembler)
- Symbolic code

• High-level language (Compiler)

Computer Organization Computer Architecture


Programming the Basic Computer 60 Machine Language

COMPARISON OF PROGRAMMING LANGUAGES


• Binary Program to Add Two Numbers • Hexa program
Location Instruction
Location Instruction Code
000 2004
0 0010 0000 0000 0100 001 1005
1 0001 0000 0000 0101 002 3006
10 0011 0000 0000 0110 003 7001
11 0111 0000 0000 0001 004 0053
100 0000 0000 0101 0011 005 FFE9
101 1111 1111 1110 1001 006 0000
110 0000 0000 0000 0000

• Program with Symbolic OP-Code • Assembly-Language Program


Location Instruction Comments ORG 0 /Origin of program is location 0
000 LDA 004 Load 1st operand into AC LDA A /Load operand from location A
001 ADD 005 Add 2nd operand to AC ADD B /Add operand from location B
002 STA 006 Store sum in location 006 STA C /Store sum in location C
003 HLT Halt computer HLT /Halt computer
004 0053 1st operand A, DEC 83 /Decimal operand
005 FFE9 2nd operand (negative) B, DEC -23 /Decimal operand
006 0000 Store sum here C, DEC 0 /Sum stored in location C
END /End of symbolic program

• Fortran Program
INTEGER A, B, C
DATA A,83 / B,-23
C=A+B
END

Computer Organization Computer Architecture


Programming the Basic Computer 61 Assembly Language

ASSEMBLY LANGUAGE
Syntax of the BC assembly language
Each line is arranged in three columns called fields
Label field
- May be empty or may specify a symbolic
address consists of up to 3 characters
- Terminated by a comma
Instruction field
- Specifies a machine or a pseudo instruction
- May specify one of
* Memory reference instr. (MRI)
MRI consists of two or three symbols separated by spaces.
ADD OPR (direct address MRI)
ADD PTR I (indirect address MRI)
* Register reference or input-output instr.
Non-MRI does not have an address part
* Pseudo instr. with or without an operand
Symbolic address used in the instruction field must be
defined somewhere as a label
Comment field
- May be empty or may include a comment

Computer Organization Computer Architecture


Programming the Basic Computer 62 Assembly Language

PSEUDO-INSTRUCTIONS
ORG N
Hexadecimal number N is the memory loc.
for the instruction or operand listed in the following line
END
Denotes the end of symbolic program
DEC N
Signed decimal number N to be converted to the binary
HEX N
Hexadecimal number N to be converted to the binary

Example: Assembly language program to subtract two numbers

ORG 100 / Origin of program is location 100


LDA SUB / Load subtrahend to AC
CMA / Complement AC
INC / Increment AC
ADD MIN / Add minuend to AC
STA DIF / Store difference
HLT / Halt computer
MIN, DEC 83 / Minuend
SUB, DEC -23 / Subtrahend
DIF, HEX 0 / Difference stored here
END / End of symbolic program

Computer Organization Computer Architecture


Programming the Basic Computer 63 Assembly Language

TRANSLATION TO BINARY

Hexadecimal Code
Location Content Symbolic Program

ORG 100
100 2107 LDA SUB
101 7200 CMA
102 7020 INC
103 1106 ADD MIN
104 3108 STA DIF
105 7001 HLT
106 0053 MIN, DEC 83
107 FFE9 SUB, DEC -23
108 0000 DIF, HEX 0
END

Computer Organization Computer Architecture


Programming the Basic Computer 64 Assembler

ASSEMBLER - FIRST PASS -


Assembler
Source Program - Symbolic Assembly Language Program
Object Program - Binary Machine Language Program
Two pass assembler
1st pass: generates a table that correlates all user defined
(address) symbols with their binary equivalent
value
2nd pass: binary translation
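A compact Python sketch of the first pass (an added illustration of the idea, not the slides' exact algorithm; it assumes the BC assembly syntax used in the examples above):

    def first_pass(lines):
        """Build the address-symbol table: label -> location counter (LC)."""
        symbols, lc = {}, 0
        for line in lines:
            fields = line.split('/')[0].split()        # drop the comment field
            if not fields:
                continue
            if fields[0].endswith(','):                # label field, e.g. "MIN,"
                symbols[fields[0].rstrip(',')] = lc
                fields = fields[1:]
            if fields and fields[0] == 'ORG':
                lc = int(fields[1], 16)                # ORG resets the LC
                continue
            if fields and fields[0] == 'END':
                break
            lc += 1
        return symbols

    program = ["ORG 100", "LDA SUB", "CMA", "INC", "ADD MIN", "STA DIF",
               "HLT", "MIN, DEC 83", "SUB, DEC -23", "DIF, HEX 0", "END"]
    print({k: hex(v) for k, v in first_pass(program).items()})
    # {'MIN': '0x106', 'SUB': '0x107', 'DIF': '0x108'}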

First pass

[Flowchart: set LC := 0; scan the next line of code; if it is ORG, set LC from the operand; if it is END, go to the second pass; if the line has a label, store the symbol in the address-symbol table together with the value of LC; increment LC and repeat.]

Computer Organization Computer Architecture


Programming the Basic Computer 65 Assembler

ASSEMBLER - SECOND PASS -


Second Pass

Machine instructions are translated by means of table-lookup procedures;


(1. Pseudo-Instruction Table, 2. MRI Table, 3. Non-MRI Table
4. Address Symbol Table)

Second pass

[Flowchart: set LC := 0; scan the next line of code. Pseudo-instructions: ORG sets LC, END finishes the pass. DEC or HEX: convert the operand to binary and store it at the location given by LC. Non-MRI: store its binary equivalent at LC (error if the instruction is invalid). MRI: get the operation code (bits 2-4), search the address-symbol table for the binary equivalent of the symbolic address (bits 5-16), set the first bit to 1 if the instruction ends with I, otherwise to 0, assemble all parts, and store the instruction at LC; flag an error in the line of code if the symbol is undefined. Increment LC and repeat.]

Computer Organization Computer Architecture


Programming the Basic Computer 66 Program Loops

PROGRAM LOOPS
Loop: A sequence of instructions that are executed many times,
each with a different set of data
Fortran program to add 100 numbers:

DIMENSION A(100)
INTEGER SUM, A
SUM = 0
DO 3 J = 1, 100
3 SUM = SUM + A(J)
Assembly-language program to add 100 numbers:

ORG 100 / Origin of program is HEX 100


LDA ADS / Load first address of operand
STA PTR / Store in pointer
LDA NBR / Load -100
STA CTR / Store in counter
CLA / Clear AC
LOP, ADD PTR I / Add an operand to AC
ISZ PTR / Increment pointer
ISZ CTR / Increment counter
BUN LOP / Repeat loop again
STA SUM / Store sum
HLT / Halt
ADS, HEX 150 / First address of operands
PTR, HEX 0 / Reserved for a pointer
NBR, DEC -100 / Initial value for a counter
CTR, HEX 0 / Reserved for a counter
SUM, HEX 0 / Sum is stored here
ORG 150 / Origin of operands is HEX 150
DEC 75 / First operand
.
.
DEC 23 / Last operand
END / End of symbolic program
Computer Organization Computer Architecture
Programming the Basic Computer 67 Programming Arithmetic and Logic Operations

PROGRAMMING ARITHMETIC AND LOGIC OPERATIONS

Implementation of Arithmetic and Logic Operations


- Software Implementation
- Implementation of an operation with a program
using machine instruction set
- Usually when the operation is not included
in the instruction set

- Hardware Implementation
- Implementation of an operation in a computer
with one machine instruction

Software Implementation example:

* Multiplication
- For simplicity, unsigned positive numbers
- 8-bit numbers -> 16-bit product

Computer Organization Computer Architecture


Programming the Basic Computer 68 Programming Arithmetic and Logic Operations

FLOWCHART OF A PROGRAM - Multiplication -


[Flowchart: X holds the multiplicand, Y the multiplier, P the product. Initialise CTR ← -8, P ← 0. Loop: E ← 0; AC ← Y; cir EAC (the low multiplier bit goes into E); Y ← AC. If E = 1 then P ← P + X and E ← 0. AC ← X; cil EAC (shift the multiplicand left); X ← AC. CTR ← CTR + 1; repeat while CTR ≠ 0, then stop.

Worked example with four significant digits, X = 0000 1111 and Y = 0000 1011: the partial product P takes the values 0000 0000, 0000 1111, 0010 1101, 0010 1101, 1010 0101 as X is shifted through 0000 1111, 0001 1110, 0011 1100, 0111 1000, giving 0000 1111 x 0000 1011 = 1010 0101.]

Computer Organization Computer Architecture


Programming the Basic Computer 69 Programming Arithmetic and Logic Operations

ASSEMBLY LANGUAGE PROGRAM - Multiplication -

ORG 100
LOP, CLE / Clear E
LDA Y / Load multiplier
CIR / Transfer multiplier bit to E
STA Y / Store shifted multiplier
SZE / Check if bit is zero
BUN ONE / Bit is one; goto ONE
BUN ZRO / Bit is zero; goto ZRO
ONE, LDA X / Load multiplicand
ADD P / Add to partial product
STA P / Store partial product
CLE / Clear E
ZRO, LDA X / Load multiplicand
CIL / Shift left
STA X / Store shifted multiplicand
ISZ CTR / Increment counter
BUN LOP / Counter not zero; repeat loop
HLT / Counter is zero; halt
CTR, DEC -8 / This location serves as a counter
X, HEX 000F / Multiplicand stored here
Y, HEX 000B / Multiplier stored here
P, HEX 0 / Product formed here
END
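The loop can be checked against ordinary multiplication with a short Python sketch (added illustration of the same shift-and-add idea):

    def multiply(x, y, bits=8):
        """Shift-and-add product of two unsigned 8-bit numbers (16-bit result)."""
        p = 0
        for _ in range(bits):        # the CTR counts 8 iterations
            if y & 1:                # multiplier bit shifted into E
                p += x               # P <- P + X
            x <<= 1                  # shift multiplicand left (CIL)
            y >>= 1                  # shift multiplier right (CIR)
        return p & 0xFFFF

    print(multiply(0x000F, 0x000B))  # 165, i.e. 0x000F * 0x000B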

Computer Organization Computer Architecture


Programming the Basic Computer 70 Programming Arithmetic and Logic Operations
ASSEMBLY LANGUAGE PROGRAM
- Double Precision Addition -

LDA AL / Load A low


ADD BL / Add B low, carry in E
STA CL / Store in C low
CLA / Clear AC
CIL / Circulate to bring carry into AC(16)
ADD AH / Add A high and carry
ADD BH / Add B high
STA CH / Store in C high
HLT

Computer Organization Computer Architecture


Programming the Basic Computer 71 Programming Arithmetic and Logic Operations
ASSEMBLY LANGUAGE PROGRAM
- Logic and Shift Operations -
• Logic operations

- BC instructions : AND, CMA, CLA


- Program for OR operation

LDA A / Load 1st operand


CMA / Complement to get A’
STA TMP / Store in a temporary location
LDA B / Load 2nd operand B
CMA / Complement to get B’
AND TMP / AND with A’ to get A’ AND B’
CMA / Complement again to get A OR B

• Shift operations - BC has Circular Shift only

- Logical shift-right operation - Logical shift-left operation


CLE CLE
CIR CIL

- Arithmetic right-shift operation

CLE / Clear E to 0
SPA / Skip if AC is positive
CME / AC is negative
CIR / Circulate E and AC

Computer Organization Computer Architecture


Programming the Basic Computer 72 Subroutines

SUBROUTINES
Subroutine

- A set of common instructions that can be used in a program many times.


- Subroutine linkage : a procedure for branching
to a subroutine and returning to the main program

Example

Loc. ORG 100 / Main program


100 LDA X / Load X
101 BSA SH4 / Branch to subroutine
102 STA X / Store shifted number
103 LDA Y / Load Y
104 BSA SH4 / Branch to subroutine again
105 STA Y / Store shifted number
106 HLT
107 X, HEX 1234
108 Y, HEX 4321
/ Subroutine to shift left 4 times
109 SH4, HEX 0 / Store return address here
10A CIL / Circulate left once
10B CIL
10C CIL
10D CIL / Circulate left fourth time
10E AND MSK / Set AC(13-16) to zero
10F BUN SH4 I / Return to main program
110 MSK, HEX FFF0 / Mask operand
END

Computer Organization Computer Architecture


Programming the Basic Computer 73 Subroutines

SUBROUTINE PARAMETERS AND DATA LINKAGE


Linkage of Parameters and Data between the Main Program and a Subroutine
- via Registers
- via Memory locations
- ….

Example: Subroutine performing LOGICAL OR operation; Need two parameters


Loc. ORG 200
200 LDA X / Load 1st operand into AC
201 BSA OR / Branch to subroutine OR
202 HEX 3AF6 / 2nd operand stored here
203 STA Y / Subroutine returns here
204 HLT
205 X, HEX 7B95 / 1st operand stored here
206 Y, HEX 0 / Result stored here
207 OR, HEX 0 / Subroutine OR
208 CMA / Complement 1st operand
209 STA TMP / Store in temporary location
20A LDA OR I / Load 2nd operand
20B CMA / Complement 2nd operand
20C AND TMP / AND complemented 1st operand
20D CMA / Complement again to get OR
20E ISZ OR / Increment return address
20F BUN OR I / Return to main program
210 TMP, HEX 0 / Temporary storage
END
Computer Organization Computer Architecture
Programming the Basic Computer 74 Subroutines

SUBROUTINE - Moving a Block of Data -


/ Main program
BSA MVE / Branch to subroutine
HEX 100 / 1st address of source data
HEX 200 / 1st address of destination data
DEC -16 / Number of items to move
HLT
MVE, HEX 0 / Subroutine MVE
LDA MVE I / Bring address of source
STA PT1 / Store in 1st pointer
ISZ MVE / Increment return address
LDA MVE I / Bring address of destination
STA PT2 / Store in 2nd pointer
ISZ MVE / Increment return address
LDA MVE I / Bring number of items
STA CTR / Store in counter
ISZ MVE / Increment return address
LOP, LDA PT1 I / Load source item
     STA PT2 I / Store in destination
     ISZ PT1 / Increment source pointer
     ISZ PT2 / Increment destination pointer
     ISZ CTR / Increment counter
     BUN LOP / Repeat 16 times
     BUN MVE I / Return to main program
PT1, --
PT2, --
CTR, --

• Equivalent Fortran subroutine
     SUBROUTINE MVE (SOURCE, DEST, N)
     DIMENSION SOURCE(N), DEST(N)
     DO 20 I = 1, N
20   DEST(I) = SOURCE(I)
     RETURN
     END

Computer Organization Computer Architecture


Programming the Basic Computer 75 Input Output Program

INPUT OUTPUT PROGRAM


Program to Input one Character(Byte)

CIF, SKI / Check input flag


BUN CIF / Flag=0, branch to check again
INP / Flag=1, input character
OUT / Display to ensure correctness
STA CHR / Store character
HLT
CHR, -- / Store character here

Program to Output a Character

LDA CHR / Load character into AC


COF, SKO / Check output flag
BUN COF / Flag=0, branch to check again
OUT / Flag=1, output character
HLT
CHR, HEX 0057 / Character is "W"

Computer Organization Computer Architecture


Programming the Basic Computer 76 Input Output Program

CHARACTER MANIPULATION

Subroutine to Input 2 Characters and pack into a word

IN2, -- / Subroutine entry


FST, SKI
BUN FST
INP / Input 1st character
OUT
BSA SH4 / Logical Shift left 4 bits
BSA SH4 / 4 more bits
SCD, SKI
BUN SCD
INP / Input 2nd character
OUT
BUN IN2 I / Return

Computer Organization Computer Architecture


Programming the Basic Computer 77 Input Output Program

PROGRAM INTERRUPT
Tasks of Interrupt Service Routine
- Save the Status of CPU
Contents of processor registers and Flags

- Identify the source of Interrupt


Check which flag is set

- Service the device whose flag is set


(Input Output Subroutine)

- Restore contents of processor registers and flags

- Turn the interrupt facility on

- Return to the running program


Load PC of the interrupted program

Computer Organization Computer Architecture


Programming the Basic Computer 78 Input Output Program

INTERRUPT SERVICE ROUTINE


Loc.
0 ZRO, - / Return address stored here
1 BUN SRV / Branch to service routine
100 CLA / Portion of running program
101 ION / Turn on interrupt facility
102 LDA X
103 ADD Y / Interrupt occurs here
104 STA Z / Program returns here after interrupt
/ Interrupt service routine
200 SRV, STA SAC / Store content of AC
CIR / Move E into AC(1)
STA SE / Store content of E
SKI / Check input flag
BUN NXT / Flag is off, check next flag
INP / Flag is on, input character
OUT / Print character
STA PT1 I / Store it in input buffer
ISZ PT1 / Increment input pointer
NXT, SKO / Check output flag
BUN EXT / Flag is off, exit
LDA PT2 I / Load character from output buffer
OUT / Output character
ISZ PT2 / Increment output pointer
EXT, LDA SE / Restore value of AC(1)
CIL / Shift it to E
LDA SAC / Restore content of AC
ION / Turn interrupt on
BUN ZRO I / Return to running program
SAC, - / AC is stored here
SE, - / E is stored here
PT1, - / Pointer of input buffer
PT2, - / Pointer of output buffer

Computer Organization Computer Architecture


Microprogrammed Control 79

MICROPROGRAMMED
CONTROL
• Control Memory

• Sequencing Microinstructions

• Microprogram Example

• Design of Control Unit

• Microinstruction Format

• Nanostorage and Nanoprogram

Computer Organization Computer Architecture


Microprogrammed Control 80 Implementation of Control Unit

COMPARISON OF CONTROL UNIT IMPLEMENTATIONS


Control Unit Implementation

[Figure: two implementations. Hard-wired: IR and the status flip-flops feed combinational logic circuits, driven by the timing state and instruction-cycle state, which produce the control points of the CPU. Microprogrammed: IR and the status flip-flops feed next-address generation logic, which loads the control storage address register (CSAR); the control storage (μ-program memory) output is held in the control storage data register (CSDR) and supplies the control signals to the CPU.]

Computer Organization Computer Architecture


Microprogrammed Control 81

TERMINOLOGY
Microprogram
- Program stored in memory that generates all the control signals required
to execute the instruction set correctly
- Consists of microinstructions

Microinstruction
- Contains a control word and a sequencing word
Control Word - All the control information required for one clock cycle
Sequencing Word - Information needed to decide
the next microinstruction address
- Vocabulary to write a microprogram

Control Memory(Control Storage: CS)


- Storage in the microprogrammed control unit to store the microprogram

Writeable Control Memory(Writeable Control Storage:WCS)


- CS whose contents can be modified
-> Allows the microprogram to be changed
-> Instruction set can be changed or modified

Dynamic Microprogramming
- Computer system whose control unit is implemented with
a microprogram in WCS
- Microprogram can be changed by a systems programmer or a user
Computer Organization Computer Architecture
Microprogrammed Control 82

TERMINOLOGY

Sequencer (Microprogram Sequencer)


A Microprogram Control Unit that determines
the Microinstruction Address to be executed
in the next clock cycle

- In-line Sequencing
- Branch
- Conditional Branch
- Subroutine
- Loop
- Instruction OP-code mapping

Computer Organization Computer Architecture


Microprogrammed Control 83 Sequencing

MICROINSTRUCTION SEQUENCING

[Figure: the instruction code feeds mapping logic; the status bits and branch logic drive the multiplexer select; the multiplexers choose the next address for the control address register (CAR) from the mapping logic, the subroutine register (SBR), the branch address field, or the incrementer; CAR addresses the control memory (ROM), whose output supplies the microoperations, the branch address, and the status-bit select.]

Sequencing Capabilities Required in a Control Storage


- Incrementing of the control address register
- Unconditional and conditional branches
- A mapping process from the bits of the machine
instruction to an address for control memory
- A facility for subroutine call and return
Computer Organization Computer Architecture
Microprogrammed Control 84 Sequencing

CONDITIONAL BRANCH

[Figure: the control address register is loaded either from the next-address field of the control memory (branch) or from the incrementer (fall through); a multiplexer selected by the condition-select field tests one of the status bits to decide which.]

Conditional Branch
If Condition is true, then Branch (address from
the next address field of the current microinstruction)
else Fall Through
Conditions to Test: O(overflow), N(negative),
Z(zero), C(carry), etc.

Unconditional Branch
Fixing the value of one status bit at the input of the multiplexer to 1
Computer Organization Computer Architecture
Microprogrammed Control 85 Sequencing

MAPPING OF INSTRUCTIONS
Direct Mapping
  OP-codes of Instructions        Address   Control Storage
  ADD  0000                       0000      ADD Routine
  AND  0001                       0001      AND Routine
  LDA  0010                       0010      LDA Routine
  STA  0011                       0011      STA Routine
  BUN  0100                       0100      BUN Routine

Mapping: OP-code xxxx maps to address 10 xxxx 010
  10 0000 010   ADD Routine
  10 0001 010   AND Routine
  10 0010 010   LDA Routine
  10 0011 010   STA Routine
  10 0100 010   BUN Routine

Computer Organization Computer Architecture


Microprogrammed Control 86 Sequencing

MAPPING OF INSTRUCTIONS TO MICROROUTINES


Mapping from the OP-code of an instruction to the
address of the Microinstruction which is the starting
microinstruction of its execution microprogram

Machine OP-code
Instruction 1 0 1 1 Address

Mapping bits 0 x x x x 0 0
Microinstruction
address 0 1 0 1 1 0 0
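The same bit placement in one line of Python (added illustration; it implements the 0 xxxx 00 mapping used by the example microprogram later in this chapter):

    def map_opcode(opcode):
        """Map OP-code xxxx to the 7-bit control address 0 xxxx 00."""
        return opcode << 2

    print(format(map_opcode(0b1011), '07b'))   # 0101100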

Mapping function implemented by ROM or PLA


OP-code

Mapping memory
(ROM or PLA)

Control address register

Control Memory

Computer Organization Computer Architecture


Microprogrammed Control 87 Microprogram

MICROPROGRAM EXAMPLE
Computer Configuration

[Figure: the example computer has a 2048 x 16 main memory addressed through an 11-bit AR, loaded via a MUX from PC or DR(0-10); PC is 11 bits, DR is 16 bits; the arithmetic, logic, and shift unit connects DR and the 16-bit AC. The control unit holds a 128 x 20 control memory addressed by the 7-bit CAR, with a 7-bit SBR for subroutine return.]

Computer Organization Computer Architecture


Microprogrammed Control 88 Microprogram

MACHINE INSTRUCTION FORMAT

Machine instruction format


15 14 11 10 0
I Opcode Address

Sample machine instructions


Symbol OP-code Description
EA is the effective address
ADD 0000 AC ← AC + M[EA]
BRANCH 0001 if (AC < 0) then (PC ← EA)
STORE 0010 M[EA] ← AC
EXCHANGE 0011 AC ← M[EA], M[EA] ← AC

Microinstruction Format
3 3 3 2 2 7
F1 F2 F3 CD BR AD

F1, F2, F3: Microoperation fields


CD: Condition for branching
BR: Branch field
AD: Address field

Computer Organization Computer Architecture


Microprogrammed Control 89 Microprogram

MICROINSTRUCTION FIELD DESCRIPTIONS - F1,F2,F3


F1  Microoperation     Symbol      F2  Microoperation     Symbol
000 None               NOP         000 None               NOP
001 AC ← AC + DR       ADD         001 AC ← AC - DR       SUB
010 AC ← 0             CLRAC       010 AC ← AC ∨ DR       OR
011 AC ← AC + 1        INCAC       011 AC ← AC ∧ DR       AND
100 AC ← DR            DRTAC       100 DR ← M[AR]         READ
101 AR ← DR(0-10)      DRTAR       101 DR ← AC            ACTDR
110 AR ← PC            PCTAR       110 DR ← DR + 1        INCDR
111 M[AR] ← DR         WRITE       111 DR(0-10) ← PC      PCTDR

F3 Microoperation Symbol
000 None NOP
001 AC ← AC ⊕ DR XOR
010 AC ← AC’ COM
011 AC ← shl AC SHL
100 AC ← shr AC SHR
101 PC ← PC + 1 INCPC
110 PC ← AR ARTPC
111 Reserved

Computer Organization Computer Architecture


Microprogrammed Control 90 Microprogram

MICROINSTRUCTION FIELD DESCRIPTIONS - CD, BR

CD Condition Symbol Comments


00 Always = 1 U Unconditional branch
01 DR(15) I Indirect address bit
10 AC(15) S Sign bit of AC
11 AC = 0 Z Zero value in AC

BR Symbol Function
00 JMP CAR ← AD if condition = 1
CAR ← CAR + 1 if condition = 0
01 CALL CAR ← AD, SBR ← CAR + 1 if condition
=1
CAR ← CAR + 1 if condition = 0
10 RET CAR ← SBR (Return from subroutine)
11 MAP CAR(2-5) ← DR(11-14), CAR(0,1,6) ← 0

Computer Organization Computer Architecture


Microprogrammed Control 91 Microprogram

SYMBOLIC MICROINSTRUCTIONS
• Symbols are used in microinstructions as in assembly language
• A symbolic microprogram can be translated into its binary equivalent
by a microprogram assembler.

Sample Format
five fields: label; micro-ops; CD; BR; AD

Label: may be empty or may specify a symbolic


address terminated with a colon

Micro-ops: consists of one, two, or three symbols


separated by commas

CD: one of {U, I, S, Z}, where U: Unconditional Branch


I: Indirect address bit
S: Sign of AC
Z: Zero value in AC

BR: one of {JMP, CALL, RET, MAP}

AD: one of {Symbolic address, NEXT, empty}

Computer Organization Computer Architecture


Microprogrammed Control 92 Microprogram

SYMBOLIC MICROPROGRAM - FETCH ROUTINE


During FETCH, Read an instruction from memory
and decode the instruction and update PC

Sequence of microoperations in the fetch cycle:


AR ← PC
DR ← M[AR], PC ← PC + 1
AR ← DR(0-10), CAR(2-5) ← DR(11-14), CAR(0,1,6) ←
0

Symbolic microprogram for the fetch cycle:


ORG 64
FETCH: PCTAR U JMP NEXT
READ, INCPC U JMP NEXT
DRTAR U MAP

Binary equivalents translated by an assembler


Binary
address F1 F2 F3 CD BR AD
1000000 110 000 000 00 00 1000001
1000001 000 100 101 00 00 1000010
1000010 101 000 000 00 11 0000000

Computer Organization Computer Architecture


Microprogrammed Control 93 Microprogram

SYMBOLIC MICROPROGRAM
• Control Storage: 128 20-bit words
• The first 64 words: Routines for the 16 machine instructions
• The last 64 words: Used for other purpose (e.g., fetch routine and other subroutines)
• Mapping: OP-code XXXX into 0XXXX00, the first address for the 16 routines are
0(0 0000 00), 4(0 0001 00), 8, 12, 16, 20, ..., 60

Partial Symbolic Microprogram


Label Microops CD BR AD
ORG 0
ADD: NOP I CALL INDRCT
READ U JMP NEXT
ADD U JMP FETCH

ORG 4
BRANCH: NOP S JMP OVER
NOP U JMP FETCH
OVER: NOP I CALL INDRCT
ARTPC U JMP FETCH

ORG 8
STORE: NOP I CALL INDRCT
ACTDR U JMP NEXT
WRITE U JMP FETCH

ORG 12
EXCHANGE: NOP I CALL INDRCT
READ U JMP NEXT
ACTDR, DRTAC U JMP NEXT
WRITE U JMP FETCH

ORG 64
FETCH: PCTAR U JMP NEXT
READ, INCPC U JMP NEXT
DRTAR U MAP
INDRCT: READ U JMP NEXT
DRTAR U RET

Computer Organization Computer Architecture


Microprogrammed Control 94 Microprogram

BINARY MICROPROGRAM

Micro Routine   Address (Decimal / Binary)   F1  F2  F3  CD BR  AD
ADD                0  0000000                000 000 000 01 01  1000011
                   1  0000001                000 100 000 00 00  0000010
                   2  0000010                001 000 000 00 00  1000000
                   3  0000011                000 000 000 00 00  1000000
BRANCH             4  0000100                000 000 000 10 00  0000110
                   5  0000101                000 000 000 00 00  1000000
                   6  0000110                000 000 000 01 01  1000011
                   7  0000111                000 000 110 00 00  1000000
STORE              8  0001000                000 000 000 01 01  1000011
                   9  0001001                000 101 000 00 00  0001010
                  10  0001010                111 000 000 00 00  1000000
                  11  0001011                000 000 000 00 00  1000000
EXCHANGE          12  0001100                000 000 000 01 01  1000011
                  13  0001101                001 000 000 00 00  0001110
                  14  0001110                100 101 000 00 00  0001111
                  15  0001111                111 000 000 00 00  1000000
FETCH             64  1000000                110 000 000 00 00  1000001
                  65  1000001                000 100 101 00 00  1000010

This microprogram can be implemented using ROM

Computer Organization Computer Architecture
Microprogrammed Control 95 Design of Control Unit
DESIGN OF CONTROL UNIT
- DECODING ALU CONTROL INFORMATION -

[Figure: decoding of the microoperation fields. F1, F2, and F3 each feed a 3x8 decoder; decoder outputs such as ADD and AND select operations in the arithmetic, logic, and shift unit (which receives DR and AC and loads AC); DRTAC loads AC from DR; PCTAR and DRTAR select, via multiplexers, whether AR is loaded from PC or from DR(0-10).]

Decoding of Microoperation Fields


Computer Organization Computer Architecture
Microprogrammed Control 96 Design of Control Unit
MICROPROGRAM SEQUENCER
- NEXT MICROINSTRUCTION ADDRESS LOGIC -
[Figure: next-address selection.  MUX1, controlled by select lines S1 and
S0, chooses among the external MAP address, the subroutine return address
held in SBR, the branch/CALL address CS(AD), and the in-line address
CAR + 1 from the incrementer, and routes the chosen address into CAR,
which addresses the Control Storage; SBR is loaded with the return
address on a subroutine CALL (load signal L).]

S1 S0   Address Source
0  0    CAR + 1 (In-Line)
0  1    SBR (RETURN from subroutine)
1  0    CS(AD) (Branch or CALL)
1  1    MAP (new machine instruction)

MUX-1 selects an address from one of four sources and routes it into a CAR

- In-Line Sequencing → CAR + 1


- Branch, Subroutine Call → CS(AD)
- Return from Subroutine → Output of SBR
- New Machine instruction → MAP
Computer Organization Computer Architecture
Microprogrammed Control 97 Design of Control Unit
MICROPROGRAM SEQUENCER
- CONDITION AND BRANCH CONTROL -

[Figure: condition and branch control.  MUX2 selects one of {1, I, S, Z}
under control of the CD field of CS and produces the test signal T; the
input logic combines T with the BR-field bits I1, I0 to generate the MUX1
select signals S1, S0 for next-address selection and the signal L that
loads SBR on a subroutine call.]
Input Logic
I1I0T Meaning Source of Address S1S0 L

000 In-Line CAR+1 00 0


001 JMP CS(AD) 01 0
010 In-Line CAR+1 00 0
011 CALL CS(AD) and SBR <- CAR+1 01 1
10x RET SBR 10 0
11x MAP DR(11-14) 11 0

S1 = I 1
S0 = I1I0 + I1’T
L = I1’I0T

Computer Organization Computer Architecture


Microprogrammed Control 98 Design of Control Unit

MICROPROGRAM SEQUENCER
[Figure: microprogram sequencer.  The input logic (I0, I1 from the BR
field and T from MUX2, which the CD field selects among 1, I, S, Z)
produces S1, S0 and L.  MUX1 selects the external MAP address, the SBR
output, the AD field of the control memory word, or the incremented CAR;
the selected address is clocked into CAR, and the control memory word
supplies the Microops, CD, BR and AD fields.]

Computer Organization Computer Architecture


Microprogrammed Control 99 Microinstruction Format

MICROINSTRUCTION FORMAT

Information in a Microinstruction
- Control Information
- Sequencing Information
- Constant
Information which is useful when fed into the system

This information needs to be organized in some way for


- Efficient use of the microinstruction bits
- Fast decoding

Field Encoding

- Encoding the microinstruction bits


- Encoding slows down the execution speed
due to the decoding delay
- Encoding also reduces the flexibility due to
the decoding hardware

Computer Organization Computer Architecture


Microprogrammed Control 100 Microinstruction Format
HORIZONTAL AND VERTICAL MICROINSTRUCTION FORMAT
Horizontal Microinstructions
Each bit directly controls each micro-operation or each control point
Horizontal implies a long microinstruction word
Advantages: Can control a variety of components operating in parallel.
--> Advantage of efficient hardware utilization
Disadvantages: Control word bits are not fully utilized
--> CS becomes large --> Costly
Vertical Microinstructions
A microinstruction format that is not horizontal
Vertical implies a short microinstruction word
Encoded Microinstruction fields
--> Needs decoding circuits for one or two levels of decoding

One-level decoding:  Field A (2 bits) -> 2x4 decoder (1 of 4)
                     Field B (3 bits) -> 3x8 decoder (1 of 8)

Two-level decoding:  Field A (2 bits) -> 2x4 decoder
                     Field B (6 bits) -> 6x64 decoder,
                     followed by a further decoder and selection logic

Computer Organization Computer Architecture


Microprogrammed Control 10 Control Storage Hierarchy
1
NANOSTORAGE AND NANOINSTRUCTION
The decoder circuits in a vertical microprogram
storage organization can be replaced by a ROM
=> Two levels of control storage
First level - Control Storage
Second level - Nano Storage

Two-level microprogram

First level
-Vertical format Microprogram
Second level
-Horizontal format Nanoprogram
- Interprets the microinstruction fields, thus converts a vertical
microinstruction format into a horizontal
nanoinstruction format.

Usually, the microprogram consists of a large number of short


microinstructions, while the nanoprogram contains fewer words with
longer nanoinstructions.

Computer Organization Computer Architecture


Microprogrammed Control 10 Control Storage Hierarchy
2
TWO-LEVEL MICROPROGRAMMING - EXAMPLE
* Microprogram: 2048 microinstructions of 200 bits each
* With 1-Level Control Storage: 2048 x 200 = 409,600 bits
* Assumption:
256 distinct microinstructions among 2048
* With 2-Level Control Storage:
Nano Storage: 256 x 200 bits to store 256 distinct nanoinstructions
Control storage: 2048 x 8 bits
To address 256 nano storage locations 8 bits are needed
* Total 1-Level control storage: 409,600 bits
Total 2-Level control storage: 67,584 bits (256 x 200 + 2048 x 8)
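The two totals follow directly from the sizes above; a short C check (all values taken from this example):

    #include <stdio.h>

    int main(void)
    {
        long words = 2048, width = 200;        /* microprogram: 2048 x 200-bit words   */
        long distinct = 256, nano_addr = 8;    /* 8 bits address 256 nano locations    */

        printf("1-level: %ld bits\n", words * width);                        /* 409,600 */
        printf("2-level: %ld bits\n", distinct * width + words * nano_addr); /*  67,584 */
        return 0;
    }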

Control address register

11 bits

Control memory
2048 x 8

Microinstruction (8 bits)
Nanomemory address

Nanomemory
256 x 200

Nanoinstructions (200 bits)

Computer Organization Computer Architecture


Central Processing Unit 10
3
Overview

• Instruction Set Processor (ISP)


• Central Processing Unit (CPU)
• A typical computing task consists of a series of
steps specified by a sequence of machine
instructions that constitute a program.
• An instruction is executed by carrying out a
sequence of more rudimentary operations.

Computer Organization Computer Architecture


Central Processing Unit 10
4
Fundamental Concepts

• Processor fetches one instruction at a time and


performs the operation specified.
• Instructions are fetched from successive
memory locations until a branch or a jump
instruction is encountered.
• Processor keeps track of the address of the
memory location containing the next instruction
to be fetched using Program Counter (PC).
• Instruction Register (IR)

Computer Organization Computer Architecture


Central Processing Unit 10
5
Executing an Instruction

• Fetch the contents of the memory location


pointed to by the PC. The contents of this
location are loaded into the IR (fetch phase).
IR ← [[PC]]
• Assuming that the memory is byte addressable,
increment the contents of the PC by 4 (fetch
phase).
PC ← [PC] + 4
• Carry out the actions specified by the instruction
in the IR (execution phase).

Computer Organization Computer Architecture


Central Processing Unit 10
6
Processor Organization

MDR HAS
TWO INPUTS
AND TWO
OUTPUTS

Datapath

Textbook Page 413

Computer Organization Computer Architecture


Central Processing Unit 10
7
Executing an Instruction

• Transfer a word of data from one processor register


to another or to the ALU.
• Perform an arithmetic or a logic operation and store
the result in a processor register.
• Fetch the contents of a given memory location and
load them into a processor register.
• Store a word of data from a processor register into a
given memory location.

Computer Organization Computer Architecture


Central Processing Unit 10
8
Register Transfers
[Figure 7.2. Input and output gating for the registers in Figure 7.1.
Each register Ri connects to the internal processor bus through gates
Riin / Riout; register Y (Yin) and the constant 4 feed a MUX (Select)
that drives ALU input B, the bus drives input A, and the ALU result is
captured in register Z (Zin / Zout).]
Computer Organization Computer Architecture
Central Processing Unit 10
9
Register Transfers

• All operations and data transfers are controlled by the processor


clock.

Figure 7.3. Input and output gating for one register bit.
Computer Organization Computer Architecture
Central Processing Unit 11
0
Performing an Arithmetic or Logic
Operation
• The ALU is a combinational circuit that has no
internal storage.
• ALU gets the two operands from MUX and bus.
The result is temporarily stored in register Z.
• What is the sequence of operations to add the
contents of register R1 to those of R2 and store
the result in R3?
1. R1out, Yin
2. R2out, SelectY, Add, Zin
3. Zout, R3in

Computer Organization Computer Architecture


Central Processing Unit 11
1
Fetching a Word from Memory

• Address into MAR; issue Read operation; data into MDR.

Figure 7.4. Connection and control signals for register MDR.


Computer Organization Computer Architecture
Central Processing Unit 11
2
Fetching a Word from Memory

• The response time of each memory access


varies (cache miss, memory-mapped I/O,…).
• To accommodate this, the processor waits until it
receives an indication that the requested
operation has been completed
(Memory-Function-Completed, MFC).
• Move (R1), R2
MAR ← [R1]
Start a Read operation on the memory bus
Wait for the MFC response from the memory
Load MDR from the memory bus
R2 ← [MDR]

Computer Organization Computer Architecture


Central Processing Unit 11
3
Timing

MAR ← [R1]

Assume MAR
is always available
on the address lines
of the memory bus.

Start a Read operation on the memory bus

Wait for the MFC response from the memory

Load MDR from the memory bus

R2 ← [MDR]

Computer Organization Computer Architecture


Central Processing Unit 11
4
Execution of a Complete
Instruction
• Add (R3), R1
• Fetch the instruction
• Fetch the first operand (the contents of the memory
location pointed to by R3)
• Perform the addition
• Load the result into R1

Computer Organization Computer Architecture


Central Processing Unit 11
5
Architecture
[Figure 7.2 (repeated). Input and output gating for the registers in
Figure 7.1: registers Ri and Y, the constant 4, the Select MUX, the ALU
(inputs A and B) and register Z on the internal processor bus.]
Computer Organization Computer Architecture
Central Processing Unit 11
6
Execution of a Complete
Instruction
Add (R3), R1

Computer Organization Computer Architecture


Central Processing Unit 11
7
Execution of Branch Instructions

• A branch instruction replaces the contents of PC


with the branch target address, which is usually
obtained by adding an offset X given in the branch
instruction.
• The offset X is usually the difference between the
branch target address and the address immediately
following the branch instruction.
• Conditional branch

Computer Organization Computer Architecture


Central Processing Unit 11
8
Execution of Branch Instructions

Step  Action

1     PCout, MARin, Read, Select4, Add, Zin
2     Zout, PCin, Yin, WMFC
3     MDRout, IRin
4     Offset-field-of-IRout, Add, Zin
5     Zout, PCin, End

Figure 7.7. Control sequence for an unconditional branch instruction.

Computer Organization Computer Architecture


Central Processing Unit 11
9
Multiple-Bus Organization

Computer Organization Computer Architecture


Central Processing Unit 12
0
Multiple-Bus Organization

• Add R4, R5, R6

Step  Action

1     PCout, R=B, MARin, Read, IncPC
2     WMFC
3     MDRoutB, R=B, IRin
4     R4outA, R5outB, SelectA, Add, R6in, End

Figure 7.9. Control sequence for the instruction Add R4,R5,R6
for the three-bus organization in Figure 7.8.

Computer Organization Computer Architecture


Central Processing Unit 12
1
Quiz

• What is the control


sequence for execution
of the instruction
Add R1, R2
including the
instruction fetch phase?
(Assume single bus
architecture)

Computer Organization Computer Architecture


Central Processing Unit 12
2
Control Unit Organization
[Figure 7.10. Control unit organization.
The clock drives a control step counter; a decoder/encoder combines the
step count with the IR contents, external inputs and condition codes to
generate the control signals.]
Computer Organization Computer Architecture
Central Processing Unit 12
3
Detailed Block Description

Computer Organization Computer Architecture


Central Processing Unit 12
4
Generating Zin

• Zin = T1 + T6 • ADD + T4 • BR + …
[Figure 7.12. Generation of the Zin control signal for the processor in
Figure 7.1: Zin is asserted at T1, at T6 of an Add instruction, and at
T4 of a Branch instruction.]
Computer Organization Computer Architecture
Central Processing Unit 12
5
Generating End

• End = T7 • ADD + T5 • BR + (T5 • N + T4 • N) • BRN +…

Computer Organization Computer Architecture


Central Processing Unit 12
6
A Complete Processor

Computer Organization Computer Architecture


Microprogrammed Control 12
7
Overview

• Control signals are generated by a program similar to machine


language programs.
• Control Word (CW); microroutine; microinstruction

Computer Organization Computer Architecture


Microprogrammed Control 12
8
Overview

Computer Organization Computer Architecture


Microprogrammed Control 12
9
Overview

• Control store

One function
cannot be carried
out by this simple
organization.

Computer Organization Computer Architecture


Microprogrammed Control 13
0
Overview

• The previous organization cannot handle the situation when the


control unit is required to check the status of the condition codes
or external inputs to choose between alternative courses of action.
• Use conditional branch microinstruction.
Address  Microinstruction
0        PCout, MARin, Read, Select4, Add, Zin
1        Zout, PCin, Yin, WMFC
2        MDRout, IRin
3        Branch to starting address of appropriate microroutine
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
25       If N=0, then branch to microinstruction 0
26       Offset-field-of-IRout, SelectY, Add, Zin
27       Zout, PCin, End

Figure 7.17. Microroutine for the instruction Branch<0.

Computer Organization                                Computer Architecture
Microprogrammed Control 13
1
Overview
[Figure 7.18. Organization of the control unit to allow conditional
branching in the microprogram.
The IR, external inputs and condition codes feed a starting and branch
address generator; its output is loaded into the μPC, which is advanced
by the clock and addresses the control store that emits the control
word CW.]

Computer Organization Computer Architecture


Microprogrammed Control 13
2
Microinstructions

• A straightforward way to structure microinstructions


is to assign one bit position to each control signal.
• However, this is very inefficient.
• The length can be reduced: most signals are not
needed simultaneously, and many signals are
mutually exclusive.
• All mutually exclusive signals are placed in the same
group in binary coding.

Computer Organization Computer Architecture


Microprogrammed Control 13
3
Partial Format for the
Microinstructions

What is the price paid for


this scheme?

Computer Organization Computer Architecture


Microprogrammed Control 13
4
Further Improvement

• Enumerate the patterns of required signals in all


possible microinstructions. Each meaningful
combination of active control signals can then be
assigned a distinct code.
• Vertical organization
• Horizontal organization

Computer Organization Computer Architecture


Microprogrammed Control 13
5
Microprogram Sequencing

• If all microprograms require only straightforward


sequential execution of microinstructions except
for branches, letting a μPC govern the
sequencing would be efficient.
• However, two disadvantages:
Having a separate microroutine for each machine instruction
results in a large total number of microinstructions and a large
control store.
Longer execution time because it takes more time to carry out
the required branches.
• Example: Add src, Rdst
• Four addressing modes: register, autoincrement,
autodecrement, and indexed (with indirect
forms).
Computer Organization Computer Architecture
Microprogrammed Control 13
6

- Bit-ORing
- Wide-Branch Addressing
- WMFC

Computer Organization Computer Architecture


Microprogrammed Control 13
7
Mode field

Contents of IR:   OP code  |  0 1 0  |  Rsrc  |  Rdst
       bit:             11   10  9 8   7 .. 4   3 .. 0

Address   Microinstruction
(octal)

000       PCout, MARin, Read, Select4, Add, Zin
001       Zout, PCin, Yin, WMFC
002       MDRout, IRin
003       μBranch {μPC ← 101 (from Instruction decoder);
                   μPC5,4 ← [IR10,9]; μPC3 ← [IR10]·[IR9]·[IR8]}

121       Rsrcout, MARin, Read, Select4, Add, Zin
122       Zout, Rsrcin
123       μBranch {μPC ← 170; μPC0 ← [IR8]}, WMFC

170       MDRout, MARin, Read, WMFC
171       MDRout, Yin
172       Rdstout, SelectY, Add, Zin
173       Zout, Rdstin, End

Figure 7.21. Microinstructions for Add (Rsrc)+,Rdst.
Note: Microinstruction at location 170 is not executed for this addressing mode.

Computer Organization Computer Architecture


Microprogrammed Control 13
8
Microinstructions with
Next-Address Field
• The microprogram we discussed requires
several branch microinstructions, which perform
no useful operation in the datapath.
• A powerful alternative approach is to include an
address field as a part of every microinstruction
to indicate the location of the next
microinstruction to be fetched.
• Pros: separate branch microinstructions are
virtually eliminated; few limitations in assigning
addresses to microinstructions.
• Cons: additional bits for the address field
(around 1/6)

Computer Organization Computer Architecture


Microprogrammed Control 13
9
Microinstructions with
Next-Address Field

Computer Organization Computer Architecture


Microprogrammed Control 14
0

Computer Organization Computer Architecture


Microprogrammed Control 14
1
Implementation of the Microroutine

Computer Organization Computer Architecture


Microprogrammed Control 14
2

Computer Organization Computer Architecture


Microprogrammed Control 14
3
bit-ORing

Computer Organization Computer Architecture


Pipelining and Vector Processing 14
4
PIPELINING AND VECTOR PROCESSING

• Parallel Processing

• Pipelining

• Arithmetic Pipeline

• Instruction Pipeline

• RISC Pipeline

• Vector Processing

• Array Processors

Computer Organization Computer Architecture


Pipelining and Vector Processing 14 Parallel Processing

5
PARALLEL PROCESSING

Execution of Concurrent Events in the computing


process to achieve faster Computational Speed

Levels of Parallel Processing

- Job or Program level

- Task or Procedure level

- Inter-Instruction level

- Intra-Instruction level

Computer Organization Computer Architecture


Pipelining and Vector Processing 14 Parallel Processing

6
PARALLEL COMPUTERS
Architectural Classification

– Flynn's classification
» Based on the multiplicity of Instruction Streams and Data
Streams
» Instruction Stream
• Sequence of Instructions read from memory
» Data Stream
• Operations performed on the data in the processor

Number of Data Streams


Single Multiple

Number of Single SISD SIMD


Instruction
Streams Multiple MISD MIMD

Computer Organization Computer Architecture


Pipelining and Vector Processing 14 Parallel Processing

COMPUTER ARCHITECTURES FOR PARALLEL


7
PROCESSING
SISD Superscalar processors
Von-Neuman
based Superpipelined processors

VLIW

MISD Nonexistence

SIMD Array processors

Systolic arrays
Dataflow Associative processors

MIMD Shared-memory multiprocessors


Reduction Bus based
Crossbar switch based
Multistage IN based

Message-passing multicomputers

Hypercube
Mesh
Reconfigurable

Computer Organization Computer Architecture


Pipelining and Vector Processing 14 Parallel Processing

8
SISD COMPUTER SYSTEMS
Control Processor Data stream Memory
Unit Unit

Instruction stream

Characteristics
- Standard von Neumann machine
- Instructions and data are stored in memory
- One operation at a time

Limitations
Von Neumann bottleneck

Maximum speed of the system is limited by the


Memory Bandwidth (bits/sec or bytes/sec)

- Limitation on Memory Bandwidth


- Memory is shared by CPU and I/O

Computer Organization Computer Architecture


Pipelining and Vector Processing 14 Parallel Processing

9
SISD PERFORMANCE IMPROVEMENTS

• Multiprogramming
• Spooling
• Multifunction processor
• Pipelining
• Exploiting instruction-level parallelism
- Superscalar
- Superpipelining
- VLIW (Very Long Instruction Word)

Computer Organization Computer Architecture


Pipelining and Vector Processing 15 Parallel Processing

0
MISD COMPUTER SYSTEMS

M CU P

M CU P
Memory
• •
• •
• •

M CU Data stream
P

Instruction stream

Characteristics
- There is no computer at present that can be
classified as MISD

Computer Organization Computer Architecture


Pipelining and Vector Processing 15 Parallel Processing

1
SIMD COMPUTER SYSTEMS
Memory
Data bus

Control Unit
Instruction stream

P P ••• P Processor units

Data stream

Alignment network

M M ••• M Memory modules

Characteristics
- Only one copy of the program exists
- A single controller executes one instruction at a time

Computer Organization Computer Architecture


Pipelining and Vector Processing 15 Parallel Processing

2
TYPES OF SIMD COMPUTERS

Array Processors
- The control unit broadcasts instructions to all PEs,
and all active PEs execute the same instructions
- ILLIAC IV, GF-11, Connection Machine, DAP, MPP

Systolic Arrays
- Regular arrangement of a large number of
very simple processors constructed on
VLSI circuits
- CMU Warp, Purdue CHiP

Associative Processors
- Content addressing
- Data transformation operations over many sets
of arguments with a single instruction
- STARAN, PEPE
Computer Organization Computer Architecture
Pipelining and Vector Processing 15 Parallel Processing

3
MIMD COMPUTER SYSTEMS
P M P M ••• P M

Interconnection Network

Shared Memory

Characteristics
- Multiple processing units

- Execution of multiple instructions on multiple data

Types of MIMD computer systems


- Shared memory multiprocessors

- Message-passing multicomputers

Computer Organization Computer Architecture


Pipelining and Vector Processing 15 Parallel Processing

4
SHARED MEMORY MULTIPROCESSORS
M M ••• M

Buses,
Interconnection Network(IN) Multistage IN,
Crossbar Switch

P P ••• P

Characteristics
All processors have equally direct access to
one large memory address space
Example systems
Bus and cache-based systems
- Sequent Balance, Encore Multimax
Multistage IN-based systems
- Ultracomputer, Butterfly, RP3, HEP
Crossbar switch-based systems
- C.mmp, Alliant FX/8
Limitations
Memory access latency
Hot spot problem
Computer Organization Computer Architecture
Pipelining and Vector Processing 15 Parallel Processing

5
MESSAGE-PASSING MULTICOMPUTER
Message-Passing Network Point-to-point connections

P P ••• P

M M ••• M

Characteristics
- Interconnected computers
- Each processor has its own memory, and
communicate via message-passing

Example systems
- Tree structure: Teradata, DADO
- Mesh-connected: Rediflow, Series 2010, J-Machine
- Hypercube: Cosmic Cube, iPSC, NCUBE, FPS T Series, Mark III

Limitations

- Communication overhead
- Hard to program
Computer Organization Computer Architecture
Pipelining and Vector Processing 15 Pipelining

6
PIPELINING
A technique of decomposing a sequential process
into suboperations, with each subprocess being
executed in a partial dedicated segment that
operates concurrently with all other segments.
Ai * B i + C i for i = 1, 2, 3, ... , 7
Ai Bi Memory Ci

Segment 1
R1 R2

Multiplier
Segment 2

R3 R4

Adder
Segment 3

R5

R1 ← Ai, R2 ← Bi Load Ai and Bi


R3 ← R1 * R2, R4 ← Ci Multiply and load Ci
R5 ← R3 + R4 Add
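A clock-by-clock C model of this three-segment pipeline (a sketch; the array values and loop structure are illustrative, not from the slide). Evaluating the later segments first within each iteration imitates all registers being loaded on the same clock edge; results appear from clock pulse 3 onward, one per pulse, as in the table on the next slide.

    #include <stdio.h>

    #define N 7

    int main(void)
    {
        double A[N] = {1, 2, 3, 4, 5, 6, 7};
        double B[N] = {2, 3, 4, 5, 6, 7, 8};
        double C[N] = {1, 1, 1, 1, 1, 1, 1};
        double R1 = 0, R2 = 0, R3 = 0, R4 = 0, R5 = 0;

        for (int clk = 1; clk <= N + 2; clk++) {
            if (clk >= 3 && clk - 3 < N) {          /* segment 3: R5 <- R3 + R4      */
                R5 = R3 + R4;
                printf("pulse %d: A%d*B%d + C%d = %.0f\n",
                       clk, clk - 2, clk - 2, clk - 2, R5);
            }
            if (clk >= 2 && clk - 2 < N) {          /* segment 2: R3 <- R1 * R2      */
                R3 = R1 * R2;                       /*            R4 <- Ci           */
                R4 = C[clk - 2];
            }
            if (clk - 1 < N) {                      /* segment 1: R1 <- Ai, R2 <- Bi */
                R1 = A[clk - 1];
                R2 = B[clk - 1];
            }
        }
        return 0;
    }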

Computer Organization Computer Architecture


Pipelining and Vector Processing 15 Pipelining

7
OPERATIONS IN EACH PIPELINE STAGE

Clock
Pulse Segment 1 Segment 2 Segment 3

Number R1 R2 R3 R4 R5
1 A1 B1
2 A2 B2 A1 * B1 C1
3 A3 B3 A2 * B2 C2 A1 * B1 + C1
4 A4 B4 A3 * B3 C3 A2 * B2 + C2
5 A5 B5 A4 * B4 C4 A3 * B3 + C3
6 A6 B6 A5 * B5 C5 A4 * B4 + C4
7 A7 B7 A6 * B6 C6 A5 * B5 + C5
8 A7 * B7 C7 A6 * B6 + C6
9 A7 * B7 + C7

Computer Organization Computer Architecture


Pipelining and Vector Processing 15 Pipelining

8
GENERAL PIPELINE
General Structure of a 4-Segment Pipeline
Clock

Input S1 R1 S2 R2 S3 R3 S4 R4

Space-Time Diagram
1 2 3 4 5 6 7 8 9
Clock cycles
Segment 1 T1 T2 T3 T4 T5 T6

2 T1 T2 T3 T4 T5 T6

3 T1 T2 T3 T4 T5 T6

4 T1 T2 T3 T4 T5 T6

Computer Organization Computer Architecture


Pipelining and Vector Processing 15 Pipelining

9
PIPELINE SPEEDUP
n: Number of tasks to be performed

Conventional Machine (Non-Pipelined)
  tn: Clock cycle
  τ1: Time required to complete the n tasks
  τ1 = n * tn

Pipelined Machine (k stages)
  tp: Clock cycle (time to complete each suboperation)
  τk: Time required to complete the n tasks
  τk = (k + n - 1) * tp

Speedup
  Sk: Speedup
  Sk = n * tn / [(k + n - 1) * tp]

  lim Sk = tn / tp      ( = k, if tn = k * tp )
  n→∞
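For a quick numerical check, a tiny C function for Sk; the figures in main() use the 4-stage example on the next slide (tp = 20 ns, tn = k*tp = 80 ns, n = 100 tasks).

    #include <stdio.h>

    double speedup(int n, int k, double tn, double tp)
    {
        return (n * tn) / ((k + n - 1) * tp);     /* Sk = n*tn / [(k+n-1)*tp] */
    }

    int main(void)
    {
        printf("Sk        = %.2f\n", speedup(100, 4, 80.0, 20.0));  /* 3.88 */
        printf("n -> inf  : %.2f\n", 80.0 / 20.0);                  /* tn/tp = k = 4 */
        return 0;
    }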

Computer Organization Computer Architecture


Pipelining and Vector Processing 16 Pipelining

0
PIPELINE AND MULTIPLE FUNCTION UNITS
Example
- 4-stage pipeline
- suboperation in each stage; tp = 20 ns
- 100 tasks to be executed
- 1 task in non-pipelined system; 20*4 = 80 ns

Pipelined System
(k + n - 1)*tp = (4 + 100 - 1) * 20 = 2060 ns

Non-Pipelined System
n*k*tp = 100 * 80 = 8000 ns

Speedup
Sk = 8000 / 2060 = 3.88

4-Stage Pipeline is basically identical to the system


with 4 identical function units

Multiple Functional Units

Computer Organization Computer Architecture


Pipelining and Vector Processing 16 Arithmetic Pipeline

1
ARITHMETIC PIPELINE
Floating-point adder:   X = A x 2^a,  Y = B x 2^b
(a, b: exponents;  A, B: mantissas)

[1] Compare the exponents
[2] Align the mantissas
[3] Add/sub the mantissas
[4] Normalize the result

[Figure: four-segment floating-point adder pipeline]
Segment 1: Compare exponents by subtraction -> exponent difference
Segment 2: Choose exponent; align mantissa
Segment 3: Add or subtract mantissas
Segment 4: Adjust exponent; normalize result

Computer Organization Computer Architecture


Pipelining and Vector Processing 16 Arithmetic Pipeline

2
4-STAGE FLOATING POINT ADDER
A = a x 2p B = b x 2q
p a q b

Stages: Other
Exponent fraction Fraction
S1 subtractor selector
Fraction with min(p,q)
r = max(p,q)
Right shifter
t = |p - q|

S2 Fraction
adder
r c

Leading zero
S3 counter
c
Left shifter
r

d
Exponent
S4 adder

s d
C = A + B = c x 2r = d x 2s
(r = max (p,q), 0.5 ≤ d < 1)

Computer Organization Computer Architecture


Pipelining and Vector Processing 16 Instruction Pipeline

3
INSTRUCTION CYCLE
Six Phases* in an Instruction Cycle
[1] Fetch an instruction from memory
[2] Decode the instruction
[3] Calculate the effective address of the operand
[4] Fetch the operands from memory
[5] Execute the operation
[6] Store the result in the proper place

* Some instructions skip some phases


* Effective address calculation can be done in
the part of the decoding phase
* Storage of the operation result into a register
is done automatically in the execution phase

==> 4-Stage Pipeline

[1] FI: Fetch an instruction from memory


[2] DA: Decode the instruction and calculate
the effective address of the operand
[3] FO: Fetch the operand
[4] EX: Execute the operation

Computer Organization Computer Architecture


Pipelining and Vector Processing 16 Instruction Pipeline

4
INSTRUCTION PIPELINE

Execution of Three Instructions in a 4-Stage Pipeline


Conventional

i FI DA FO EX

i+1 FI DA FO EX

i+2 FI DA FO EX

Pipelined

i FI DA FO EX
i+1 FI DA FO EX
i+2 FI DA FO EX

Computer Organization Computer Architecture


Pipelining and Vector Processing 16 Instruction Pipeline

5
INSTRUCTION EXECUTION IN A 4-STAGE PIPELINE

Segment1: Fetch instruction


from memory

Decode instruction
Segment2: and calculate
effective address

Branch?
yes
no
Segment3: Fetch operand
from memory

Segment4: Execute instruction

Interrupt yes
Interrupt?
handling
no
Update PC

Empty pipe
Step: 1 2 3 4 5 6 7 8 9 10 11 12 13
Instruction 1 FI DA FO EX
2 FI DA FO EX
(Branch) 3 FI DA FO EX
4 FI FI DA FO EX
5 FI DA FO EX
6 FI DA FO EX
7 FI DA FO EX

Computer Organization Computer Architecture


Pipelining and Vector Processing 16 Instruction Pipeline

6
MAJOR HAZARDS IN PIPELINED EXECUTION
Structural hazards(Resource Conflicts)
Hardware Resources required by the instructions in
simultaneous overlapped execution cannot be met
Data hazards (Data Dependency Conflicts)
An instruction scheduled to be executed in the pipeline requires the
result of a previous instruction, which is not yet available

R1 <- B + C ADD DA B,C + Data dependency


R1 <- R1 + 1
INC DA bubble R1 +1

Control hazards
Branches and other instructions that change the PC
delay the fetch of the next instruction
JMP ID PC + PC Branch address dependency

bubble IF ID OF OE OS

Hazards in pipelines may make it necessary to stall the pipeline
Pipeline Interlock: Detect Hazards, Stall until it is cleared

Computer Organization Computer Architecture


Pipelining and Vector Processing 16 Instruction Pipeline

7
STRUCTURAL HAZARDS
Structural Hazards
Occur when some resource has not been
duplicated enough to allow all combinations
of instructions in the pipeline to execute

Example: With one memory-port, a data and an instruction fetch


cannot be initiated in the same clock
i      FI    DA    FO    EX
i+1          FI    DA    FO    EX
i+2          stall stall FI    DA    FO    EX

The Pipeline is stalled for a structural hazard


<- Two Loads with one port memory
-> Two-port memory will serve without stall

Computer Organization Computer Architecture


Pipelining and Vector Processing 16 Instruction Pipeline

8
DATA HAZARDS
Data Hazards

Occurs when the execution of an instruction


depends on the results of a previous instruction
ADD R1, R2, R3
SUB R4, R1, R5
Data hazard can be dealt with either hardware
techniques or software technique
Hardware Technique

Interlock
- hardware detects the data dependencies and delays the scheduling
of the dependent instruction by stalling enough clock cycles
Forwarding (bypassing, short-circuiting)
- Accomplished by a data path that routes a value from a source
(usually an ALU) to a user, bypassing a designated register. This
allows the value produced to be used at an earlier stage in the
pipeline than would otherwise be possible

Software Technique
Instruction Scheduling(compiler) for delayed load
Computer Organization Computer Architecture
Pipelining and Vector Processing 16 Instruction Pipeline

9
FORWARDING HARDWARE
Example:
Register
file
ADD R1, R2, R3
SUB R4, R1, R5

3-stage Pipeline MUX MUX Bypass


path
I: Instruction Fetch Result
write bus
A: Decode, Read Registers,
ALU
ALU Operations
E: Write the result to the
destination register R4

ALU result buffer

ADD I A E

SUB I A E Without Bypassing

SUB I A E With Bypassing

Computer Organization Computer Architecture


Pipelining and Vector Processing 17 Instruction Pipeline

0
INSTRUCTION SCHEDULING
a = b + c;
d = e - f;

Unscheduled code: Scheduled Code:


LW Rb, b LW Rb, b
LW Rc, c LW Rc, c
ADD Ra, Rb, Rc LW Re, e
SW a, Ra ADD Ra, Rb, Rc
LW Re, e LW Rf, f
LW Rf, f SW a, Ra
SUB Rd, Re, Rf SUB Rd, Re, Rf
SW d, Rd SW d, Rd

Delayed Load
A load requiring that the following instruction not use its result

Computer Organization Computer Architecture


Pipelining and Vector Processing 17 Instruction Pipeline

1
CONTROL HAZARDS
Branch Instructions

- Branch target address is not known until


the branch instruction is completed
Branch
Instruction FI DA FO EX

Next
Instruction FI DA FO EX

Target address available

- Stall -> waste of cycle times

Dealing with Control Hazards

* Prefetch Target Instruction


* Branch Target Buffer
* Loop Buffer
* Branch Prediction
* Delayed Branch

Computer Organization Computer Architecture


Pipelining and Vector Processing 17 Instruction Pipeline

2
CONTROL HAZARDS
Prefetch Target Instruction
– Fetch instructions in both streams, branch not taken and branch taken
– Both are saved until the branch is executed. Then, select the right
instruction stream and discard the wrong stream
Branch Target Buffer(BTB; Associative Memory)
– Entry: Addr of previously executed branches; Target instruction
and the next few instructions
– When fetching an instruction, search BTB.
– If found, fetch the instruction stream in BTB;
– If not, the new stream is fetched and the BTB is updated
Loop Buffer(High Speed Register file)
– Storage of entire loop that allows to execute a loop without accessing
memory
Branch Prediction
– Guessing the branch condition, and fetch an instruction stream based
on
the guess. Correct guess eliminates the branch penalty
Delayed Branch
– Compiler detects the branch and rearranges the instruction sequence
by inserting useful instructions that keep the pipeline busy
in the presence of a branch instruction

Computer Organization Computer Architecture


Pipelining and Vector Processing 17 RISC Pipeline

3
RISC PIPELINE
RISC
- Machine with a very fast clock cycle that
executes at the rate of one instruction per cycle
<- Simple Instruction Set
Fixed Length Instruction Format
Register-to-Register Operations

Instruction Cycles of Three-Stage Instruction Pipeline


Data Manipulation Instructions
I: Instruction Fetch
A: Decode, Read Registers, ALU Operations
E: Write a Register

Load and Store Instructions


I: Instruction Fetch
A: Decode, Evaluate Effective Address
E: Register-to-Memory or Memory-to-Register

Program Control Instructions


I: Instruction Fetch
A: Decode, Evaluate Branch Address
E: Write Register(PC)
Computer Organization Computer Architecture
Pipelining and Vector Processing 17 RISC Pipeline

4
DELAYED LOAD
LOAD: R1 ← M[address 1]
LOAD: R2 ← M[address 2]
ADD: R3 ← R1 + R2
STORE: M[address 3] ←
R3
Three-segment pipeline timing
Pipeline timing with data conflict

clock cycle 1 2 3 4 5 6
Load R1 I A E
Load R2 I A E
Add R1+R2 I A E
Store R3 I A E

Pipeline timing with delayed load

clock cycle 1 2 3 4 5 6 7
Load R1 I A E The data dependency is taken
Load R2 I A E care by the compiler rather
NOP I A E than the hardware
Add R1+R2 I A E
Store R3 I A E

Computer Organization Computer Architecture


Pipelining and Vector Processing 17 RISC Pipeline

5
DELAYED BRANCH
Compiler analyzes the instructions before and after
the branch and rearranges the program sequence by
inserting useful instructions in the delay steps

Using no-operation instructions

Rearranging the instructions

Computer Organization Computer Architecture


Pipelining and Vector Processing 17 Vector Processing

6
VECTOR PROCESSING
Vector Processing Applications
• Problems that can be efficiently formulated in terms of vectors
– Long-range weather forecasting
– Petroleum explorations
– Seismic data analysis
– Medical diagnosis
– Aerodynamics and space flight simulations
– Artificial intelligence and expert systems
– Mapping the human genome
– Image processing

Vector Processor (computer)


Ability to process vectors, and related data structures such as
matrices
and multi-dimensional arrays, much faster than conventional
computers

Vector Processors may also be pipelined

Computer Organization Computer Architecture


Pipelining and Vector Processing 17 Vector Processing

7
VECTOR PROGRAMMING

DO 20 I = 1, 100
20 C(I) = B(I) + A(I)

Conventional computer

Initialize I = 0
20 Read A(I)
Read B(I)
Store C(I) = A(I) + B(I)
Increment I = I + 1
If I ≤ 100 goto 20

Vector computer

C(1:100) = A(1:100) + B(1:100)
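In C the same computation is an explicit element-wise loop; on a vector machine the single statement above issues one vector instruction, so the loop below only illustrates the effect (array names follow the slide).

    #include <stdio.h>
    #define N 100

    int main(void)
    {
        double A[N], B[N], C[N];
        for (int i = 0; i < N; i++) { A[i] = i; B[i] = 2 * i; }

        for (int i = 0; i < N; i++)       /* C(1:100) = A(1:100) + B(1:100) */
            C[i] = A[i] + B[i];

        printf("C[99] = %.0f\n", C[99]);  /* 99 + 198 = 297 */
        return 0;
    }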

Computer Organization Computer Architecture


Pipelining and Vector Processing 17 Vector Processing

8
VECTOR INSTRUCTIONS
f1: V → V
f2: V → S
f3: V x V → V          V: Vector operand
f4: V x S → V          S: Scalar operand

Type  Mnemonic  Description (I = 1, ..., n)

f1    VSQR      Vector square root     B(I) ← SQR(A(I))
      VSIN      Vector sine            B(I) ← sin(A(I))
      VCOM      Vector complement      A(I) ← complement of A(I)
f2    VSUM      Vector summation       S ← Σ A(I)
      VMAX      Vector maximum         S ← max{A(I)}
f3    VADD      Vector add             C(I) ← A(I) + B(I)
      VMPY      Vector multiply        C(I) ← A(I) * B(I)
      VAND      Vector AND             C(I) ← A(I) . B(I)
      VLAR      Vector larger          C(I) ← max(A(I), B(I))
      VTGE      Vector test >          C(I) ← 0 if A(I) < B(I)
                                       C(I) ← 1 if A(I) > B(I)
f4    SADD      Vector-scalar add      B(I) ← S + A(I)
      SDIV      Vector-scalar divide   B(I) ← A(I) / S

Computer Organization Computer Architecture


Pipelining and Vector Processing 17 Vector Processing

9
VECTOR INSTRUCTION FORMAT

Vector Instruction Format

Pipeline for Inner Product

Computer Organization Computer Architecture


Pipelining and Vector Processing 18 Vector Processing

0
MULTIPLE MEMORY MODULE AND INTERLEAVING

Multiple Module Memory


Address bus
M0 M1 M2 M3

AR AR AR AR

Memory Memory Memory Memory


array array array array

DR DR DR DR

Data bus

Address Interleaving

Different sets of addresses are assigned to


different memory modules
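As one concrete scheme (an assumption; the slide only states that different address sets go to different modules), low-order interleaving uses the two low address bits to pick one of the four modules, so consecutive addresses fall in different modules and can be accessed in parallel.

    #include <stdio.h>

    int main(void)
    {
        for (unsigned addr = 0; addr < 8; addr++)
            printf("address %u -> module M%u, word %u within the module\n",
                   addr, addr % 4, addr / 4);   /* low-order interleaving over 4 modules */
        return 0;
    }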

Computer Organization Computer Architecture


Multiprocessors 18
1
MULTIPROCESSORS

• Characteristics of Multiprocessors

• Interconnection Structures

• Interprocessor Arbitration

• Interprocessor Communication
and Synchronization

• Cache Coherence

Computer Organization Computer Architecture


Multiprocessors 182 Characteristics of Multiprocessors
TERMINOLOGY
Parallel Computing

Simultaneous use of multiple processors, all components


of a single architecture, to solve a task. Typically processors identical,
single user (even if machine multiuser)

Distributed Computing

Use of a network of processors, each capable of being


viewed as a computer in its own right, to solve a problem. Processors
may be heterogeneous, multiuser; usually an individual task is assigned
to a single processor

Concurrent Computing

All of the above?

Computer Organization Computer Architecture


Multiprocessors 183 Characteristics of Multiprocessors
TERMINOLOGY
Supercomputing
Use of fastest, biggest machines to solve big, computationally
intensive problems. Historically machines were vector computers,
but parallel/vector or parallel becoming the norm

Pipelining
Breaking a task into steps performed by different units, and multiple
inputs stream through the units, with next input starting in a unit when
previous input done with the unit but not necessarily done with the task

Vector Computing
Use of vector processors, where operation such as multiply
broken into several steps, and is applied to a stream of operands
(“vectors”). Most common special case of pipelining

Systolic
Similar to pipelining, but units are not necessarily arranged linearly,
steps are typically small and more numerous, performed in lockstep
fashion. Often used in special-purpose hardware such as image or signal
processors
Computer Organization Computer Architecture
Multiprocessors 18 Characteristics of Multiprocessors
4
SPEEDUP AND EFFICIENCY
A: Given problem

T*(n): Time of best sequential algorithm to solve an


instance of A of size n on 1 processor
Tp(n): Time needed by a given parallel algorithm
and given parallel architecture to solve an
instance of A of size n, using p processors

Note: T*(n) ≤ T1(n)


Speedup
Speedup:    T*(n) / Tp(n)
Efficiency: T*(n) / [p Tp(n)]

[Figure: speedup vs. number of processors (1..10), with the ideal
"Perfect Speedup" line]

Speedup should be between 0 and p, and
Efficiency should be between 0 and 1
Speedup is linear if there is a constant c > 0
so that speedup is always at least cp.
Computer Organization Computer Architecture
Multiprocessors 185 Characteristics of Multiprocessors
AMDAHL'S LAW
Given a program
f : Fraction of time that represents operations
that must be performed serially

Maximum Possible Speedup: S

S ≤ 1 / [f + (1 - f) / p],  with p processors
S < 1 / f,                  with unlimited number of processors

- Ignores possibility of new algorithm, with much smaller f

- Ignores possibility that more of program is run from higher speed


memory such as Registers, Cache, Main Memory

- Often problem is scaled with number of processors, and f is a


function of size which may be decreasing (Serial code may take
constant amount of time, independent of size)
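A small C sketch of the bound (the 10%-serial figure matches the worked example later in these notes):

    #include <stdio.h>

    double max_speedup(double f, int p)            /* S <= 1 / (f + (1-f)/p) */
    {
        return 1.0 / (f + (1.0 - f) / p);
    }

    int main(void)
    {
        double f = 0.10;                           /* 10% strictly serial code */
        int p[] = {2, 8, 64, 1024};
        for (int i = 0; i < 4; i++)
            printf("p = %4d  ->  S <= %5.2f\n", p[i], max_speedup(f, p[i]));
        printf("p -> inf  ->  S <  %5.2f\n", 1.0 / f);
        return 0;
    }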

Computer Organization Computer Architecture


Multiprocessors 18 Characteristics of Multiprocessors
6
FLYNN’s HARDWARE TAXONOMY
I: Instruction Stream
D: Data Stream
[S or M] I [S or M] D
SI: Single Instruction Stream
- All processors are executing the same instruction in the same cycle
- Instruction may be conditional
- For Multiple processors, the control processor issues an instruction
MI: Multiple Instruction Stream
- Different processors may be simultaneously
executing different instructions
SD: Single Data Stream
- All of the processors are operating on the same
data items at any given time
MD: Multiple Data Stream
- Different processors may be simultaneously
operating on different data items

SISD : standard serial computer


MISD : very rare
MIMD and SIMD : Parallel processing computers
Computer Organization Computer Architecture
Multiprocessors 18 Characteristics of Multiprocessors
7
COUPLING OF PROCESSORS

Tightly Coupled System


- Tasks and/or processors communicate in a highly synchronized
fashion
- Communicates through a common shared memory
- Shared memory system
Loosely Coupled System
- Tasks or processors do not communicate in a
synchronized fashion
- Communicates by message passing packets
- Overhead for data exchange is high
- Distributed memory system

Computer Organization Computer Architecture


Multiprocessors 18 Characteristics of Multiprocessors
8
GRANULARITY OF PARALLELISM
Granularity of Parallelism
Coarse-grain

- A task is broken into a handful of pieces, each


of which is executed by a powerful processor
- Processors may be heterogeneous
- Computation/communication ratio is very high

Medium-grain

- Tens to few thousands of pieces


- Processors typically run the same code
- Computation/communication ratio is often hundreds or more

Fine-grain

- Thousands to perhaps millions of small pieces, executed by very


small, simple processors or through pipelines
- Processors typically have instructions broadcasted to them
- Compute/communicate ratio often near unity
Computer Organization Computer Architecture
Multiprocessors 189 Characteristics of Multiprocessors
MEMORY
Shared (Global) Memory
- A Global Memory Space accessible by all processors
- Processors may also have some local memory
Distributed (Local, Message-Passing) Memory
- All memory units are associated with processors
- To retrieve information from another processor's
memory a message must be sent there
Uniform Memory
- All processors take the same time to reach all memory locations
Nonuniform (NUMA) Memory
- Memory access is not uniform

SHARED MEMORY
DISTRIBUTED MEMORY
Memory
Network

Network

Processors Processors/Memory

Computer Organization Computer Architecture


Multiprocessors 19 Characteristics of Multiprocessors
0
SHARED MEMORY MULTIPROCESSORS
M M M
...

Buses,
Interconnection Network Multistage IN,
Crossbar Switch

P P ... P

Characteristics

All processors have equally direct access to one


large memory address space
Example systems

- Bus and cache-based systems: Sequent Balance, Encore Multimax


- Multistage IN-based systems: Ultracomputer, Butterfly, RP3, HEP
- Crossbar switch-based systems: C.mmp, Alliant FX/8
Limitations

Memory access latency; Hot spot problem

Computer Organization Computer Architecture


Multiprocessors 19 Characteristics of Multiprocessors
1
MESSAGE-PASSING MULTIPROCESSORS
Message-Passing Network Point-to-point connections

P P ... P

M M ... M

Characteristics

- Interconnected computers
- Each processor has its own memory, and
communicate via message-passing

Example systems

- Tree structure: Teradata, DADO


- Mesh-connected: Rediflow, Series 2010, J-Machine
- Hypercube: Cosmic Cube, iPSC, NCUBE, FPS T Series, Mark III

Limitations

- Communication overhead; Hard to program


Computer Organization Computer Architecture
Multiprocessors 19 Interconnection Structure
2
INTERCONNECTION STRUCTURES

* Time-Shared Common Bus


* Multiport Memory
* Crossbar Switch
* Multistage Switching Network
* Hypercube System

Bus
All processors (and memory) are connected to a
common bus or busses
- Memory access is fairly uniform, but not very scalable

Computer Organization Computer Architecture


Multiprocessors 19 Interconnection Structure
3
BUS
- A collection of signal lines that carry module-to-module communication
- Data highways connecting several digital system elements
Operations of Bus
Devices
M3 S7 M6 S5 M4
S2

Bus

M3 wishes to communicate with S5


[1] M3 sends signals (address) on the bus that causes
S5 to respond
[2] M3 sends data to S5 or S5 sends data to
M3(determined by the command line)

Master Device: Device that initiates and controls the communication


Slave Device: Responding device
Multiple-master buses
-> Bus conflict
-> need bus arbitration

Computer Organization Computer Architecture


Multiprocessors 19 Interconnection Structure
4
SYSTEM BUS STRUCTURE FOR MULTIPROCESSORS

Local Bus

Common System Local


Shared Bus CPU IOP
Memory
Memory Controller

SYSTEM BUS

System Local System Local


Bus CPU IOP Bus CPU
Memory Memory
Controller Controller

Local Bus Local Bus

Computer Organization Computer Architecture


Multiprocessors 19 Interconnection Structure
5
MULTIPORT MEMORY
Multiport Memory Module
- Each port serves a CPU

Memory Module Control Logic


- Each memory module has control logic
- Resolve memory module conflicts Fixed priority among CPUs

Advantages
- Multiple paths -> high transfer rate
Memory Modules
Disadvantages
MM 1 MM 2 MM 3 MM 4
- Memory control logic
- Large number of cables and
connections
CPU 1

CPU 2

CPU 3

CPU 4

Computer Organization Computer Architecture


Multiprocessors 19 Interconnection Structure
6
CROSSBAR SWITCH
Memory modules

MM1 MM2 MM3 MM4

CPU1

CPU2

CPU3

CPU4

Block Diagram of Crossbar Switch

} data,address, and
control from CPU 1
data

Memory
address
Multiplexers
and } data,address, and
control from CPU 2
Module arbitration
R/W
logic
memory
enable
} data,address, and
control from CPU 3

} data,address, and
control from CPU 4

Computer Organization Computer Architecture


Multiprocessors 19 Interconnection Structure
7
MULTISTAGE SWITCHING NETWORK

Interstage Switch

0 0
A A

1 1
B B

A connected to 0 A connected to 1

0 0
A A

1 1
B B

B connected to 0 B connected to 1

Computer Organization Computer Architecture


Multiprocessors 19 Interconnection Structure
8
MULTISTAGE INTERCONNECTION NETWORK
Binary Tree with 2 x 2 Switches 0
000
0 1
001
1
0
010
P1 0
1
1 011
P2
0
100
0
1
1 101

0
110
1
111
8x8 Omega Switching Network
0 000
1 001

2 010
3 011

4 100
5 101

6 110
7 111

Computer Organization Computer Architecture


Multiprocessors 19 Interconnection Structure
9
HYPERCUBE INTERCONNECTION

n-dimensional hypercube (binary n-cube)

- p = 2n
- processors are conceptually on the corners of a
n-dimensional hypercube, and each is directly
connected to the n neighboring nodes
- Degree = n
011 111

010
0 01 11 110

101
001

1 00 10 100
000

One-cube Two-cube Three-cube
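Since two hypercube nodes are neighbours exactly when their binary addresses differ in one bit, the n neighbours of a node are found by flipping each address bit in turn; a small C sketch for the three-cube:

    #include <stdio.h>

    int main(void)
    {
        int n = 3, p = 1 << n;                      /* p = 2^n processors */
        for (int node = 0; node < p; node++) {
            printf("node %d%d%d has neighbours:",   /* 3-bit address      */
                   (node >> 2) & 1, (node >> 1) & 1, node & 1);
            for (int d = 0; d < n; d++)
                printf(" %d", node ^ (1 << d));     /* flip one address bit */
            printf("\n");
        }
        return 0;
    }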

Computer Organization Computer Architecture


Multiprocessors 20 Interprocessor Arbitration
0
INTERPROCESSOR ARBITRATION
Bus
Board level bus
Backplane level bus
Interface level bus

System Bus - A Backplane level bus

- Printed Circuit Board


- Connects CPU, IOP, and Memory
- Each of CPU, IOP, and Memory board can be
plugged into a slot in the backplane(system bus)
- Bus signals are grouped into 3 groups
e.g. IEEE standard 796 bus
Data, Address, and Control(plus power) - 86 lines
Data: 16(multiple of 8)
Address: 24
Control: 26
Power: 20
- Only one of CPU, IOP, and Memory can be
granted to use the bus at a time
- Arbitration mechanism is needed to handle
multiple requests
Computer Organization Computer Architecture
Multiprocessors 20 Interprocessor Arbitration
1
SYNCHRONOUS & ASYNCHRONOUS DATA TRANSFER
Synchronous Bus
Each data item is transferred over a time slice
known to both source and destination unit
- Common clock source
- Or separate clock and synchronization signal
is transmitted periodically to synchronize
the clocks in the system

Asynchronous Bus

* Each data item is transferred by Handshake


mechanism
- Unit that transmits the data transmits a control
signal that indicates the presence of data
- Unit receiving the data responds with
another control signal to acknowledge the
receipt of the data

* Strobe pulse - supplied by one of the units to


indicate to the other unit when the data transfer
has to occur

Computer Organization Computer Architecture


Multiprocessors 20 Interprocessor Arbitration
2
BUS SIGNALS
- address
- data
Bus signal allocation - control
- arbitration
- interrupt
- timing
- power, ground

IEEE Standard 796 Multibus Signals


Data and address
Data lines (16 lines) DATA0 - DATA15
Address lines (24 lines) ADRS0 - ADRS23
Data transfer
Memory read MRDC
Memory write MWTC
IO read IORC
IO write IOWC
Transfer acknowledge TACK (XACK)
Interrupt control
Interrupt request INT0 - INT7
interrupt acknowledge INTA

Computer Organization Computer Architecture


Multiprocessors 20 Interprocessor Arbitration
3
BUS SIGNALS

IEEE Standard 796 Multibus Signals (Cont’d)

Miscellaneous control
Master clock CCLK
System initialization INIT
Byte high enable BHEN
Memory inhibit (2 lines) INH1 - INH2
Bus lock LOCK
Bus arbitration
Bus request BREQ
Common bus request CBRQ
Bus busy BUSY
Bus clock BCLK
Bus priority in BPRN
Bus priority out BPRO
Power and ground (20 lines)

Computer Organization Computer Architecture


Multiprocessors 20 Interprocessor Arbitration
4
INTERPROCESSOR ARBITRATION STATIC ARBITRATION

Serial Arbitration Procedure


Highest
priority
To next
arbiter
1 PI Bus PO PI Bus PO PI Bus PO PI Bus PO
arbiter 1 arbiter 2 arbiter 3 arbiter 4

Bus busy line

Parallel Arbitration Procedure


Bus Bus Bus Bus
arbiter 1 arbiter 2 arbiter 3 arbiter 4
Ack Req Ack Req Ack Req Ack Req

Bus busy line

4x2
Priority encoder

2x4
Decoder

Computer Organization Computer Architecture


Multiprocessors 20 Interprocessor Arbitration
5
INTERPROCESSOR ARBITRATION DYNAMIC ARBITRATION

Priorities of the units can be dynamically changeable


while the system is in operation

Time Slice
Fixed length time slice is given sequentially to
each processor, round-robin fashion

Polling
Unit address polling - Bus controller advances
the address to identify the requesting unit

LRU

FIFO

Rotating Daisy Chain


Conventional Daisy Chain - Highest priority to the
nearest unit to the bus controller
Rotating Daisy Chain - Highest priority to the unit
that is nearest to the unit that has
most recently accessed the bus(it
becomes the bus controller)
Computer Organization Computer Architecture
Multiprocessors 20 Interprocessor Communication and Synchronization
6
INTERPROCESSOR COMMUNICATION
Interprocessor Communication Shared Memory
Receiving
Processor
Communication Area
Sending
Processor Mark
Receiver(s) Receiving
Processor
Message
..
.

Receiving
Processor

Interrupt

Shared Memory
Receiving
Processor
Sending Communication Area
Processor
Instruction Mark
Receiver(s) Receiving
Processor
Message
..
.

Receiving
Processor

Computer Organization Computer Architecture


Multiprocessors 20 Interprocessor Communication and Synchronization
7
INTERPROCESSOR SYNCHRONIZATION
Synchronization
Communication of control information between processors
- To enforce the correct sequence of processes
- To ensure mutually exclusive access to shared writable data

Hardware Implementation

Mutual Exclusion with a Semaphore


Mutual Exclusion
- One processor to exclude or lock out access to shared resource by
other processors when it is in a Critical Section
- Critical Section is a program sequence that,
once begun, must complete execution before
another processor accesses the same shared resource

Semaphore
- A binary variable
- 1: A processor is executing a critical section,
     which is not available to other processors
  0: Available to any requesting processor
- Software controlled Flag that is stored in
  memory that all processors can access

Computer Organization Computer Architecture


Multiprocessors 20 Interprocessor Communication and Synchronization
8
SEMAPHORE
Testing and Setting the Semaphore

- Avoid two or more processors testing or setting the same semaphore
  at the same time
- Otherwise two or more processors may enter the
  same critical section at the same time
- Must be implemented with an indivisible operation

R <- M[SEM] / Test semaphore /


M[SEM] <- 1 / Set semaphore /

These are being done while locked, so that other processors cannot test
and set while the current processor is executing these instructions

If R=1, another processor is executing the


critical section, so the processor that executed
this instruction does not access the
shared memory

If R=0, available for access, set the semaphore to 1 and access

The last instruction in the program must clear the semaphore
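In C11 the indivisible test-and-set described above maps onto atomic_flag_test_and_set, which returns the old value and sets the flag in one atomic step; the function names below are illustrative only.

    #include <stdio.h>
    #include <stdatomic.h>

    atomic_flag sem = ATOMIC_FLAG_INIT;         /* 0: critical section free     */

    void enter_critical(void)
    {
        while (atomic_flag_test_and_set(&sem))  /* R <- SEM; SEM <- 1 (atomic)  */
            ;                                   /* R was 1: busy-wait           */
    }

    void leave_critical(void)
    {
        atomic_flag_clear(&sem);                /* last instruction clears SEM  */
    }

    int main(void)
    {
        enter_critical();
        puts("inside critical section");
        leave_critical();
        return 0;
    }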

Computer Organization Computer Architecture


Multiprocessors 20 Cache Coherence
9
CACHE COHERENCE
Caches are Coherent X = 52 Main memory

Bus

X = 52 X = 52 X = 52 Caches

P1 P2 P3 Processors

Cache Incoherency in X = 120 Main memory


Write Through Policy
Bus

X = 120 X = 52 X = 52 Caches

P1 P2 P3 Processors

Cache Incoherency in Write Back Policy X = 52 Main memory

Bus

X = 120 X = 52 X = 52 Caches

P1 P2 P3 Processors

Computer Organization Computer Architecture


Multiprocessors 21 Cache Coherence
0
MAINTAINING CACHE COHERENCY
Shared Cache
- Disallow private cache
- Access time delay

Software Approaches
* Read-Only Data are Cacheable
- Private Cache is for Read-Only data
- Shared Writable Data are not cacheable
- Compiler tags data as cacheable and noncacheable
- Degrade performance due to software overhead

* Centralized Global Table


- Status of each memory block is maintained in CGT: RO(Read-Only); RW(Read and Write)
- All caches can have copies of RO blocks
- Only one cache can have a copy of RW block

Hardware Approaches
* Snoopy Cache Controller
- Cache Controllers monitor all the bus requests from CPUs and IOPs
- All caches attached to the bus monitor the write operations
- When a word in a cache is written, memory is also updated (write through)
- Local snoopy controllers in all other caches check their memory to determine if they have
a copy of that word; If they have, that location is marked invalid(future reference to
this location causes cache miss)

Computer Organization Computer Architecture


Multiprocessors 21 Parallel Computing
1
PARALLEL COMPUTING
Grosch’s Law

Grosch’s Law states that the speed of computers is proportional to the


square of their cost. Thus if you are looking for a fast computer, you are
better off spending your money buying one large computer than two
small computers and connecting them.
Grosch’s Law is true within classes of computers, but not true between
classes. Computers may be priced according to Grosch’s Law, but the
Law cannot be true asymptotically.

Minsky’s Conjecture

Minsky’s conjecture states that the speedup achievable


by a parallel computer increases as the logarithm of the
number of processing elements, thus making large-scale
parallelism unproductive.
Many experimental results have shown linear speedup for over
100 processors.

Computer Organization Computer Architecture


Multiprocessors 21 Parallel Computing
2
PARALLEL COMPUTING
History

History tells us that the speed of traditional single CPU


computers has increased 10-fold every 5 years.
Why should great effort be expended to devise a parallel
computer that will perform tasks 10 times faster when,
by the time the new architecture is developed and
implemented, single CPU computers will be just as fast.
Utilizing parallelism is better than waiting.

Amdahl’s Law

A small number of sequential operations can effectively


limit the speedup of a parallel algorithm.
Let f be the fraction of operations in a computation that must be performed sequentially,
where 0 < f < 1. Then the maximum speedup S achievable by a parallel computer with p processors
performing the computation is S < 1 / [f + (1 - f) / p]. For example, if 10% of the computation must be
performed sequentially, then the maximum speedup achievable is 10, no matter how many
processors a parallel computer has.

There exist some parallel algorithms with almost no sequential operations. As the problem size(n)
increases, f becomes smaller (f -> 0 as n->∞). In this case, lim S = p.
n→∞

Computer Organization Computer Architecture


Multiprocessors 21 Parallel Computing
3
PARALLEL COMPUTING

Pipelined Computers are Sufficient

Most supercomputers are vector computers, and most of the successes


attributed to supercomputers have been accomplished on pipelined vector
processors, especially the Cray-1 and Cyber-205.
If only vector operations can be executed at high speed, supercomputers
will not be able to tackle a large number of important problems. The
latest supercomputers incorporate both pipelining and high level
parallelism (e.g., Cray-2)

Software Inertia

Billions of dollars worth of FORTRAN software exists.


Who will rewrite them? Virtually no programmers have
any experience with a machine other than a single CPU
computer. Who will retrain them ?

Computer Organization Computer Architecture


Multiprocessors 21 Interconnection Structure
4
INTERCONNECTION NETWORKS

Switching Network (Dynamic Network)


Processors (and Memory) are connected to routing
switches like in telephone system
- Switches might have queues(combining logic),
which improve functionality but increase latency
- Switch settings may be determined by message
headers or preset by controller
- Connections can be packet-switched or circuit-
switched(remain connected as long as it is needed)
- Usually NUMA, blocking, often scalable and upgradable

Point-Point (Static Network)


Processors are directly connected to only certain other processors and
must go multiple hops to get to additional processors

- Usually distributed memory


- Hardware may handle only single hops, or multiple hops
- Software may mask hardware limitations
- Latency is related to graph diameter, among many other factors
- Usually NUMA, nonblocking, scalable, upgradable
- Ring, Mesh, Torus, Hypercube, Binary Tree
Computer Organization Computer Architecture
Multiprocessors 21 Interconnection Structure
5
INTERCONNECTION NETWORKS

Multistage Interconnect

Switch Processor

Bus

Computer Organization Computer Architecture


Multiprocessors 21 Interconnection Structure
6
INTERCONNECTION NETWORKS

Static Topology - Direct Connection

- Provide a direct inter-processor communication path


- Usually for distributed-memory multiprocessor

Dynamic Topology - Indirect Connection

- Provide a physically separate switching network


for inter-processor communication
- Usually for shared-memory multiprocessor

Direct Connection
Interconnection Network

A graph G(V,E)
V: a set of processors (nodes)
E: a set of wires (edges)

Performance Measures: - degree, diameter, etc

Computer Organization Computer Architecture


Multiprocessors 21 Interconnection Structure
7
INTERCONNECTION NETWORKS
Complete connection

- Every processor is directly connected to every other processors


- Diameter = 1, Degree = p - 1
- # of wires = p ( p - 1 ) / 2; dominant cost
- Fan-in/fanout limitation makes it impractical for large p
- Interesting as a theoretical model because algorithm bounds for this
model are automatically lower bounds for all direct connection machines

Ring

- Degree = 2, (not a function of p)


- Diameter = ⎣ p/2 ⎦

Computer Organization Computer Architecture


Multiprocessors 21 Interconnection Structure
8
INTERCONNECTION NETWORKS

• 2-Mesh

[Figure: an m x m mesh of processors, with m^2 = p]

- Degree = 4
- Diameter = 2(m - 1)
- In general, an n-dimensional mesh has
  diameter = n (p^(1/n) - 1)
- Diameter can be halved by having wrap-around
  connections (-> Torus)
- Ring is a 1-dimensional mesh with wrap-around
  connection
Computer Organization Computer Architecture
Multiprocessors 21 Interconnection Structure
9
INTERCONNECTION NETWORK

Binary Tree

- Degree = 3
- Diameter = 2 log2((p + 1) / 2)

Computer Organization Computer Architecture


Multiprocessors 22 Interconnection Structure
0
MIN SPACE

MIN
Banyan network
=(unique path network) Multiple Path Network

Delta network [Patel81] PM2I network

• Data Manipulator
• Baseline [Wu80] [Feng74]
• Flip [Batcher76] • Augmented DM
• Indirect binary [Siegel78]
n-cube [Peas77] • Inverse ADM
• Omega [Lawrie75] [Siegel79]
• Regular SW banyan • Gamma [Parker84]
[Goke73]

• Extra stage Cube


[Adams82]
• Replicated/Dilated
Delta network
[Kruskal83]
• B-delta [Yoon88]

Permutation/Sorting Network
(N!)
• Clos network [53]
• Benes network [62]
• Batcher sorting
network [68]

Computer Organization Computer Architecture


Multiprocessors 22
1
SOME CURRENT PARALLEL COMPUTERS
DM-SIMD
• AMT DAP
• Goodyear MPP
• Thinking Machines CM series
• MasPar MP1
• IBM GF11
SM-MIMD
• Alliant FX
• BBN Butterfly
• Encore Multimax
• Sequent Balance/Symmetry
• CRAY 2, X-MP, Y-MP
• IBM RP3
• U. Illinois CEDAR
DM-MIMD
• Intel iPSC series, Delta machine
• NCUBE series
• Meiko Computing Surface
• Carnegie-Mellon/ Intel iWarp

Computer Organization Computer Architecture
