CA Exam
Performance Metrics:
● Clock Speed: How fast the processor operates, measured in GHz (billions of clock
cycles per second). A higher clock speed means more cycles per second and, all else
being equal, more instructions completed per second (see the worked example after this list).
● CPI (Cycles Per Instruction): Measures average clock cycles required per
instruction. Lower CPI = better performance.
● Throughput vs. Latency:
○ Throughput: Number of tasks completed per unit time.
○ Latency: Time taken to complete a single task.
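A minimal worked example (Python, with made-up numbers) tying these metrics together:
CPU time = instruction count × CPI ÷ clock rate, and throughput follows from it.

# Hypothetical program: 2 million instructions, average CPI of 1.5, 2 GHz clock
instruction_count = 2_000_000
cpi = 1.5
clock_rate_hz = 2e9

cycles = instruction_count * cpi
cpu_time_s = cycles / clock_rate_hz               # latency of this one program
throughput_ips = instruction_count / cpu_time_s   # instructions completed per second

print(f"CPU time: {cpu_time_s * 1e3:.3f} ms")          # 1.500 ms
print(f"Throughput: {throughput_ips:.2e} instructions/s")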
Von Neumann Architecture:
● Key Features:
○ Single memory stores both instructions and data.
○ Executes instructions sequentially using the fetch-decode-execute cycle.
● Components:
○ CPU: Fetches instructions, decodes them, executes operations.
○ Memory: Stores data and instructions.
○ I/O Devices: Communicate with the external world.
○ Bus: Transfers data between CPU, memory, and I/O.
Combinational Circuits:
● Circuits where the output depends only on current inputs (no memory).
● Examples:
○ Adders (Half, Full), Subtractors, Multiplexers, Demultiplexers (see the code sketch after this list).
● Half-Adder:
○ Adds two binary digits A and B.
○ Outputs:
■ Sum = A ⊕ B.
■ Carry (Cout) = A ⋅ B.
● Full-Adder:
○ Adds A, B, and a Carry-In (Cin).
○ Outputs:
■ Sum = A ⊕ B ⊕ Cin.
■ Carry-Out = (A ⋅ B) + (Cin ⋅ (A ⊕ B)).
● Multiplexer:
○ Selects one input from N inputs based on log₂ N control signals.
○ Example: 4:1 MUX has 4 inputs, 2 control lines, and 1 output.
● Demultiplexer:
○ Routes a single input to one of N outputs based on control signals.
● Decoder:
○ Converts binary inputs into one-hot outputs.
○ Example: A 2-to-4 decoder takes a 2-bit binary input and enables 1 of 4 outputs.
● Encoder:
○ Converts one-hot inputs into binary outputs.
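A minimal Python sketch of the combinational blocks above, written as plain functions on
0/1 values. This is a software illustration of the logic equations, not hardware description code.

def half_adder(a, b):
    # Sum = A XOR B, Carry = A AND B
    return a ^ b, a & b

def full_adder(a, b, cin):
    # Sum = A XOR B XOR Cin; Cout = (A AND B) OR (Cin AND (A XOR B))
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def mux4(inputs, s1, s0):
    # 4:1 MUX: the two select bits pick one of the four inputs
    return inputs[(s1 << 1) | s0]

def decoder_2to4(b1, b0):
    # 2-to-4 decoder: 2-bit binary input -> one-hot output
    out = [0, 0, 0, 0]
    out[(b1 << 1) | b0] = 1
    return out

def encoder_4to2(one_hot):
    # 4-to-2 encoder: one-hot input -> 2-bit binary output (assumes exactly one line is 1)
    idx = one_hot.index(1)
    return (idx >> 1) & 1, idx & 1

print(half_adder(1, 1))              # (0, 1): 1 + 1 = 10 in binary
print(full_adder(1, 1, 1))           # (1, 1): 1 + 1 + 1 = 11 in binary
print(mux4([10, 20, 30, 40], 1, 0))  # 30: select = 0b10 = 2
print(decoder_2to4(1, 1))            # [0, 0, 0, 1]
print(encoder_4to2([0, 0, 1, 0]))    # (1, 0)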
Sequential Circuits:
Flip-Flops:
Counters:
Verilog Basics:
Verification Techniques:
Von Neumann Architecture:
● Definition: A computing model where instructions and data share the same
memory space and are executed sequentially.
● Key Components:
○ CPU (Central Processing Unit): Fetches, decodes, and executes
instructions.
○ Memory: Stores both program instructions and data.
○ I/O Devices: Enable communication with external peripherals.
○ Bus: Connects CPU, memory, and I/O devices for data transfer.
● Fetch-Decode-Execute Cycle:
○ Fetch: Load the next instruction from memory into the CPU.
○ Decode: Interpret the instruction and determine necessary operations.
○ Execute: Perform the operation, possibly accessing memory or I/O.
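A toy Python sketch of the fetch-decode-execute cycle on a made-up two-instruction
ISA (ADD and HALT); the instruction format is invented purely for illustration.

# Hypothetical mini-ISA: ("ADD", register, value) and ("HALT",)
program = [("ADD", 0, 5), ("ADD", 0, 7), ("HALT",)]
registers = [0] * 4
pc = 0

while True:
    instr = program[pc]          # Fetch: read the instruction at the program counter
    pc += 1
    opcode = instr[0]            # Decode: identify the operation and its operands
    if opcode == "ADD":          # Execute: perform the operation
        registers[instr[1]] += instr[2]
    elif opcode == "HALT":
        break

print(registers[0])  # 12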
Instruction Types:
Addressing Modes:
Microarchitecture:
Pipelining:
● Definition: A technique that divides instruction execution into stages,
allowing multiple instructions to be processed simultaneously.
● Stages:
○ Fetch: Load the instruction from memory.
○ Decode: Interpret the instruction.
○ Execute: Perform the operation.
○ Memory Access: Read/write data from/to memory.
○ Write Back: Store the result in a register.
● Benefits: Increased throughput, because the stages of different instructions overlap in time.
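A back-of-the-envelope sketch (Python, idealized: no stalls or hazards) of why overlapping
helps: with a K-stage pipeline, N instructions finish in about K + (N - 1) cycles instead of K × N.

def cycles_unpipelined(n, k):
    return n * k          # each instruction uses all k stages before the next one starts

def cycles_pipelined(n, k):
    return k + (n - 1)    # first instruction fills the pipeline, then one finishes per cycle

n, k = 100, 5
print(cycles_unpipelined(n, k))  # 500
print(cycles_pipelined(n, k))    # 104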
Types of Memory:
Memory Hierarchy:
Cache Basics:
● Definition: A small, fast memory that stores frequently used data for quick
access.
● Cache Hits and Misses:
○ Hit: Data is found in the cache.
○ Miss: Data is not in the cache, requiring access to slower memory.
Replacement Policies:
● Least Recently Used (LRU): Replaces the least recently accessed block.
● First In, First Out (FIFO): Replaces the oldest block.
● Random: Randomly selects a block for replacement.
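A small Python sketch that counts hits and misses for a fully associative cache under LRU
replacement (access pattern is made up; FIFO or random would only change which block is evicted).

def simulate_lru(accesses, capacity):
    cache = []                       # most recently used block is kept at the end
    hits = misses = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.remove(block)      # refresh recency on a hit
        else:
            misses += 1
            if len(cache) == capacity:
                cache.pop(0)         # evict the least recently used block
        cache.append(block)
    return hits, misses

print(simulate_lru([1, 2, 3, 1, 4, 1, 2], capacity=3))  # (2, 5): 2 hits, 5 misses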
Write Policies:
Multilevel Caches:
● Definition: Modern processors use multiple levels of cache (L1, L2, L3).
● Purpose: Reduce average access latency: the small, fast L1 sits closest to the CPU,
backed by the larger, slower L2 and L3.
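A quick worked example (Python, made-up hit rates and latencies) of why multiple levels help,
using average memory access time = hit time + miss rate × miss penalty, applied per level.

def amat(hit_time, miss_rate, miss_penalty):
    # average memory access time in cycles
    return hit_time + miss_rate * miss_penalty

# Hypothetical hierarchy: L1 hit = 1 cycle, L2 hit = 10 cycles, DRAM = 100 cycles
l2_amat = amat(10, 0.20, 100)     # 30 cycles seen by an access that misses in L1
l1_amat = amat(1, 0.05, l2_amat)  # 2.5 cycles on average for every access
print(l1_amat)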
Prefetching:
Page Replacement Algorithms:
1. FIFO (First In, First Out): Replaces the oldest page.
2. Optimal: Replaces the page that won’t be used for the longest time.
3. LRU (Least Recently Used): Replaces the least recently accessed page.
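A similar Python sketch for FIFO page replacement, counting page faults over a reference
string (the classic 12-reference example with 3 frames).

from collections import deque

def fifo_page_faults(references, frames):
    resident = deque()               # pages currently in memory, oldest first
    faults = 0
    for page in references:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.popleft()   # evict the oldest resident page
            resident.append(page)
    return faults

print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], frames=3))  # 9 faults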