CA-1
Topic : Pipelining Control Hazards
Name : Sayan Das
Roll No : 10200223039
Reg No : 231020110191 (2023-24)
Stream : Information Technology
Sem : 4th Sem
Paper : Computer Architecture
Paper Code : (PCC-CS 402)
Contents
1. Introduction to Pipelining
2. Stages of Instruction Pipelining
3. Types of Hazards in Pipelining
4. Understanding Control Hazards
5. Effects of Control Hazards
6. Techniques to Handle Control Hazards
7. Stalling the Pipeline
8. Branch Prediction
9. Branch Prediction Buffer
10. Delayed Branching
11. Advanced Branch Prediction
12. Pipeline Flushing
13. Speculative Execution
14. Performance Comparison
15. Conclusion and References
Introduction to Pipelining
Pipelining in Computer Architecture
Pipelining is a technique used in modern CPUs to enhance execution
speed by enabling parallelism. Instead of executing instructions
sequentially, pipelining divides the instruction cycle into multiple stages,
allowing different instructions to be processed simultaneously at
different execution phases. This improves instruction throughput and
optimizes CPU resource utilization by reducing idle time, significantly
boosting overall performance.
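As a rough illustration (assuming an ideal five-stage pipeline, one cycle per stage, and no hazards), the cycle counts for a small batch of instructions can be compared like this:

```python
# Idealised comparison, assuming one clock cycle per stage and no hazards.
stages = 5                     # IF, ID, EX, MEM, WB
instructions = 10

sequential_cycles = stages * instructions          # 5 * 10 = 50
pipelined_cycles = stages + (instructions - 1)     # 5 + 9  = 14

print(f"sequential: {sequential_cycles} cycles")
print(f"pipelined : {pipelined_cycles} cycles")
print(f"speedup   : {sequential_cycles / pipelined_cycles:.2f}x")  # ~3.57x
```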
Stages of Instruction Pipelining
• Instruction Fetch (IF): The CPU fetches the instruction from memory and loads it into the instruction register.
• Instruction Decode (ID): The fetched instruction is then decoded to determine the required operation and identify the operands. This stage prepares the instruction for execution by extracting the necessary information.
• Execute (EX): The actual operation is performed, such as arithmetic or logical computation using the ALU (Arithmetic Logic Unit).
• Memory Access (MEM): If the instruction involves reading or writing memory (such as load/store operations), the processor accesses memory to fetch or store data.
• Write Back (WB): The final result of the instruction is written back to the registers, ensuring that the computed data is available for subsequent instructions.
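A minimal sketch of how these five stages can overlap in an ideal pipeline; the instruction labels I1-I4 are hypothetical:

```python
# Print a simple cycle-by-cycle diagram for a 5-stage pipeline.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_diagram(n_instructions: int) -> None:
    total_cycles = len(STAGES) + n_instructions - 1
    print("      " + " ".join(f"c{c + 1:<3}" for c in range(total_cycles)))
    for i in range(n_instructions):
        row = ["    "] * total_cycles
        for s, stage in enumerate(STAGES):
            row[i + s] = f"{stage:<4}"   # instruction i is in stage s at cycle i+s
        print(f"I{i + 1:<4} " + " ".join(row))

pipeline_diagram(4)    # I1..I4 flow through IF, ID, EX, MEM, WB one cycle apart
```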
Types of Hazards in Pipelining
1. Structural Hazards
Definition: Structural hazards occur when multiple instructions
require access to the same hardware resource at the same time,
leading to conflicts.
2. Data Hazards
Definition: Data hazards occur when instructions depend on the
results of previous instructions that are still being processed in the
pipeline.
Types of Data Hazards:
1. Read After Write (RAW) - True Dependency: An instruction tries to read a register before a previous instruction has written its result.
2. Write After Read (WAR) - Anti-Dependency: An instruction writes to a register before a previous instruction has read it.
3. Write After Write (WAW) - Output Dependency: Two instructions write to the same register in an incorrect order.
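To make these three dependencies concrete, here is a small sketch; the register names and values are hypothetical, stored in a dictionary so the reads and writes are explicit:

```python
# Hypothetical register file, just to make the dependencies visible.
r = {"r1": 1, "r2": 2, "r3": 3}

# RAW (true dependency): I2 reads r1, which I1 must write first.
r["r1"] = r["r2"] + r["r3"]   # I1: writes r1
r["r4"] = r["r1"] * 2         # I2: reads r1 -> must see I1's result

# WAR (anti-dependency): I2 writes r2, which I1 still has to read.
r["r5"] = r["r2"] + 1         # I1: reads r2
r["r2"] = 0                   # I2: writes r2 -> must not overtake I1's read

# WAW (output dependency): both write r3; the final value must be I2's.
r["r3"] = 10                  # I1: writes r3
r["r3"] = 20                  # I2: writes r3 -> must complete after I1
```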
3. Control Hazards
Definition: Control hazards occur when the pipeline fetches instructions based on a wrong assumption about the program's control flow, which changes at branch instructions (if-else, loops, function calls).
Understanding Control Hazards
What are Control Hazards?
Control hazards, also known as branch hazards, occur in
pipelined processors when the flow of instruction execution is
altered by branch instructions (e.g., jumps, loops, and
conditional statements). These hazards arise because the
processor does not know in advance whether a branch will be
taken or not, leading to incorrect instruction fetching and
execution delays.
Why Do Control Hazards Occur?
Control hazards happen due to the delay in resolving the
outcome of branch instructions. Since the CPU fetches
instructions in advance (instruction prefetching), it might fetch
the wrong instructions when a branch decision changes the
program flow.
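A toy sketch of the problem (hypothetical instruction strings): the front end keeps fetching sequentially past the branch, and if the branch turns out to be taken, those sequential fetches are on the wrong path and must be discarded:

```python
# Toy front end: instructions after the branch are prefetched sequentially
# before the branch outcome is known; a taken branch forces a flush.
program = ["i0: add", "i1: beq L", "i2: sub", "i3: mul", "L: or"]
branch_pc, target_pc = 1, 4
branch_taken = True
resolve_delay = 2                      # outcome known two cycles after fetch

fetched = program[: branch_pc + 1]                                  # up to the branch
fetched += program[branch_pc + 1 : branch_pc + 1 + resolve_delay]   # wrong-path prefetch

if branch_taken:
    print("flush:", fetched[branch_pc + 1:])    # ['i2: sub', 'i3: mul']
    fetched = fetched[: branch_pc + 1] + [program[target_pc]]

print("useful path:", fetched)                  # ['i0: add', 'i1: beq L', 'L: or']
```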
Effects of Control Hazards
• Pipeline Stalls (Bubble Insertion)
• Incorrect Instruction Execution (Misprediction)
• Pipeline Flushing (Discarding Instructions)
• Increased CPI (Cycles Per Instruction)
• Wasted CPU Resources and Power Consumption
Techniques to Handle Control Hazards
1. Stalling the pipeline
2. Branch prediction
3. Delay slots
Stalling the Pipeline
The simplest approach: the CPU pauses instruction fetching until the branch outcome is resolved.
This ensures correct execution but wastes CPU cycles.
Advantage: Simple to implement.
Disadvantage: Reduces performance due to idle
pipeline stages.
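A back-of-the-envelope sketch of the cost of stalling; the branch frequency and stall count below are assumed, illustrative numbers, not measured values:

```python
# Effective CPI grows with branch frequency and the number of bubble
# cycles inserted per branch while the outcome is being resolved.
base_cpi = 1.0            # ideal pipelined CPI
branch_fraction = 0.20    # assumed: 20% of instructions are branches
stall_cycles = 2          # assumed: 2 bubbles per branch

effective_cpi = base_cpi + branch_fraction * stall_cycles
print(f"effective CPI with stalling: {effective_cpi:.2f}")   # 1.40
```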
Branch Prediction
The processor predicts the branch outcome and continues fetching
instructions accordingly.
If the prediction is correct → Execution continues smoothly.
If incorrect → The incorrect instructions are discarded (pipeline flush).
Advantage: Reduces stalls and improves efficiency.
Disadvantage: Mispredictions cause wasted CPU cycles.
Types of Branch Prediction:
Static Prediction (fixed rule-based)
1. Always assumes a branch is taken or not taken.
2. Works well for simple loops but not for dynamic conditions.
Dynamic Prediction (history-based)
1. Uses past execution patterns to predict future branches (e.g.,
2-bit predictor).
2. Used in modern CPUs for higher accuracy.
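A minimal sketch of the 2-bit (saturating counter) predictor mentioned above: states 0-1 predict not taken, states 2-3 predict taken, and each actual outcome moves the counter one step:

```python
class TwoBitPredictor:
    """2-bit saturating counter: states 0-1 predict not taken, 2-3 predict taken."""

    def __init__(self) -> None:
        self.state = 1                              # start weakly not-taken

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool) -> None:
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

# A loop branch that is taken 9 times and then falls through once.
p = TwoBitPredictor()
hits = 0
for taken in [True] * 9 + [False]:
    hits += p.predict() == taken
    p.update(taken)
print(f"correct predictions: {hits}/10")            # 8/10 for this history
```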
Branch Prediction Buffers
A branch prediction buffer (also called a branch history table) is a small memory indexed by the low-order bits of a branch instruction's address. Each entry holds one or more bits recording whether that branch was recently taken.
When the branch is fetched again, the stored bits supply the prediction, and they are updated once the actual outcome is known.
Advantage: Simple hardware that makes dynamic prediction practical.
Disadvantage: Branches whose addresses share the same low-order bits can alias to the same entry, and a misprediction still forces a pipeline flush.
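A minimal sketch of such a buffer, assuming 16 entries indexed by the low-order bits of the branch address, each holding its own 2-bit counter:

```python
# Branch prediction buffer: a small table indexed by the low-order bits of
# the branch instruction's address; each entry is a 2-bit counter (0..3).
ENTRIES = 16
table = [1] * ENTRIES                     # every entry starts weakly not-taken

def index(branch_addr: int) -> int:
    return branch_addr % ENTRIES          # low-order bits select the entry

def predict(branch_addr: int) -> bool:
    return table[index(branch_addr)] >= 2

def update(branch_addr: int, taken: bool) -> None:
    i = index(branch_addr)
    table[i] = min(3, table[i] + 1) if taken else max(0, table[i] - 1)

# Two branches at different addresses train different entries.
update(0x40, True); update(0x40, True)    # this branch becomes predicted taken
update(0x44, False)                       # this one stays predicted not taken
print(predict(0x40), predict(0x44))       # True False
```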
Delayed Branching
Delayed branching is a technique used to reduce control
hazards in pipelined processors by executing one or more
instructions after a branch instruction, regardless of whether the
branch is taken or not. This ensures that useful work is done while
waiting for the branch decision, minimizing pipeline stalls.
Advantages of Delayed Branching
• Keeps the pipeline busy with useful work while the branch outcome is being resolved, reducing stalls.
• Needs no extra prediction hardware; the compiler (or programmer) fills the delay slot.
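An illustrative sketch (hypothetical instruction strings) of how a compiler might fill the delay slot with an instruction that is independent of the branch, so useful work is done whichever way the branch goes:

```python
# Before scheduling: the delay slot after the branch is wasted on a NOP.
original = [
    "add r1, r2, r3",    # independent of the branch condition
    "beq r4, r5, L1",    # branch
    "nop",               # delay slot filled with a no-op
]

# After scheduling: the independent add is moved into the delay slot.
scheduled = [
    "beq r4, r5, L1",
    "add r1, r2, r3",    # delay slot now does useful work
]

for label, seq in (("original", original), ("scheduled", scheduled)):
    print(f"{label}: " + " ; ".join(seq))
```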
References:
Computer Organization and Design – David A. Patterson & John L. Hennessy
Online Tutorials and Documentation
GeeksforGeeks: https://www.geeksforgeeks.org/
TutorialsPoint: https://www.tutorialspoint.com/
THANK YOU