CA Examen

The document provides an overview of computer architecture, detailing its definition, key components, abstraction levels, and performance metrics. It covers various logic designs, including combinational and sequential circuits, as well as hardware description languages like Verilog. Additionally, it discusses memory organization, caching techniques, and virtual memory management, highlighting the importance of efficient data handling in computer systems.

Uploaded by Musanif Tairi

L1: Introduction and Basics

What is Computer Architecture?

● Definition: Computer Architecture is the study of the design, organization, and functionality of computer systems. It focuses on how hardware and software interact to execute programs efficiently.
●​ Key Components:
○​ Processor (CPU): Executes instructions using control, arithmetic logic
unit (ALU), and registers.
○​ Memory: Stores data and instructions (RAM for temporary, ROM for
permanent storage).
○​ Input/Output (I/O): Devices for communication with the external world
(e.g., keyboards, monitors).

Abstraction Levels in Computing:

1. Digital Logic Level: Basic gates and circuits.
2.​ Microarchitecture: Internal structure of the processor.
3.​ Instruction Set Architecture (ISA): The programming interface between
hardware and software.
4.​ Operating System Level: Manages resources like memory, I/O, and
processes.
5.​ Application Level: Software that runs on the system.

Performance Metrics:

● Clock Speed: Determines how fast the processor operates (measured in GHz). A faster clock generally means more instructions processed per second, all else being equal.
●​ CPI (Cycles Per Instruction): Measures average clock cycles required per
instruction. Lower CPI = better performance.
●​ Throughput vs. Latency:
○​ Throughput: Number of tasks completed per unit time.
○​ Latency: Time taken to complete a single task.
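These metrics combine in the standard CPU-time equation (time = instruction count × CPI ÷ clock rate); a minimal sketch with illustrative numbers, not figures from the source:

```python
def cpu_time(instruction_count, cpi, clock_hz):
    """CPU time = instructions x cycles-per-instruction / clock rate (Hz)."""
    return instruction_count * cpi / clock_hz

# Example: 1 billion instructions, CPI of 2, 2 GHz clock
t = cpu_time(1_000_000_000, 2.0, 2_000_000_000)
print(t)  # 1.0 second
```

Lowering CPI or raising the clock rate both cut execution time, which is why both appear as performance metrics above.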

Von Neumann Architecture:

●​ Key Features:
○​ Single memory stores both instructions and data.
○​ Executes instructions sequentially using fetch-decode-execute cycle.
●​ Components:
○​ CPU: Fetches instructions, decodes them, executes operations.
○​ Memory: Stores data and instructions.
○​ I/O Devices: Communicate with the external world.
○​ Bus: Transfers data between CPU, memory, and I/O.

L2b: Combinational Logic I

Basics of Logic Gates:

● AND Gate: Outputs 1 if both inputs are 1.
●​ OR Gate: Outputs 1 if any input is 1.
●​ NOT Gate: Inverts the input (0 → 1, 1 → 0).
●​ Derived Gates: NAND (NOT + AND), XOR (exclusive OR), etc.

Combinational Circuits:

●​ Circuits where the output depends only on current inputs (no memory).
●​ Examples:
○​ Adders (Half, Full), Subtractors, Multiplexers, Demultiplexers.

Half-Adder and Full-Adder:

● Half-Adder:
○ Adds two binary digits A and B.
○ Outputs:
■ Sum = A ⊕ B.
■ Carry (Cout) = A · B.
● Full-Adder:
○ Adds A, B, and a Carry-In (Cin).
○ Outputs:
■ Sum = A ⊕ B ⊕ Cin.
■ Carry-Out = (A · B) + (Cin · (A ⊕ B)).
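The half- and full-adder equations translate directly into bitwise operators; a small behavioral sketch:

```python
def half_adder(a, b):
    # Sum = A XOR B, Carry = A AND B
    return a ^ b, a & b

def full_adder(a, b, cin):
    # Sum = A XOR B XOR Cin; Cout = A.B + Cin.(A XOR B)
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = binary 11
```

Chaining full adders bit by bit, carry-out to carry-in, gives a ripple-carry adder.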

L3: Combinational Logic II

Multiplexers (MUX) and Demultiplexers (DEMUX):

●​ Multiplexer:
○ Selects one input from N inputs based on log₂(N) control signals.
○​ Example: 4:1 MUX has 4 inputs, 2 control lines, and 1 output.
●​ Demultiplexer:
○​ Routes a single input to one of N outputs based on control signals.
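Behaviorally, a MUX is an indexed read and a DEMUX is an indexed write; a sketch of the 4:1 case (function names are illustrative):

```python
def mux4(inputs, sel):
    """4:1 MUX: sel (0-3, i.e. 2 control bits) picks one of 4 inputs."""
    return inputs[sel]

def demux4(value, sel):
    """1:4 DEMUX: routes value to output line sel; other lines stay 0."""
    out = [0, 0, 0, 0]
    out[sel] = value
    return out

print(mux4([0, 1, 0, 1], 1))  # 1
print(demux4(1, 2))           # [0, 0, 1, 0]
```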

Decoders and Encoders:

●​ Decoder:
○​ Converts binary inputs into one-hot outputs.
○​ Example: 2-to-4 Decoder takes 2-bit binary input and enables 1 of 4
outputs.
●​ Encoder:
○​ Converts one-hot inputs into binary outputs.

Karnaugh Maps (K-Maps):

● Purpose: Simplify Boolean expressions for minimized circuits.
●​ How It Works:
○​ Arrange truth table values in a grid.
○​ Combine adjacent 1s into groups (powers of 2).

L4: Sequential Logic Design I

Sequential Circuits:

● Unlike combinational circuits, sequential circuits depend on current inputs and past states (have memory).

Flip-Flops:

● SR Flip-Flop: Two inputs (Set, Reset); output changes based on inputs.
●​ D Flip-Flop: Single data input; stores value on clock edge.
● JK Flip-Flop: Combines SR functionality; toggles when J = K = 1.
●​ T Flip-Flop: Toggles the output on every clock pulse.

Registers:

● Groups of Flip-Flops used to store multiple bits.
●​ Types:
○​ Shift Registers: Shift data in/out serially.
○​ Parallel Registers: Store multiple bits simultaneously.
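One step of a serial shift register can be sketched as a list operation (a behavioral model, not a hardware description):

```python
def shift_register_step(reg, serial_in):
    """Shift one bit in at the MSB end; the LSB falls out as serial output."""
    serial_out = reg[-1]
    return [serial_in] + reg[:-1], serial_out

new_reg, out_bit = shift_register_step([1, 0, 1, 1], 0)
print(new_reg, out_bit)  # [0, 1, 0, 1] 1
```

A parallel register, by contrast, would replace all four bits in a single clock edge.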

L5a: Sequential Logic Design II

Counters:

● Sequential circuits for counting pulses.
●​ Types:
○​ Asynchronous Counters: Each Flip-Flop triggered by the previous
stage.
○​ Synchronous Counters: All Flip-Flops triggered simultaneously.

Finite State Machines (FSMs):

● Moore Machine: Outputs depend only on current state.
●​ Mealy Machine: Outputs depend on state and inputs.
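In a Moore machine the output is attached to the state, not the transition; a sketch of a hypothetical detector that outputs 1 after seeing two consecutive 1s (state names S0-S2 are invented for illustration):

```python
# Moore FSM: next-state table and a per-state output table.
TRANS = {("S0", 0): "S0", ("S0", 1): "S1",
         ("S1", 0): "S0", ("S1", 1): "S2",
         ("S2", 0): "S0", ("S2", 1): "S2"}
OUTPUT = {"S0": 0, "S1": 0, "S2": 1}  # output depends only on the state

def run_moore(bits, state="S0"):
    outs = []
    for b in bits:
        state = TRANS[(state, b)]  # take the transition
        outs.append(OUTPUT[state])  # emit the new state's output
    return outs

print(run_moore([1, 1, 0, 1, 1]))  # [0, 1, 0, 0, 1]
```

A Mealy version would index OUTPUT by (state, input) pairs instead, letting the output react within the same cycle as the input.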

L5b: Hardware Description Languages and Verilog

Verilog Basics:

● Module: Basic unit of design.
●​ Key Constructs:
○​ always block: Behavioral modeling.
○​ assign statement: Continuous assignment.

Structural vs. Behavioral Modeling:

● Structural: Focuses on connections between components.
●​ Behavioral: Describes functionality algorithmically.

L6b: Timing and Verification

Timing Concepts:

● Propagation Delay: Time taken for a change in input to reflect on output.
●​ Setup Time: Minimum time before a clock edge that data must be stable.
●​ Hold Time: Minimum time after a clock edge that data must remain stable.

Verification Techniques:

● Simulation: Test designs using various scenarios.
●​ Static Timing Analysis: Analyze timing paths to ensure no violations.

L7: Von Neumann Model & Instruction Set Architectures

Von Neumann Model:

●​ Definition: A computing model where instructions and data share the same
memory space and are executed sequentially.
●​ Key Components:
○​ CPU (Central Processing Unit): Fetches, decodes, and executes
instructions.
○​ Memory: Stores both program instructions and data.
○​ I/O Devices: Enable communication with external peripherals.
○​ Bus: Connects CPU, memory, and I/O devices for data transfer.
●​ Fetch-Decode-Execute Cycle:
○​ Fetch: Load the next instruction from memory into the CPU.
○​ Decode: Interpret the instruction and determine necessary operations.
○​ Execute: Perform the operation, possibly accessing memory or I/O.
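The fetch-decode-execute cycle over a single shared memory can be sketched as a toy interpreter (the opcodes, accumulator model, and memory layout are invented for illustration):

```python
# Toy von Neumann machine: one memory list holds instructions and data.
def run(memory):
    acc, pc = 0, 0                 # accumulator and program counter
    while True:
        op, arg = memory[pc]       # Fetch the next instruction
        pc += 1
        if op == "LOAD":           # Decode + Execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program in cells 0-3, data in cells 4-6: computes mem[6] = mem[4] + mem[5]
mem = run([("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0])
print(mem[6])  # 5
```

Because code and data share one memory, the same STORE instruction could in principle overwrite the program itself, which is a defining property (and hazard) of the von Neumann model.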

Instruction Set Architecture (ISA):

● Definition: The interface between hardware and software, defining how instructions are executed on a processor.
●​ Key Components:
○​ Registers: Small, fast storage inside the CPU.
○​ Instructions: Operations the CPU can execute (e.g., arithmetic, logic,
data movement).
○​ Addressing Modes: Methods for accessing data (e.g., direct, indirect).

L8: Instruction Set Architectures II

Instruction Types:

1. Arithmetic: Perform mathematical operations (e.g., ADD, SUB).
2.​ Logical: Perform bitwise operations (e.g., AND, OR, NOT).
3.​ Data Transfer: Move data between registers and memory (e.g., LOAD,
STORE).
4.​ Control: Direct program flow (e.g., JUMP, CALL, RETURN).

RISC vs. CISC:

● RISC (Reduced Instruction Set Computer):
○​ Simple instructions, executed in one clock cycle.
○​ Larger programs but faster execution.
●​ CISC (Complex Instruction Set Computer):
○​ Complex instructions, may take multiple cycles.
○​ Smaller programs but slower execution.

Addressing Modes:

1. Immediate: Operand is a constant within the instruction.
2.​ Register: Operand is in a register.
3.​ Direct: Operand is in memory at a specific address.
4.​ Indirect: Address points to another address containing the operand.
5.​ Indexed: Combines a base address with an offset.

L9a: ISA & Microarchitecture

Microarchitecture:

● Definition: The implementation of the ISA using hardware components.
●​ Key Components:
○​ ALU (Arithmetic Logic Unit): Executes arithmetic and logical
operations.
○​ Control Unit: Decodes instructions and controls data flow.
○​ Registers: Temporary storage for data and instructions.

Pipelining:
●​ Definition: A technique that divides instruction execution into stages,
allowing multiple instructions to be processed simultaneously.
●​ Stages:
○​ Fetch: Load the instruction from memory.
○​ Decode: Interpret the instruction.
○​ Execute: Perform the operation.
○​ Memory Access: Read/write data from/to memory.
○​ Write Back: Store the result in a register.
●​ Benefits: Increased throughput by executing multiple instructions in parallel.
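The throughput gain can be estimated with a simple timing model (idealized: equal stage lengths, no hazards or stalls):

```python
def unpipelined_time(n_instructions, n_stages, cycle_time):
    """Each instruction runs all stages to completion before the next starts."""
    return n_instructions * n_stages * cycle_time

def pipeline_time(n_instructions, n_stages, cycle_time):
    """The first instruction fills the pipeline (n_stages cycles);
    every later instruction completes one cycle after the previous one."""
    return (n_stages + n_instructions - 1) * cycle_time

# 100 instructions through a 5-stage pipeline, unit cycle time:
print(unpipelined_time(100, 5, 1))  # 500 cycles
print(pipeline_time(100, 5, 1))     # 104 cycles -> roughly 4.8x speedup
```

The speedup approaches the stage count as the instruction stream grows, which is why deeper pipelines raise peak throughput (until hazards intervene).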

L9b: Assembly Programming

Assembly Language Basics:

● Definition: A low-level programming language that maps closely to machine code.
●​ Instructions:
○​ MOV: Transfers data between registers or memory.
○​ ADD, SUB: Perform arithmetic operations.
○​ JMP: Jump to another instruction.

Registers and Memory Access:

● General-Purpose Registers: Small, fast storage areas (e.g., AX, BX in x86).
●​ Stack: Memory structure for storing temporary data (uses PUSH and POP).
●​ Memory Access: Uses addressing modes like immediate, direct, or indirect.

L21: Memory Organization & Technology

Types of Memory:

1. Volatile Memory:
a.​ Loses data when power is turned off.
b.​ Examples: RAM (Random Access Memory).
2.​ Non-Volatile Memory:
a.​ Retains data even without power.
b.​ Examples: ROM, Flash Memory.

SRAM vs. DRAM:

● SRAM (Static RAM):
○​ Faster and more expensive.
○​ Used in caches.
●​ DRAM (Dynamic RAM):
○​ Slower but cheaper.
○​ Used for main memory.

Memory Hierarchy:

1. Registers (fastest, smallest).
2.​ Cache (L1, L2, L3).
3.​ Main Memory (RAM).
4.​ Secondary Storage (hard drives, SSDs).

L22: Memory Hierarchy and Caches

Cache Basics:

●​ Definition: A small, fast memory that stores frequently used data for quick
access.
●​ Cache Hits and Misses:
○​ Hit: Data is found in the cache.
○​ Miss: Data is not in the cache, requiring access to slower memory.

Cache Mapping Techniques:

1. Direct Mapping: Each block maps to a specific cache location.
2.​ Associative Mapping: A block can map to any cache location.
3.​ Set-Associative Mapping: Combines direct and associative mapping.
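Direct mapping works by splitting the address into tag, line index, and block offset; a sketch with assumed geometry (16-byte blocks, 64 lines — illustrative, not from the source):

```python
def direct_map(address, block_size=16, num_lines=64):
    """Split an address into (tag, line index, block offset)
    for a direct-mapped cache with the given geometry."""
    offset = address % block_size                  # position within the block
    line = (address // block_size) % num_lines     # which cache line it maps to
    tag = address // (block_size * num_lines)      # identifies the block in that line
    return tag, line, offset

print(direct_map(0x1234))  # (4, 35, 4)
```

Two addresses with the same line index but different tags conflict, evicting each other even when the rest of the cache is empty; associative mapping exists to relax exactly this constraint.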

L23: Cache Design and Management

Cache Replacement Policies:

●​ Least Recently Used (LRU): Replaces the least recently accessed block.
●​ First In, First Out (FIFO): Replaces the oldest block.
●​ Random: Randomly selects a block for replacement.
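LRU can be sketched with an ordered dictionary (a behavioral model of the policy, not a hardware design):

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU model: a hit makes the block most-recent; a miss on a
    full cache evicts the least-recently-used block."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # hit: now most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = True
        return "miss"

c = LRUCache(2)
print([c.access(b) for b in ["A", "B", "A", "C", "B"]])
# ['miss', 'miss', 'hit', 'miss', 'miss']
```

Note how accessing "A" again saves it from eviction when "C" arrives; under FIFO, "A" would have been evicted instead because it entered first.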

Write Policies:

1. Write-Through: Writes data to both cache and memory immediately.
2.​ Write-Back: Writes data to memory only when replaced in the cache.

L24a: Advanced Caching

Multilevel Caches:

●​ Definition: Modern processors use multiple levels of cache (L1, L2, L3).
●​ Purpose: Reduce latency by organizing data closer to the CPU.

Prefetching:

● Definition: Predictively loading data into the cache before it is needed.
●​ Benefit: Reduces cache miss penalties.

L25b: Virtual Memory

Virtual Memory Basics:

● Definition: A memory management system that provides the illusion of a large, contiguous memory space.
●​ Concepts:
○​ Pages: Fixed-size blocks of memory.
○​ Page Tables: Map virtual addresses to physical addresses.

Page Replacement Algorithms:

1.​ FIFO (First In, First Out): Replaces the oldest page.
2.​ Optimal: Replaces the page that won’t be used for the longest time.
3.​ LRU (Least Recently Used): Replaces the least recently accessed page.
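FIFO replacement can be sketched as a page-fault counter (the reference string and frame count below are illustrative):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with a fixed frame count."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(queue.popleft())  # evict the oldest page
            queue.append(page)
            resident.add(page)
    return faults

print(fifo_faults([1, 2, 3, 1, 4, 1], 3))  # 5 faults
```

The final reference to page 1 faults even though 1 was recently used, because FIFO evicts by arrival order; LRU would keep it resident in that case.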

L26a: Virtual Memory II

Translation Lookaside Buffer (TLB):

● Definition: A cache for page table entries, speeding up virtual-to-physical address translation.
●​ How It Works: If a virtual address is in the TLB, the physical address is
accessed directly.

Segmentation and Paging:

● Segmentation: Divides memory into variable-sized segments based on logical divisions like code, data, stack.
●​ Paging: Divides memory into fixed-size pages for efficient management.
●​ Combined System: Uses segmentation for logical grouping and paging for
efficient physical storage.
