D386 Study Guide - Essentials

Suggested Study Guide - Essentials

Quizlet to go with this study guide: https://quizlet.com/920657736/hardware-and-operating-systems-essentials-d386-study-essentials-flash-cards/

CPU, ALU, CU, Registers, and Cache Memory


● The Central Processing Unit (CPU), often referred to as the brain of the computer, is a crucial
component responsible for executing instructions from computer programs.
● Key Features Across CPU Types
○ Clock Speed: Measured in GHz, indicates the number of cycles a CPU can perform per
second.
○ Core Count: More cores allow for better multitasking and parallel processing.
○ Threads: Hyper-threading or simultaneous multithreading (SMT) allows each core to
handle multiple threads.
○ Cache: On-chip memory that speeds up access to frequently used data.
○ TDP (Thermal Design Power): Indicates the amount of heat a CPU generates, affecting
cooling requirements.
● Key Components:
○ Arithmetic Logic Unit (ALU)
■ Function: Performs arithmetic and logical operations (addition, subtraction,
multiplication, division, AND, OR, NOT, etc.).
■ Importance: Essential for executing mathematical calculations and decision-
making processes.
○ Control Unit (CU)
■ Function: Directs the operations of the processor. It tells the computer's memory,
ALU, and I/O devices how to respond to the instructions that have been sent to
the processor.
■ Importance: Coordinates how data moves around the CPU and controls the flow
of data between the CPU and other components of the computer.
○ Registers
■ Function: Small, fast storage locations within the CPU that hold data and
instructions temporarily.
■ Importance: Provide quick access to frequently used data and instructions,
enhancing processing speed.
■ Types: Common registers include -
● Accumulator (ACC): The accumulator is a register that stores the results
of arithmetic and logical operations performed by the CPU's Arithmetic
Logic Unit (ALU). It holds the intermediate results of calculations during
program execution.
● Program Counter (PC): The program counter is a special register that
holds the memory address of the next instruction to be fetched and
executed by the CPU.
● Memory Address Register (MAR): The memory address register is a
register that holds the memory address of the data or instruction being
accessed or manipulated in the computer's memory.
● Memory Data Register (MDR): The memory data register is a register that
temporarily holds the data fetched from or to be written to the
computer's memory.
○ Cache Memory Levels
■ Function: Provides high-speed data access to the CPU, reducing the time needed
to fetch data from the main memory (RAM).
■ Importance: Larger and faster caches can significantly improve CPU
performance.
■ L1 Cache
● Proximity to CPU: Closest to the CPU cores.
● Speed: Fastest among all cache levels due to its proximity and high-
speed SRAM technology.
● Size: Smallest in size, typically ranging from 16KB to 128KB per core.
● Purpose: Primarily stores frequently accessed data and instructions to
reduce latency and improve the CPU's processing speed.
● Structure: Often split into two separate caches: one for instructions (L1i)
and one for data (L1d).
■ L2 Cache
● Proximity to CPU: Sits between the L1 cache and the main memory
(RAM).
● Speed: Slower than L1 but faster than L3 cache and main memory.
● Size: Larger than L1, typically ranging from 256KB to several megabytes
per core.
● Purpose: Acts as an intermediary store between L1 and L3, holding data
and instructions that are less frequently accessed than those in L1 but
more frequently than those in RAM.
● Structure: Can be either unified (storing both data and instructions) or
split, similar to L1.
■ L3 Cache
● Proximity to CPU: Shared among multiple CPU cores within the same
processor.
● Speed: Slower than L2 but faster than main memory.
● Size: Larger than L2, typically ranging from a few megabytes to tens of
megabytes, depending on the CPU architecture.
● Purpose: Provides a larger, shared cache that can store data and
instructions accessible by all cores, reducing the need to fetch from
slower main memory.
● Structure: Typically unified, storing both data and instructions.
○ Different Kinds of CPUs
■ Desktop CPUs
● General-Purpose CPUs: These CPUs are designed for everyday tasks
such as web browsing, office applications, and multimedia consumption.
Examples include Intel Core i3, i5, and AMD Ryzen 3, 5 series.
● High-Performance CPUs: Used for gaming, video editing, and other
intensive tasks. They have higher clock speeds, more cores/threads, and
larger cache sizes. Examples include Intel Core i7, i9, and AMD Ryzen 7,
9 series.
■ Server CPUs
● Enterprise CPUs: Designed for data centers and high-performance
computing environments. They offer high core counts, large caches, and
support for multi-processor configurations. Examples include Intel Xeon
and AMD EPYC processors.
● Blade Servers and Microservers: Used in compact and scalable server
environments. These CPUs are optimized for density and power
efficiency.
■ Mobile CPUs
● Smartphones and Tablets: CPUs in these devices prioritize power
efficiency and integrated graphics capabilities. Examples include Apple’s
A-series, Qualcomm Snapdragon, and Samsung Exynos processors.
● Laptops: Mobile CPUs for laptops balance performance and battery life.
Examples include Intel Core U and H series, AMD Ryzen Mobile series,
and Apple M1, M2 series.
■ Embedded CPUs
● Microcontrollers and System-on-Chip (SoC): Used in IoT devices,
automotive systems, and industrial applications. These CPUs are highly
integrated with peripherals and memory. Examples include ARM Cortex-
M, Cortex-A series, and Atmel AVR.
● Raspberry Pi and Similar Boards: Used for education, hobbyist projects,
and prototyping. They use ARM-based CPUs like the Broadcom BCM
series.
■ Workstation CPUs
● High-End Workstations: Designed for tasks like 3D rendering, scientific
simulations, and other professional applications. Examples include Intel
Xeon W and AMD Threadripper series.
■ Gaming Consoles
● Console CPUs: Customized for gaming, balancing CPU and GPU
performance. Examples include AMD custom processors used in
PlayStation and Xbox consoles.
■ Supercomputers
● High-Performance CPUs: These are used in supercomputing clusters,
featuring high core counts and support for parallel processing. Examples
include IBM POWER processors and Intel Xeon Phi.
○ CPU Related Issues
■ High CPU Usage
● Symptoms: Slow system performance, overheating, system lag, and
reduced battery life on laptops.
● Scenario: Running resource-intensive applications like video rendering
software or virtual machines can cause the CPU to run at high usage,
leading to overheating and potential throttling.
● Solution: Close unnecessary programs, optimize software, upgrade your
hardware or perform scheduled maintenance. You can also choose to
use more lightweight software alternatives.
■ Overheating CPU
● Symptoms: System shutdowns, reduced performance due to thermal
throttling, and potentially permanent damage to the CPU if overheating
persists.
● Scenario: A system with poor ventilation or a malfunctioning cooling
system may cause the CPU to overheat, triggering automatic shutdowns
to prevent damage.
● Solution: Improve cooling systems, optimize your CPU airflow and
monitor temperatures. You can use cooling pads under laptops and
reduce CPU voltage through BIOS/UEFI settings to decrease heat output.
■ CPU Bottleneck
● Symptoms: Inability to fully utilize high-end GPUs or other hardware
components, poor performance in games and applications despite
having powerful components.
● Scenario: Pairing a high-end GPU with an older, less powerful CPU can
create a bottleneck, preventing the GPU from performing at its full
potential.
● Solution: Ensure a balanced configuration of CPU, GPU, and RAM. Adjust
settings to reduce CPU load. Upgrade to a more powerful CPU if it is
significantly weaker than other components. Carefully overclock the CPU
to increase its performance. Minimize the number of background
processes.
■ CPU Failure
● Symptoms: System will not boot, frequent crashes, and unusual noises
from the computer.
● Scenario: A system that experiences power surges or electrical issues
might have a damaged CPU, resulting in an inability to boot or maintain
stable operation.
● Solution: Confirm the issue is with the CPU by testing other components
such as RAM, GPU, and motherboard. Check physical connections.
Inspect for physical damage and test the power supply. Reset and/or
update the BIOS/UEFI.
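
A minimal monitoring sketch for the CPU issues above, assuming the third-party psutil package is installed (pip install psutil); the one-second sampling interval is arbitrary:

# Minimal sketch: observing core counts and CPU load with psutil.
import psutil

# Logical cores include SMT/hyper-threaded threads; physical cores do not.
print("Physical cores:", psutil.cpu_count(logical=False))
print("Logical cores: ", psutil.cpu_count(logical=True))

# Overall utilization sampled over one second.
print("Total CPU usage:", psutil.cpu_percent(interval=1), "%")

# Per-core utilization, useful for spotting a single runaway process/thread.
for core, usage in enumerate(psutil.cpu_percent(interval=1, percpu=True)):
    print(f"Core {core}: {usage}%")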

GPU
● A GPU, or Graphics Processing Unit, is a specialized electronic circuit designed to accelerate the
processing of images and videos. It is particularly efficient at handling tasks that involve parallel
processing, where many calculations can be performed simultaneously.
● Types of GPU:
○ Integrated GPUs: These are built into the CPU and share the system's RAM. They are
sufficient for basic tasks like web browsing, office applications, and some light gaming.
○ Dedicated GPUs: These are separate cards installed in the system, with their own VRAM,
and are much more powerful, suitable for demanding tasks.
● Use-cases for GPU:
○ Gaming: High-performance GPUs are essential for rendering complex graphics in real-
time, providing smooth frame rates and high-resolution textures in modern video games.
○ Video Editing and Rendering: Tasks such as video editing, 3D rendering, and animation
rely heavily on GPU power to handle intensive processing tasks efficiently, reducing
render times significantly.
○ Scientific Computing: GPUs are used in scientific research for simulations, data analysis,
and computational tasks that require massive parallel processing, such as climate
modeling, molecular dynamics, and astrophysics.
○ Machine Learning and AI: Training deep learning models involves processing large
datasets and performing numerous computations. GPUs speed up this process
significantly due to their parallel processing capabilities.
○ Professional Applications: Professionals in fields such as architecture, engineering, and
design use GPUs for CAD (Computer-Aided Design) applications, which require rendering
detailed models and simulations.
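
A quick, hedged way to check whether a system has a dedicated NVIDIA GPU is to call the nvidia-smi utility that ships with NVIDIA's drivers; the sketch below assumes that tool is on the PATH and will simply report that nothing was found on systems with only integrated or non-NVIDIA graphics:

# Minimal sketch: detecting a dedicated NVIDIA GPU via the nvidia-smi CLI.
import shutil
import subprocess

if shutil.which("nvidia-smi") is None:
    print("nvidia-smi not found - no NVIDIA dedicated GPU/driver detected.")
else:
    # `nvidia-smi -L` prints one line per detected GPU.
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    print(result.stdout.strip())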

Pipelining vs. Multithreading vs. Multitasking


○ Pipelining (CPU function) is a hardware technique to increase the instruction throughput
of a CPU by overlapping instruction execution stages.
■ Stages of Pipelining
● Fetch: Retrieving the instruction from memory.
● Decode: Interpreting the instruction and preparing the necessary control
signals.
● Execute: Performing the operation specified by the instruction (e.g.,
arithmetic or logic operations).
● Memory Access: Reading from or writing to memory, if required by the
instruction.
● Write Back: Writing the result back to the register file.
○ Multithreading (Application function) involves executing multiple threads within a single
process concurrently, improving CPU utilization and performance for multithreaded
applications.
■ Thread: A thread is the smallest unit of execution within a process. A process
can contain multiple threads, each running independently but sharing the same
resources, such as memory and file handles.
■ Improved Performance: By allowing multiple threads to run simultaneously,
multithreading can significantly improve the performance of applications,
especially on multi-core processors.
■ Resource Sharing: Threads within the same process share resources like
memory and data, which can lead to more efficient use of resources.
■ Responsiveness: Multithreading can make applications more responsive. For
example, a GUI application can run a background task in a separate thread,
keeping the user interface responsive.
■ Parallelism: It enables parallelism by executing multiple threads simultaneously,
which can be particularly beneficial in computationally intensive tasks.
○ Multitasking (OS function) is an OS-level feature that allows multiple processes to run
concurrently, providing the ability to run multiple applications simultaneously.
■ Preemptive Multitasking
● Definition: The OS determines when a process should pause to allow
another process to execute. It allocates time slices to each process.
● Example: Modern operating systems like Windows, macOS, and Linux
use preemptive multitasking.
● Advantages: Ensures fair distribution of CPU time, prevents any single
process from monopolizing the CPU, improves responsiveness.
■ Cooperative Multitasking
● Definition: Each process voluntarily yields control to allow other
processes to run. Processes must be designed to provide time to the OS.
● Example: Early versions of Windows (like Windows 3.x) and classic Mac
OS used cooperative multitasking.
● Disadvantages: If a process does not yield control, it can lead to system
unresponsiveness or crashes.
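
The sketch below illustrates the multithreading idea described above: several threads run concurrently inside one process and share the same memory (the results dictionary). It uses only the Python standard library; the sleep call stands in for I/O work:

# Minimal sketch: threads within one process sharing memory.
from concurrent.futures import ThreadPoolExecutor
import threading
import time

results = {}  # shared by every thread in this process

def worker(task_id):
    time.sleep(0.5)  # stand-in for I/O (disk, network, etc.)
    results[task_id] = f"handled by {threading.current_thread().name}"

with ThreadPoolExecutor(max_workers=4) as pool:
    pool.map(worker, range(8))

for task_id, info in sorted(results.items()):
    print(task_id, info)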

RAM
● RAM (Random Access Memory): Stores data and instructions that the CPU needs quickly.
○ ARAM (Asynchronous RAM)
■ Asynchronous RAM (ARAM) is a type of RAM where memory operations are not
synchronized to a clock signal.
■ Each operation (read or write) is initiated independently and can occur at any
time without needing to wait for a clock edge.
■ Different Types:
● DRAM (Dynamic Random Access Memory)
○ Must be refreshed periodically to preserve the stored data
○ Use: General purpose memory for computers and other devices
○ Pins: Varies; early DRAM used DIP packages with 16 or 18 pins
● SRAM (Static Random Access Memory)
○ The data is not stored as charge on a capacitor, but in a pair of
cross-coupled inverters, so SRAM does not need to be refreshed.
○ Use: Cache memory in CPUs, small amounts of fast memory
○ Pins: Varies
○ SDRAM (Synchronous DRAM)
■ A type of computer memory that is synchronized with the system bus, allowing
for faster data transfer rates compared to earlier types of random-access
memory (RAM)
■ Use: Main memory in older computers and graphics cards
■ Different Types:
● SDR (Single Data Rate)
○ Pins: 168
○ Data Rate: One per clock cycle
○ Usage: Older desktop and laptop computers
● DDR (Double Data Rate)
○ Pins: 184
○ Data Rate: Twice per clock cycle
○ Usage: Early 2000s desktop and laptop computers
● DDR2
○ Pins: 240
○ Data Rate: Twice per clock cycle, but with improved speed and
efficiency over DDR
○ Latency: Lower latency compared to DDR
○ Usage: Mid-2000’s desktop and laptop computers
● DDR3
○ Pins: 240
○ Data Rate: Twice per clock cycle with further improvements over
DDR2
○ Latency: Improved latency and power efficiency compared to
DDR2
○ Usage: Late 2000’s to mid-2010’s desktop, laptop computers,
and servers
● DDR4
○ Pins: 288
○ Data Rate: Twice per clock cycle with further enhancements
○ Latency: Improved latency and power efficiency compared to
DDR3
○ Usage: Mid-2010’s to present day desktop, laptop computers,
and servers
● DDR5
○ Pins: 288
○ Data Rate: Twice per clock cycle with significant improvements
○ Latency: Further improvements in latency, power efficiency, and
increased density compared to DDR4
○ Usage: High-performance computing, modern desktop, laptop
computers, and servers
● Virtual RAM: also known as virtual memory, is a memory management technique used by
operating systems to extend the apparent amount of RAM available to applications. This is done
by using a portion of a computer's storage (such as an SSD or HDD) to simulate additional RAM.
○ Paging: The operating system divides physical memory and virtual memory into small
fixed-sized blocks called pages. When the system runs out of physical RAM, it can swap
inactive pages to the storage device, freeing up RAM for active processes.
○ Pagefile/Swap Space: On Windows systems, this is often referred to as a pagefile, while
on Unix-like systems, it is called swap space. This file or partition on the storage device is
used to store pages that are moved out of physical RAM.
○ Address Translation: The CPU uses a memory management unit (MMU) to translate
virtual addresses (used by programs) into physical addresses (used by the hardware).
This allows applications to use more memory than is physically available.
● RAM Related Issues
○ Insufficient RAM:
■ Symptoms: Slow performance, frequent freezing, slow response times when
opening applications, and excessive hard drive activity due to increased
paging/swap file usage.
■ Scenario: Running multiple applications simultaneously on a computer with 4GB
of RAM can lead to insufficient memory, causing the system to slow down and
frequently use the hard drive for virtual memory.
■ Solution: Upgrade RAM to handle the memory demands of multiple design
applications.
○ Faulty RAM
■ Symptoms: Random system crashes, blue screens of death (BSODs), data
corruption, and errors during software installation or while running programs.
■ Scenario: A computer with a damaged RAM module may experience frequent
crashes and errors, particularly when performing memory-intensive tasks like
video editing or gaming.
■ Solution: Swap or Replace Faulty RAM. If you have multiple RAM modules, test
each one individually by removing all but one and booting the computer. If a
specific module is identified as faulty, replacing it is the most effective solution.
○ Incompatible RAM
■ Symptoms: System fails to boot, constant reboots, or the BIOS does not
recognize the installed RAM.
■ Scenario: Upgrading an old motherboard with new, higher-speed RAM without
checking compatibility can lead to the system failing to boot or operating
incorrectly.
■ Solution: Make sure the RAM is installed correctly and there is no physical
damage. Replace the RAM if it is damaged or not compatible with the
motherboard specifications.
○ Memory Leaks
■ Symptoms: Gradual decrease in available memory, system slows down over
time, and applications may crash or freeze.
■ Scenario: Running poorly designed software that does not properly manage
memory allocation and deallocation can cause a memory leak, leading to a
gradual slowdown and eventual freezing of the system.
■ Solution: Look for improper memory allocation and deallocation patterns.
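
To make the paging and address-translation ideas above concrete, here is a toy sketch; the page size is a common default, and the page-table contents are made up (real translation is done in hardware by the MMU):

# Minimal sketch: splitting a virtual address into page number + offset and
# mapping it through a toy page table.
PAGE_SIZE = 4096  # 4 KiB pages

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_address):
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page_number not in page_table:
        # In a real OS this triggers a page fault and the page is brought in
        # from the pagefile/swap space.
        raise LookupError("page fault: page not resident in physical RAM")
    frame = page_table[page_number]
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))  # virtual page 1, offset 0xABC -> 0x3ABC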

ROM
● ROM, or Read-Only Memory, is a type of non-volatile memory used in computers and other
electronic devices.
● Unlike RAM (Random Access Memory), which is volatile and loses its data when the power is
turned off, ROM retains its data even when the device is powered down. This makes ROM ideal
for storing firmware, which is software that is closely tied to specific hardware and unlikely to
need frequent updates.
● Types of ROM:
○ Masked ROM (MROM): This is the original form of ROM. Data is written during the
manufacturing process and cannot be altered. It's used for simple, unchangeable
firmware.
○ Programmable ROM (PROM): This type of ROM can be written once after manufacturing.
The data is programmed using a special device called a PROM programmer.
○ Erasable Programmable ROM (EPROM): EPROM can be erased by exposing it to
ultraviolet light and then reprogrammed. It allows for the modification of data if
necessary, but the process is not simple.
○ Electrically Erasable Programmable ROM (EEPROM): EEPROM can be erased and
reprogrammed using electrical charge, making it more flexible than EPROM. It allows for
byte-level modification and is commonly used for firmware updates.
○ Flash Memory: A type of EEPROM that can be erased and reprogrammed in blocks. It's
widely used in USB drives, SSDs, and memory cards due to its ability to retain data
without power and support frequent rewrites.

RAM VS. ROM


● Volatility: RAM is volatile, ROM is non-volatile.
● Usage: RAM is used for temporary storage of data in use, ROM is used for permanent storage of
firmware.
● Read/Write: RAM can be read and written to easily, ROM is primarily read-only.
● Speed: RAM is faster than ROM.
● Capacity: RAM typically has larger capacities compared to ROM.

The Motherboard and its Components


● Motherboard: A motherboard, also known as a mainboard, is a crucial component in any
computer system. It serves as the central hub that connects various hardware components and
allows them to communicate with each other.
● Main Components:
○ Chipset: The chipset is a set of integrated circuits that manage data flow between the
CPU, memory, storage devices, and peripherals. It includes the Northbridge and
Southbridge (or similar components in modern architectures).
■ Northbridge Components:
● Responsible for high-speed peripherals
● CPU Socket: This is where the Central Processing Unit (CPU) is installed.
The socket type (e.g., LGA1200, AM4) determines which CPUs are
compatible with the motherboard.
● RAM Slots: These slots hold the Random Access Memory (RAM)
modules, providing temporary storage for data that the CPU needs to
access quickly. The number and type of RAM slots vary by motherboard.
● Expansion Slots: These slots allow you to install expansion cards such
as graphics cards (PCIe), sound cards, network cards, and storage
controllers. Common slot types include PCIe x16, PCIe x1, and legacy
slots like PCI.
● Integrated Graphics (if present): Graphics processing unit integrated into
the motherboard.
■ Southbridge Components:
● Responsible for slower peripherals
● Storage Interfaces: Motherboards come with various storage interfaces
for connecting hard drives (HDDs), solid-state drives (SSDs), and optical
drives. SATA (Serial ATA) and M.2 are common interfaces for modern
storage devices.
● BIOS/UEFI Chip: This chip contains the Basic Input/Output System
(BIOS) or Unified Extensible Firmware Interface (UEFI), which provides
the firmware interface for initializing hardware during the boot process
and configuring system settings.
● I/O Ports: These ports are located on the rear I/O panel of the
motherboard and include USB ports, audio jacks, Ethernet ports, video
outputs (if the motherboard has integrated graphics), and other
connectors for peripherals.
● Fan Headers: Motherboards have headers for connecting CPU fans, case
fans, and other cooling devices to regulate system temperature.
● CMOS Battery: This small battery powers the CMOS (Complementary
Metal-Oxide-Semiconductor) memory, which stores BIOS/UEFI settings
and system configuration even when the computer is powered off.
○ Power Connectors: These connectors supply power to the motherboard from the power
supply unit (PSU). They include the main 24-pin ATX power connector, 8-pin EPS
connector for CPU power, and additional connectors for peripherals.
○ Connectors and Headers: Various connectors and headers on the motherboard include
front panel connectors (for power button, reset button, LEDs), USB headers, audio
headers, and headers for additional features like RGB lighting.
○ Network Interface Controller (NIC): Many motherboards have integrated NICs for
Ethernet connectivity, while some high-end boards may include Wi-Fi and Bluetooth
controllers.
● Motherboard Form Factors:
○ ATX (Advanced Technology eXtended):
■ Size: Standard ATX motherboards measure around 12 x 9.6 inches (305 x 244
mm).
■ Features: ATX boards offer plenty of space for components and expansion slots,
including multiple PCIe slots, RAM slots, SATA ports, and other connectors. They
are commonly used in full-sized desktop PCs and gaming rigs where
expandability and performance are priorities.
○ MicroATX (mATX):
■ Size: MicroATX motherboards are smaller than ATX, typically around 9.6 x 9.6
inches (244 x 244 mm).
■ Features: Despite their smaller size, mATX boards still offer decent expandability
with fewer PCIe slots and RAM slots compared to ATX. They are suitable for
compact desktops or budget-friendly builds that don't require extensive
expansion options.
○ Mini-ITX:
■ Size: Mini-ITX is the smallest standard form factor, measuring about 6.7 x 6.7
inches (170 x 170 mm).
■ Features: Mini-ITX boards have limited space for components but can still
accommodate a single PCIe slot, one or two RAM slots, and a few SATA ports.
They are popular for compact and portable builds such as small form factor
(SFF) PCs, HTPCs (Home Theater PCs), and gaming consoles.
○ Extended ATX (E-ATX):
■ Size: E-ATX motherboards are larger than standard ATX, often around 12 x 13
inches (305 x 330 mm) or larger.
■ Features: E-ATX boards offer more room for additional components and
features, including extra PCIe slots, RAM slots, and connectivity options. They
are commonly used in high-end desktops, workstations, and gaming setups that
require maximum performance and expandability.
○ Other Form Factors:
■ MicroBTX and FlexATX: These are less common form factors that have specific
size and layout characteristics, but they are not as widely used as ATX, mATX,
and Mini-ITX.
■ Custom Form Factors: Some manufacturers design custom form factor
motherboards for specialized applications, industrial use, or proprietary systems.

HDD, SSD, NAS Storage


● Hard Disk Drive (HDD)
○ Description: Uses spinning magnetic disks (platters) to store data. A read/write head
moves over the surface of these platters to access or write data.
○ Advantages:
■ High storage capacity at a lower cost per gigabyte.
■ Suitable for storing large amounts of data that don't require high-speed access.
○ Disadvantages:
■ Slower read/write speeds compared to SSDs.
■ More susceptible to physical shock and damage.
■ Generates more heat and noise due to moving parts.
● Solid-State Drive (SSD)
○ Description: Uses NAND flash memory to store data, providing faster data access and
retrieval than HDDs.
○ Advantages:
■ Faster read/write speeds.
■ More durable as there are no moving parts.
■ Lower power consumption and heat generation.
○ Disadvantages:
■ Higher cost per gigabyte compared to HDDs.
■ Limited write cycles, though modern SSDs have improved significantly in this
aspect.
● Network Attached Storage (NAS)
○ Description: A storage device connected to a network, allowing multiple users and
devices to access and share data.
○ Advantages:
■ Centralized storage for easy file sharing and collaboration.
■ Can be configured with RAID for data redundancy and protection.
■ Useful for backups and media streaming.
○ Disadvantages:
■ Network dependency, so performance can be affected by network speed and
reliability.
■ Initial setup and configuration can be complex for non-technical users.
● Other Storage Options
○ Cloud Storage
■ Description: Data is stored on remote servers accessed via the internet, managed
by cloud service providers.
■ Advantages:
● Accessible from anywhere with an internet connection.
● Scalable, with options to increase storage capacity as needed.
● Reduces the need for physical storage hardware.
■ Disadvantages:
● Ongoing subscription costs.
● Dependent on internet access and speed.
● Potential security and privacy concerns.
○ Hybrid Drive (SSHD)
■ Description: Combines an HDD with a small amount of SSD storage to improve
performance.
■ Advantages:
● Faster than traditional HDDs due to the SSD cache.
● More affordable than full SSDs while offering larger storage capacity.
■ Disadvantages:
● Performance improvement is limited compared to a full SSD.
● More complex technology could lead to higher failure rates.
○ External Hard Drive
■ Description: Portable HDD or SSD connected to a computer via USB, Thunderbolt,
or other interfaces.
■ Advantages:
● Easy to use and transport.
● Provides additional storage without opening up the computer case.
■ Disadvantages:
● Can be lost or damaged more easily due to portability.
● Typically slower than internal drives, especially for HDDs.
○ Tape Storage
■ Description: Uses magnetic tape for data storage, primarily used for archival
purposes.
■ Advantages:
● Extremely high capacity and cost-effective for long-term storage.
● Reliable for archiving data with a long shelf life.
■ Disadvantages:
● Slow access time as tapes need to be sequentially read.
● Requires specialized equipment for reading and writing data.
○ Storage Area Network (SAN)
■ Description: A high-speed network that provides block-level storage to multiple
servers.
■ Advantages:
● High performance and scalability.
● Centralized storage management and data protection features.
■ Disadvantages:
● Expensive and complex to set up and maintain.
● Requires specialized knowledge and equipment.

RAID
● RAID 0 (Striping)
○ Great for speed, but provides NO data redundancy. Has no loss of space on the disks.
○ Minimum Disks: 2
○ Explanation: RAID 0 requires a min of two disks because it stripes data across multiple
disks without any redundancy. This configuration is used primarily for performance
enhancement, as data is split and written simultaneously across multiple disks,
improving read/write speeds.
○ Use Case: Video editing, gaming, or any application requiring high-speed data access.
■ Reason: RAID 0 splits data across multiple disks, allowing for faster read and
write speeds. However, it does not provide redundancy, so if one disk fails, all
data is lost.
● RAID 1 (Mirroring and Redundancy)
○ Provides full redundancy, but usable capacity drops to that of a single disk (half the
total with two disks). "Failure Resistant" "Fault Tolerant"
○ Minimum Disks: 2
○ Explanation: RAID 1 also requires a minimum of two disks. In RAID 1, data is mirrored
across two or more disks, providing redundancy. If one disk fails, the data can still be
accessed from the mirrored disk(s). RAID 1 sacrifices storage capacity for data
protection.
○ Use Case: Small businesses or home users needing data redundancy.
■ Reason: RAID 1 duplicates the same data on two or more disks. This provides
high availability since data is still accessible even if one disk fails, but it does not
improve performance or storage efficiency.
● RAID 5 (Striping and Redundancy through Parity)
○ Offers a balance between performance, storage efficiency, and fault tolerance. One of the
most common RAIDs you will see. "Failure Resistant" "Fault Tolerant"
○ Minimum Disks: 3
○ Explanation: RAID 5 requires a min of three disks. It uses striping for performance and
distributes parity across all disks in the array for redundancy. If one disk fails, the parity
information allows for data reconstruction. RAID 5 offers a balance between
performance, storage efficiency, and fault tolerance.
○ Use Case: File servers, database servers, and application servers.
■ Reason: RAID 5 combines striping with parity, providing a balance of improved
read performance, storage efficiency, and redundancy. It requires a minimum of
three disks and can tolerate the failure of one disk without data loss.
● RAID 6 (Striping and Redundancy through Double Parity)
○ Similar to RAID 5, but with double parity. Provides an additional level of fault tolerance by
using dual parity. "Fault Tolerant"
○ Minimum Disks: 4
○ Explanation: RAID 6 needs a min of four disks. It is similar to RAID 5 but provides an
additional level of fault tolerance by using dual parity. This means it can withstand the
simultaneous failure of up to two disks without data loss. RAID 6 is suitable for
applications that require high fault tolerance.
○ Use Case: Enterprise environments requiring high fault tolerance, such as data
warehouses or large-scale storage systems.
■ Reason: RAID 6 is similar to RAID 5 but with an additional parity block, allowing it
to tolerate the failure of two disks. This offers greater data protection at the cost
of more storage space used for parity.
● RAID 10 (1+0, Mirroring + Striping)
○ Two RAID arrays used together. Really good for both speed and redundancy, but you have
to use 4 disks to accomplish this. "Disaster Tolerant"
○ Minimum Disks: 4
○ Explanation: RAID 10 combines mirroring (RAID 1) and striping (RAID 0). It requires a min
of four disks arranged in pairs, where each pair is mirrored, and data is striped across
these mirrored pairs. RAID 10 provides both redundancy (through mirroring) and
performance benefits (through striping).
○ Use Case: High-performance databases and applications requiring both high availability
and performance.
■ Reason: RAID 10 combines the benefits of RAID 0 and RAID 1 by striping data
across mirrored pairs of disks. This provides high redundancy and performance,
but it requires a minimum of four disks and is less storage-efficient.
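
The parity idea behind RAID 5 and RAID 6 can be sketched with XOR: the parity block is the XOR of the data blocks, so any single missing block can be rebuilt from the survivors. This is only an illustration with tiny made-up "blocks"; real controllers rotate parity across disks and work on large stripes:

# Minimal sketch: RAID 5-style parity and reconstruction via XOR.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

disk1, disk2, disk3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([disk1, disk2, disk3])       # stored on the parity disk

# Simulate losing disk2, then rebuild it from the survivors plus parity.
rebuilt = xor_blocks([disk1, disk3, parity])
assert rebuilt == disk2
print("rebuilt block:", rebuilt)
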
IaaS, PaaS, SaaS, and DaaS.
● Cloud Computing Models
○ IaaS (Infrastructure as a Service): Provides virtualized computing resources over the
internet.
■ Software Examples: Amazon Web Services (AWS), Microsoft Azure, Google
Cloud Platform (GCP), IBM Cloud.
■ System Examples: Virtual machines (VMs), storage services, networking
resources, and other infrastructure components provided by IaaS providers.
○ PaaS (Platform as a Service): Offers platforms and tools for application development and
deployment.
■ Software Examples: Heroku, Google App Engine, Microsoft Azure App Service,
AWS Elastic Beanstalk.
■ System Examples: Development frameworks, database management systems,
application hosting environments, and other tools and services for developers to
build, deploy, and manage applications without managing the underlying
infrastructure.
○ SaaS (Software as a Service): Delivers software applications over the internet on a
subscription basis.
■ Software Examples: Salesforce, Google Workspace (formerly G Suite), Microsoft
Office 365, Dropbox.
■ System Examples: Fully functional software applications delivered over the
internet on a subscription basis, allowing users to access and use the software
without needing to install or manage it locally.
○ DaaS (Desktop as a Service): Virtual desktop infrastructure provided over the internet.
■ Software Examples: Citrix Virtual Apps and Desktops, VMware Horizon Cloud,
Microsoft Windows Virtual Desktop.
■ System Examples: Virtual desktop infrastructure (VDI) solutions that provide
users with virtual desktop environments hosted and managed in the cloud,
allowing access to desktops and applications from any device with an internet
connection.

Tech Stacks
● LAMP Stack
○ Components
■ Linux: Operating system
■ Apache: Web server
■ MySQL: Database management system
■ PHP: Server-side scripting language
○ Common Use: Traditional web development stack, popular for PHP-based applications.
○ Advantages:
■ Open Source: All components are open source, which makes it cost-effective.
■ Security: Linux and Apache are known for their robust security features.
■ Stability and Reliability: Linux and Apache are mature technologies with a strong
track record of stability and reliability.
■ Community Support: Extensive community support and documentation are
available.
■ Flexibility: Works well for a variety of web applications.
○ Disadvantages:
■ Learning Curve: May have a steeper learning curve, especially for those not
familiar with Linux.
■ Performance: PHP is generally slower compared to some modern server-side
languages.
■ Configuration: Requires manual configuration and tuning for optimal
performance.
● WAMP Stack
○ Components
■ Windows: Operating system
■ Apache: Web server
■ MySQL: Database management system
■ PHP: Server-side scripting language
○ Common Use: Similar to LAMP but used in Windows environments.
○ Advantages:
■ Ease of Use: Easier to set up and manage on Windows systems.
■ Development Tools: Compatible with various Windows-based development tools.
■ Flexibility: Similar to LAMP but easier for developers familiar with Windows.
○ Disadvantages:
■ Cost: Windows operating system can be expensive compared to Linux.
■ Performance: May not perform as well as LAMP in high-load scenarios.
■ Security: Windows may have more vulnerabilities compared to Linux.
● MEAN Stack
○ Components
■ MongoDB: NoSQL database
■ Express.js: Web application framework for Node.js
■ AngularJS or Angular: Frontend JavaScript framework
■ Node.js: JavaScript runtime environment
○ Common Use: Modern web development stack using JavaScript across the entire
application.
○ Advantages:
■ JavaScript Everywhere: Single language (JavaScript) across the entire stack
simplifies development.
■ Real-Time Applications: Well-suited for real-time applications due to Node.js’s
non-blocking architecture.
■ Performance: Node.js offers high performance and scalability.
■ NoSQL Database: MongoDB is flexible and handles large volumes of data well.
○ Disadvantages:
■ Complexity: Can be complex to set up and manage, especially for beginners.
■ Security: JavaScript has inherent security risks that need careful management.
■ Community: While growing, the community is not as large or mature as LAMP’s.
● MEVN Stack
○ Components
■ MongoDB: NoSQL database
■ Express.js: Web application framework for Node.js
■ Vue.js: Frontend JavaScript framework
■ Node.js: JavaScript runtime environment
○ Common Use: Similar to MEAN but uses Vue.js for the frontend instead of Angular. More
similar to MEAN than MERN.
○ Advantages:
■ JavaScript Everywhere: Consistent language across the stack.
■ Vue.js: Vue.js is lightweight, flexible, and easy to learn.
■ Performance: Node.js offers excellent performance and scalability.
■ NoSQL Database: MongoDB provides flexibility and handles large volumes of
data well.
○ Disadvantages:
■ Community: Vue.js has a smaller community compared to Angular and React.
■ Maturity: Vue.js is relatively newer and may have fewer resources compared to
Angular or React.
■ Security: Similar to other JavaScript-based stacks, security needs careful
management.
● MERN Stack
○ Components
■ MongoDB: NoSQL database
■ Express.js: Web application framework for Node.js
■ React: Frontend JavaScript library
■ Node.js: JavaScript runtime environment
○ Common Use: Similar to MEAN but uses React for the frontend instead of Angular.
○ Advantages:
■ JavaScript Everywhere: Single language for the entire stack.
■ React: React offers a powerful and flexible way to build user interfaces.
■ Performance: Node.js ensures high performance and scalability.
■ NoSQL Database: MongoDB provides flexibility and handles large data volumes
effectively.
○ Disadvantages:
■ Complexity: Can be complex to manage, especially for newcomers.
■ Learning Curve: React has a steep learning curve compared to some other
frontend frameworks.
■ Security: Like other JavaScript stacks, security needs careful handling.

Programming Languages and Frameworks


● Frontend Languages
○ HTML (Hypertext Markup Language): Used for creating the structure of web pages.
○ CSS (Cascading Style Sheets): Used for styling the appearance of web pages.
○ JavaScript: A scripting language used for adding interactivity and dynamic content to
web pages.
■ TypeScript: A superset of JavaScript that adds static typing and other features.
■ React: A JavaScript library for building user interfaces.
■ Angular: A TypeScript-based framework for building web applications.
■ Vue.js: A progressive JavaScript framework for building user interfaces.
○ Bootstrap: A front-end framework for designing responsive and mobile-first websites.
○ Sass (Syntactically Awesome Style Sheets): A CSS preprocessor that adds features like
variables, mixins, and nesting.
● Backend Languages
○ PHP: A server-side scripting language commonly used for web development.
○ Python: A versatile language used for web development, data science, automation, and
more.
■ Django: A high-level Python web framework that encourages rapid development
and clean, pragmatic design.
○ Ruby: A dynamic, object-oriented language often used with the Ruby on Rails framework
for web development.
○ Java: A widely-used language for building enterprise-level applications, including web
applications.
■ Spring: A powerful Java application framework for building enterprise-level
applications.
○ C#: A language developed by Microsoft, commonly used for building Windows
applications and web services.
○ Node.js (JavaScript): While JavaScript is primarily a frontend language, Node.js allows it
to be used for server-side development.
○ Go (Golang): A language developed by Google known for its simplicity, concurrency
support, and performance.
○ Scala: A language that combines object-oriented and functional programming paradigms,
often used in big data processing with frameworks like Apache Spark.
○ Swift: Developed by Apple, used for iOS, macOS, watchOS, and tvOS app development.
○ Kotlin: A modern language used for Android app development, also interoperable with
Java.

IDEs

Each entry gives the IDE name, the code/languages used, and its key characteristics.

● IntelliJ IDEA: Java. Most user friendly.
● Visual Studio: Anything from HTML to C++. Used to develop websites, web apps, and mobile apps.
● Eclipse: Python, Java, C++. Extensible plugin environment with drag-and-drop support.
● PyCharm: Python. Developed by JetBrains exclusively for Python.
● Atom: Web development (HTML, CSS, JavaScript). Highly customizable, but has a high start-up time.
● NetBeans: Java. Allows apps to be developed from components called modules.
● AWS Cloud9: C, C++, Python, JavaScript, etc. Developed by Amazon; cloud-based for collaboration.
● Code::Blocks: C++. Best IDE for C and C++.
● Jupyter Notebook: Python, but supports other languages as well. Not actually a full-fledged IDE; an open-source web-based application.

CouchDB vs. MongoDB.


Both databases are NoSQL solutions that excel in handling unstructured or semi-structured data.
Choosing between CouchDB and MongoDB often depends on specific use case requirements, scalability
needs, and developer familiarity with the respective technologies.
● Data Model:
○ CouchDB: Uses a document-oriented data model where data is stored as JSON
documents. Documents in CouchDB are grouped into databases rather than collections.
○ MongoDB: Also uses a document-oriented data model with JSON-like documents stored
in collections.
● Query Language:
○ CouchDB: Uses MapReduce for querying data.
○ MongoDB: Uses a flexible query language similar to SQL but tailored for document
databases.
● Replication:
○ CouchDB: Provides a built-in replication protocol for syncing data between nodes.
○ MongoDB: Also supports replication but requires additional configuration for setting up
replica sets.
● Consistency:
○ CouchDB: Emphasizes eventual consistency, allowing for easier scaling but potentially
leading to some data latency.
○ MongoDB: Offers strong consistency by default, ensuring that data is consistent across
replicas.
● Architecture:
○ CouchDB: Designed as a single-node database but can be clustered for high availability.
○ MongoDB: Designed for distributed clusters from the start, providing better scalability
options out of the box.
● Community and Ecosystem:
○ CouchDB: Has a smaller but dedicated community with a focus on decentralized data
storage.
○ MongoDB: Has a larger community and a rich ecosystem with extensive third-party tools
and integrations.
● Use Cases:
○ CouchDB: Well-suited for applications requiring offline sync and decentralized data
storage, such as mobile apps or edge computing scenarios.
○ MongoDB: Commonly used for a wide range of applications including content
management systems, e-commerce platforms, and real-time analytics.
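
As a small illustration of MongoDB's document model and query language, the sketch below uses the pymongo driver; it assumes pymongo is installed and a MongoDB server is listening on localhost:27017, and the database, collection, and documents are made up:

# Minimal sketch: storing and querying JSON-like documents with pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
products = client["demo_db"]["products"]          # database -> collection

products.insert_one({"name": "SSD", "type": "storage", "price": 89.99})
products.insert_one({"name": "DDR4 16GB", "type": "memory", "price": 45.50})

# JSON-like query: every document whose price is greater than 50.
for doc in products.find({"price": {"$gt": 50}}):
    print(doc["name"], doc["price"])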

Compiled vs. Interpreted Programming Languages


● Interpreted Languages:
○ Python
○ JavaScript
○ Ruby
○ PHP
○ Perl
○ Shell Scripting languages (Bash, PowerShell, etc.)
○ Lisp
○ Tcl
● Compiled Languages:
○ C
○ C++
○ Java (Java is compiled to bytecode, which is then interpreted by the Java Virtual
Machine, making it a combination of compiled and interpreted)
○ C#
○ Swift
○ Objective-C
○ Rust
○ Go (Golang)
○ Fortran
○ Ada
○ Pascal
○ COBOL
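
Python itself shows the "compiled to bytecode, then interpreted" pattern noted for Java: CPython compiles source to bytecode that its virtual machine executes, and the standard-library dis module can display that bytecode (exact instruction names vary between Python versions):

# Minimal sketch: viewing the bytecode CPython compiles a function into.
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints instructions such as LOAD_FAST and BINARY_ADD/BINARY_OP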

Network Topologies
● Ring Topology: In a ring topology, devices are connected in a closed loop, forming a ring. Each
device is connected to exactly two other devices, creating a continuous pathway for data
transmission. Data travels in one direction around the ring until it reaches its destination. Ring
topologies are less common in modern networks due to their susceptibility to a single point of
failure.


● Star Topology: In a star topology, devices are connected to a central hub or switch. All
communication between devices passes through this central point. This structure simplifies
troubleshooting and allows for easy addition or removal of devices without disrupting the entire
network. However, if the central hub fails, the entire network can be affected.


● Mesh Topology: A mesh topology involves every device being connected to every other device in
the network. This creates multiple paths for data to travel, offering redundancy and fault
tolerance. Mesh topologies can be either full mesh (every device is connected to every other
device) or partial mesh (only some devices have multiple connections). While mesh topologies
are robust, they can be complex to manage and require more cabling.


● Bus Topology: In a bus topology, all devices are connected to a single cable called a bus. Data
travels along the bus, and each device receives the data intended for it. Devices can be
connected to the bus using T-connectors. Bus topologies are straightforward and require less
cabling, but they can suffer from signal degradation if the cable length or number of devices
exceeds certain limits.

● Tree Topology: A tree topology combines characteristics of bus and star topologies. Devices are
arranged hierarchically, with groups of devices connected to a central bus or switch, and these
groups are further connected to each other or to other branches. Tree topologies are scalable
and can accommodate large networks, but they can be complex to manage and may still be
vulnerable to a central point of failure depending on the structure.
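
A small sketch of the cabling trade-off between the topologies above: modeling each topology as an adjacency list and counting links shows that a star needs n - 1 links while a full mesh needs n(n - 1)/2 (the device count below is arbitrary):

# Minimal sketch: counting links in star vs. full-mesh topologies.
def star(n):
    # Device 0 is the central hub/switch; every other device connects only to it.
    return {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}

def full_mesh(n):
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def link_count(topology):
    # Each link appears twice in an adjacency list, once per endpoint.
    return sum(len(neighbors) for neighbors in topology.values()) // 2

n = 6
print("star links:", link_count(star(n)))        # n - 1 = 5
print("mesh links:", link_count(full_mesh(n)))   # n * (n - 1) / 2 = 15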

Patch Panels, Hubs, Switches, and Routers


● Patch Panels
○ Used to organize and manage network cables in a structured manner. They typically
consist of ports where network cables can be terminated and connected.
■ Example: A 24-port patch panel in a data center allows for the termination of
multiple network cables from different areas of the building. Each port can be
connected to a specific device or to another network component like a switch or
router.
● Hubs
○ Hubs are basic networking devices that operate at the physical layer (Layer 1) of the OSI
model. They receive data from one port and broadcast it to all other ports, which means
they lack intelligence to selectively send data to specific devices.
■ Example: A 4-port Ethernet hub connects multiple computers in a small office
network. However, hubs are not commonly used in modern networks due to their
limited functionality and susceptibility to network congestion.
● Switches
○ Intelligent devices that operate at the data link layer (Layer 2) of the OSI model. They can
selectively forward data to specific devices based on their MAC addresses, improving
network efficiency compared to hubs.
■ Example: A 24-port Gigabit Ethernet switch in an enterprise network connects
various devices such as computers, printers, and servers. Switches are essential
for creating local area networks (LANs) and segmenting network traffic.
● Routers
○ Routers operate at the network layer (Layer 3) of the OSI model and are responsible for
forwarding data packets between different networks. They use routing tables to
determine the best path for data transmission.
■ Example: A home wireless router connects multiple devices (e.g., smartphones,
laptops, smart TVs) to the internet through a broadband connection. Routers play
a crucial role in wide area networks (WANs) by directing traffic between local
networks and the internet.
● Access Points (APs)
○ An access point is a device that allows wireless devices to connect to a wired network
using Wi-Fi or related standards. It acts as a bridge between the wireless devices and the
wired network, providing a means for devices to access the internet or other network
resources. Access points are commonly used in homes, offices, and public places to
extend the range of a wireless network.
● Repeaters
○ A repeater is a network device used to extend the range of a network by amplifying the
signal. It receives a network signal, amplifies it, and retransmits it to areas where the
original signal is weak or cannot reach. Repeaters are useful in large buildings or areas
with physical obstructions that interfere with signal strength.
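
The difference between a hub and a switch described above can be sketched with a toy MAC-address table: the switch learns which port each source address arrived on and forwards selectively, flooding (hub-style) only when the destination is still unknown. The MAC addresses and port count are made up:

# Minimal toy model of Layer 2 switching with a learned MAC table.
mac_table = {}  # MAC address -> port number

def handle_frame(src_mac, dst_mac, in_port, total_ports=4):
    mac_table[src_mac] = in_port                 # learn the sender's port
    if dst_mac in mac_table:
        print(f"forward to port {mac_table[dst_mac]}")
    else:
        ports = [p for p in range(total_ports) if p != in_port]
        print(f"unknown destination, flood to ports {ports}")

handle_frame("aa:aa", "bb:bb", in_port=1)   # bb:bb unknown -> flood
handle_frame("bb:bb", "aa:aa", in_port=2)   # aa:aa learned -> forward to port 1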

Quick Summary of the OSI Model


1. Physical Layer: Cables, hubs, NICs
2. Data Link Layer: Switches, bridges
3. Network Layer: Routers, Layer 3 switches
4. Transport Layer: Firewalls (partially), TCP/UDP
5. Session Layer: Session management protocols, authentication
6. Presentation Layer: Data translation/encryption, compression services
7. Application Layer: Web browsers, email clients, FTP, DNS
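
Two of these layers can be seen directly in a few lines of standard-library Python: the socket supplies the transport layer (TCP), and the HTTP text sent over it is application-layer data. The sketch assumes outbound internet access and uses example.com as a harmless test host:

# Minimal sketch: transport-layer TCP socket carrying application-layer HTTP.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as sock:  # Layer 4
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")  # Layer 7
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"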

PAN, MAN, LAN, and WAN


● PAN (Personal Area Network):
○ A PAN is the smallest type of network, typically used for communication among devices
owned by an individual person. It covers a range of around 10 meters and is often used
for connecting devices like smartphones, laptops, tablets, and personal digital assistants
(PDAs).
■ Example: Bluetooth connections between a smartphone and a wireless headset
or a laptop connected to a wireless mouse and keyboard form a PAN.
● LAN (Local Area Network):
○ A LAN is a network that spans a small area, such as a home, office, or building. It
connects devices within this limited area, allowing them to share resources like files,
printers, and internet access.
■ Example: The Wi-Fi network in your home connecting your devices (computers,
smartphones, smart TVs) to the internet is an example of a LAN.
● MAN (Metropolitan Area Network):
○ A MAN covers a larger geographical area than a LAN but is smaller than a WAN. It
typically spans a city or a large campus, connecting multiple LANs together.
■ Example: A network connecting several buildings within a university campus or a
city-wide network linking government offices and institutions is a MAN.
● WAN (Wide Area Network):
○ A WAN is a network that covers a large geographical area, such as a country or even
multiple countries. It connects LANs and MANs over long distances, often using public or
private telecommunications networks.
■ Example: The internet is the most prominent example of a WAN, connecting
devices and networks globally, allowing communication and data exchange
across vast distances.
CAT Cables
● Current CAT Cable Technology:
○ Category 5 (Cat 5): Up to 100 Mbps
■ Use Case: Suitable for basic network needs such as connecting computers to a
router or modem, commonly used in residential applications.
○ Category 5e (Cat 5e): Up to 1 Gbps (1000 Mbps)
■ Use Case: Improved version of Cat 5, offering better performance and reduced
crosstalk. Commonly used for home and small business networks, it supports
Gigabit Ethernet.
○ Category 6 (Cat 6): Up to 1 Gbps (10 Gbps over shorter runs of up to about 55 meters)
■ Use Case: Suitable for more demanding networking environments such as
medium-sized businesses, supports higher speeds and better signal quality
compared to Cat 5e.
○ Category 6a (Cat 6a): Up to 10 Gbps
■ Use Case: Enhanced version of Cat 6 with better shielding to reduce crosstalk,
suitable for larger networks and data centers requiring consistent high-speed
performance over longer distances.
○ Category 7 (Cat 7): Up to 10 Gbps
■ Use Case: Offers even better shielding and performance than Cat 6a, suitable for
high-speed networking environments, data centers, and enterprise applications
requiring robust and reliable connections.
○ Category 8 (Cat 8): Up to 25/40 Gbps (depending on the cable type and length)
■ Use Case: Designed for high-speed data centers and server rooms, supports
extremely high-speed data transfer and is intended for short-distance
applications up to 30 meters, primarily used in professional and industrial
environments.
● Obsolete CAT Cable Technology:
○ Category 1 (Cat 1): Up to 1 Mbps
■ Use Case: Primarily used for telephone communications and early data
communications, not suitable for modern networking.
○ Category 2 (Cat 2): Up to 4 Mbps
■ Use Case: Used for older token ring networks and early data transmission, largely
obsolete.
○ Category 3 (Cat 3): Up to 10 Mbps
■ Use Case: Used in early Ethernet networks (10BASE-T), phone systems, and
some token ring networks, was popular in the early 1990s but has since been
replaced by higher categories.
○ Category 4 (Cat 4): Up to 16 Mbps
■ Use Case: Used for token ring networks (16 Mbps) and some early Ethernet
networks, also largely obsolete today.

Computer Viruses
● Trojans: Named after the Trojan Horse from Greek mythology, Trojans are deceptive programs
that appear legitimate but carry malicious payloads. They don't self-replicate but can create
backdoors for attackers, steal data, or damage systems.
● Rootkits: Rootkits are advanced malware that hide their presence and activities on a
compromised system. They often manipulate system functions to evade detection and gain
privileged access (root/administrator) to the system.
● Companion Virus: A companion virus does not modify the legitimate program file; instead, it
creates a new executable with the same name but a different extension (for example,
program.com alongside program.exe). When the user runs the program by name, the
companion virus executes first, potentially causing harm.
● Macro Virus: These viruses infect documents or files that support macros (automated sequences
of commands) such as Microsoft Word or Excel documents. When the infected document is
opened, the macro virus can execute malicious actions.
● Armored Virus: Armored viruses use techniques to make it difficult for antivirus programs to
detect and analyze them. They may encrypt or obfuscate their code, making it challenging for
security software to identify them.
● Botnet: A botnet is a network of compromised computers (bots) controlled by a central
command (botmaster). Botnets can be used for various malicious activities, such as launching
distributed denial-of-service (DDoS) attacks or sending spam emails.
● Stealth Virus: Stealth viruses actively hide themselves from detection by antivirus software. They
often intercept and modify system calls to conceal their presence and actions.
● Polymorphic Virus: Polymorphic viruses are capable of changing their appearance (code
structure or encryption) each time they infect a new file or system. This variability makes them
difficult to detect using traditional signature-based methods.
● Multipartite Virus: Multipartite viruses are hybrids that can infect both executable files and boot
sectors of storage devices. They combine the characteristics of file infectors and boot sector
viruses, making them more complex and potentially damaging.

IPS, IDS, and UTM


● IDS (Intrusion Detection System):
○ An IDS is a passive security system that monitors network traffic or system activities for
malicious activities or policy violations.
○ It analyzes incoming and outgoing packets and flags any suspicious patterns or
anomalies based on predefined rules or signatures.
○ When an IDS detects a potential threat, it generates alerts or notifications to notify
administrators, but it doesn't take direct action to stop the threat.
● IPS (Intrusion Prevention System):
○ An IPS is an active security system that not only detects malicious activities but also
takes proactive measures to block or prevent them from compromising the network or
system.
○ It can automatically respond to threats by blocking malicious traffic, modifying firewall
rules, or taking other protective actions in real time.
○ IPS systems are more advanced than IDS in terms of real-time threat prevention and
mitigation capabilities.
● UTM (Unified Threat Management):
○ UTM is a comprehensive security solution that integrates multiple security features and
functionalities into a single platform or device.
○ It typically combines firewall, antivirus, IDS/IPS, VPN, content filtering, and other security
components to provide all-round protection against various cyber threats.
○ UTM solutions are designed to simplify security management by offering a centralized
console for monitoring and managing different security aspects within an organization.
● Firewall:
○ A firewall is a network security device or software that monitors and controls incoming
and outgoing network traffic based on predetermined security rules.
○ Its primary purpose is to establish a barrier between a trusted internal network and
untrusted external networks, such as the internet, to prevent unauthorized access and
cyber threats.

Type 1 Hypervisor, Type 2 Hypervisor and Containers


● Type 1 Hypervisor:
○ Characteristics:
■ Also known as a bare-metal hypervisor.
■ Installed directly on the physical hardware of a server.
■ Manages guest operating systems directly.
■ Examples include VMware ESXi, Microsoft Hyper-V, and Citrix XenServer.
■ Suitable for large-scale virtualization in enterprise environments.
■ Offers high performance and resource efficiency.
○ When to Use:
■ Use Type 1 hypervisors in enterprise environments where you need to run
multiple virtual machines on a single physical server, ensuring strong isolation
between VMs and efficient resource utilization.
● Type 2 Hypervisor:
○ Characteristics:
■ Also known as a hosted hypervisor.
■ Installed on top of an existing operating system (OS).
■ Requires the host OS to manage hardware resources.
■ Manages guest operating systems within the host OS.
■ Examples include Oracle VirtualBox, VMware Workstation, and Parallels Desktop.
■ Suitable for desktop or development environments.
■ Generally less efficient in resource usage compared to Type 1 hypervisors.
○ When to Use:
■ Use Type 2 hypervisors on desktop computers or development environments
where you want to run virtual machines without modifying the host OS. They are
also suitable for testing and experimenting with different operating systems.
● Containers:
○ Characteristics:
■ Lightweight, portable, and isolated environments for running applications.
■ Share the host OS kernel, making them more lightweight than virtual machines.
■ Built with containerization technologies such as Docker, often orchestrated at scale with Kubernetes.
■ Ideal for microservices architecture, DevOps workflows, and rapid application
deployment.
■ Each container encapsulates an application and its dependencies, ensuring
consistency across different environments.
■ Efficient use of resources compared to virtual machines.
○ When to Use:
■ Use containers for deploying and managing applications in a lightweight,
scalable manner. They are ideal for microservices architectures, continuous
integration/continuous deployment (CI/CD) pipelines, and cloud-native
applications.
■ Containers are particularly useful when you need rapid deployment, scalability,
and easy management of application environments.

Availability, Reliability, Usability, Scalability, and Maintainability


● Availability:
○ Availability refers to the ability of a system or service to be operational and accessible
when needed.
○ It is often measured as a percentage of uptime versus downtime over a specific period.
○ High availability systems are designed to minimize downtime and ensure continuous
operation even during failures or maintenance activities.
● Reliability:
○ Reliability is the ability of a system or component to perform its functions consistently
and accurately under specific conditions for a specified period.
○ It is related to the probability of failure-free operation within a given time frame.
○ Reliable systems are less prone to failures and errors, contributing to overall system
stability and performance.
● Usability:
○ Usability refers to how easy and efficient it is for users to interact with a system or
product to achieve their goals effectively.
○ It includes factors such as user interface design, navigation, accessibility, and user
experience.
○ A system with good usability is intuitive, user-friendly, and meets the needs of its
intended users.
● Scalability:
○ Scalability is the capability of a system to handle increased workload or demand by
adding resources such as processing power, memory, or storage.
○ It can be vertical scalability (adding more resources to a single machine) or horizontal
scalability (adding more machines to a system).
○ Scalable systems can accommodate growth without significantly affecting performance
or functionality.
● Maintainability:
○ Maintainability refers to how easily and effectively a system can be maintained, modified,
or repaired over its lifecycle.
○ It includes factors such as code readability, modularity, documentation, and the ease of
debugging and troubleshooting.
○ A maintainable system is cost-effective to manage and can be adapted to changes in
requirements or technology.

Non-Functional Requirements
● Non-functional requirements (NFRs) are criteria that specify the operation of a system, as
opposed to the specific behaviors or functions the system must support.
● They define the overall qualities or attributes of the system, focusing on how the system
performs rather than what the system does.
● Key types of non-functional requirements:
○ Performance: This includes response time, throughput, resource utilization, and
scalability. It specifies how well the system performs under various conditions.
i. Example: The system must handle 1,000 concurrent users with a response time
of less than 2 seconds.
○ Reliability: This refers to the system's ability to function correctly and consistently over
time. It includes factors such as availability, mean time to failure (MTTF), and mean time
to repair (MTTR).
i. Example: The system must be available 99.99% of the time.
○ Usability: This involves the ease with which users can learn, use, and interact with the
system. It includes user interface design, accessibility, and user documentation.
i. Example: New users should be able to complete the primary task within 10
minutes of first use.
○ Security: This encompasses the protection of the system and its data from unauthorized
access and breaches. It includes authentication, authorization, encryption, and auditing.
i. Example: All sensitive data must be encrypted using AES-256.
○ Maintainability: This refers to the ease with which the system can be modified, including
bug fixes, upgrades, and other changes. It includes modularity, readability, and testability.
i. Example: A bug fix must be deployable to production within 24 hours of the
defect being identified.
○ Scalability: This describes the system's ability to handle increased load by adding
resources, such as processing power, memory, or storage.
i. Example: The system must be able to scale horizontally to support an additional
10,000 users.
○ Interoperability: This involves the system's ability to work with other systems or
components, often through standardized interfaces or protocols.
i. Example: The system must support data exchange with third-party systems
using RESTful APIs.
○ Portability: This refers to the system's ability to operate in different environments or
platforms with minimal changes.
i. Example: The application must be able to run on both Windows and Linux
operating systems.
○ Efficiency: This focuses on the system's use of resources, including CPU, memory, disk
space, and network bandwidth.
i. Example: The system must not exceed 70% CPU utilization under normal
operation.
○ Compliance: This involves adherence to regulatory, legal, and industry standards.
i. Example: The system must comply with GDPR regulations for data protection.

Authentication, Authorization, and Accounting (AAA)


● Authentication: Authentication is the process of verifying the identity of a user or system. It
ensures that the entity requesting access is who they claim to be. Common authentication
methods include passwords, biometrics (like fingerprints or facial recognition), smart cards, and
two-factor authentication (2FA). Authentication establishes the user's identity before granting
access to resources.
● Authorization: Authorization, on the other hand, occurs after authentication and determines what
actions or resources a user or system is allowed to access. It involves defining permissions and
privileges based on the user's identity and role. For example, a user who has successfully
authenticated may be authorized to read, write, or execute certain files or access specific areas
of a network based on their permissions.
● Accounting: Accounting refers to the tracking and logging of user activities and resource usage.
It involves recording information such as who accessed a system, what actions they performed,
when these actions occurred, and how much network bandwidth or system resources were
consumed. Accounting helps in monitoring and auditing system usage, detecting unauthorized
access or suspicious activities, and generating usage reports for billing or compliance purposes.

Peer-to-Peer and Client-Server models


● Peer-to-Peer
○ In a peer-to-peer (P2P) model, devices in the network act both as clients and servers,
meaning they can both request resources from other devices and provide resources to
them. There is no central server managing the network; instead, each device can directly
communicate and share resources with other devices in the network.
■ In a P2P model, there is no centralized control or management; each device
manages its resources independently.
■ In P2P, resources are shared directly between peers without going through a
central entity.
■ P2P networks can be highly scalable as each device can contribute resources,
but they may face challenges in managing large-scale networks efficiently.
■ P2P networks can be more resilient to failures as there is no single point of
failure. Other nodes can continue to operate even if some nodes go offline.
■ P2P networks can be more challenging to secure due to the distributed nature of
resources and potential vulnerabilities in peer connections.
● Client-Server
○ On the other hand, in a client-server model, there are distinct roles. The client is a device
or program that requests services or resources from a central server. The server, in turn,
is a dedicated computer or program that provides services or resources to clients upon
request. This model typically involves a centralized architecture where the server
manages and controls access to resources, such as files, databases, applications, or
services.
■ In a client-server model, the server centrally manages and controls access to
resources, enforcing security policies and managing user requests.
■ In client-server, resources are typically accessed through the server, which acts
as a gateway to the resources.
■ Client-server architectures can also scale well, especially with load-balancing
techniques and scalable server setups, but they require more centralized
management as the network grows.
■ Client-server architectures may face challenges in fault tolerance if the central
server experiences failures, although redundancy and failover mechanisms can
be implemented to improve resilience.
■ Client-server architectures allow for centralized security measures, making it
easier to implement and manage security protocols and access controls.
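To make the request/response roles concrete, here is a minimal sketch of a TCP client in C using POSIX sockets. The server address (192.0.2.1) and port (80) are placeholders chosen for illustration, and error handling is kept deliberately brief; a real client would resolve a host name and check every call.

/* Minimal TCP client sketch (POSIX sockets): the client requests, the server responds. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);             /* SOCK_STREAM = TCP */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in server;
    memset(&server, 0, sizeof server);
    server.sin_family = AF_INET;
    server.sin_port   = htons(80);                        /* well-known HTTP port */
    inet_pton(AF_INET, "192.0.2.1", &server.sin_addr);    /* placeholder server address */

    /* The client initiates the connection; the server listens and accepts. */
    if (connect(fd, (struct sockaddr *)&server, sizeof server) < 0) {
        perror("connect");
        return 1;
    }

    const char *request = "GET / HTTP/1.0\r\n\r\n";       /* ask the server for a resource */
    write(fd, request, strlen(request));

    char buf[1024];
    ssize_t n = read(fd, buf, sizeof buf - 1);            /* read the server's response */
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }

    close(fd);
    return 0;
}

The corresponding server side would create a socket, bind() it to a port, listen(), and accept() incoming client connections before reading requests, which is what makes the server the centralized point of control.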

Cloud models such as Public, Private, Hybrid, and Community


● Public Cloud:
○ Summary: Public cloud refers to services provided over the internet by third-party
providers. These services are available to anyone who wants to use or purchase them.
○ Examples: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP),
IBM Cloud.
● Private Cloud:
○ Summary: Private cloud involves a dedicated cloud environment used exclusively by a
single organization. It can be hosted on-premises or by a third-party provider.
○ Examples: VMware vCloud, Microsoft Azure Stack, OpenStack.
● Hybrid Cloud:
○ Summary: Hybrid cloud combines elements of public and private clouds, allowing data
and applications to be shared between them. It offers flexibility and scalability.
○ Examples: Using a public cloud for non-sensitive data and a private cloud for sensitive
data, or running some applications on-premises and others in the public cloud.
● Community Cloud:
○ Summary: Community cloud is shared by several organizations with common interests,
such as regulatory requirements or security concerns. It can be managed internally or by
a third-party provider.
○ Examples: Government clouds shared by different government agencies, healthcare
clouds shared by multiple healthcare providers.

Program and Device I/O


● Program Input/Output (I/O) refers to the interaction between a computer program and its
environment, which includes the user, other software systems, and hardware devices. The
purpose of Program I/O is to enable a program to receive input from external sources, process
that input, and produce output for various purposes. Here are some key purposes of Program I/O (a short C sketch follows the device lists below):
○ User Interaction: Programs use I/O to interact with users. This includes reading user
inputs (like keyboard input or mouse clicks) and displaying information to users (via
monitors, graphical interfaces, etc.).
○ Data Exchange: Programs often need to exchange data with other programs or systems.
This can involve reading data from files, databases, or network connections, and writing
data back to these sources.
○ Control and Monitoring: I/O is crucial for controlling and monitoring hardware devices.
For example, a program might need to control a printer, scanner, or robotic system by
sending commands and receiving status updates.
○ Error Handling: I/O is used for error handling and reporting. Programs need to detect and
handle errors during input processing or output operations, such as file not found errors
or network connection failures.
○ Integration with External Systems: Many programs are part of larger systems or
networks. I/O facilitates communication and integration between different components
of these systems, allowing data to flow seamlessly between them.
○ Logging and Debugging: I/O is often used for logging program activities and debugging.
Programs can write log files to record events, errors, and performance metrics for
analysis and troubleshooting.
● Input Devices
○ Keyboard
○ Mouse
○ Touchscreen
○ Scanner
○ Camera or Webcam
○ Joystick or Game Controller
○ Stylus
○ Biometric Devices
● Output Devices
○ Monitor
○ Printer
○ Speakers
○ Headphones or Earphones
○ Projector
○ Plotter
○ VR Headset
○ Braille Display
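As referenced above, the following sketch ties several of those purposes together in C: it takes a file name from the user (input), reads the file (data exchange), reports a file-not-found condition (error handling), and prints the contents to the screen (output). The file name is whatever the user supplies; everything else is standard C library I/O.

/* Program I/O sketch: read a file, handle a missing-file error, write output. */
#include <stdio.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {                              /* input from the user: a file name */
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    FILE *in = fopen(argv[1], "r");              /* data exchange: read from a file */
    if (in == NULL) {
        perror(argv[1]);                         /* error handling: e.g., file not found */
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, in) != NULL) {
        fputs(line, stdout);                     /* output for the user (the monitor) */
    }

    fclose(in);
    return 0;
}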

TCP/IP and UDP


● TCP/IP is a set of communication protocols used to connect network devices on the internet. It dictates
how data should be packaged, transmitted, and received, ensuring reliable and efficient
communication between different devices.
● Key Components:
○ Internet Protocol (IP)
■ IP Addressing: Responsible for addressing and routing packets of data so that
they can travel across networks and arrive at the correct destination. Every
device connected to the internet is assigned a unique IP address.
■ Packet Switching: Data is broken down into smaller packets, which are
transmitted independently and reassembled at the destination.
○ User Datagram Protocol (UDP)
■ Connectionless Communication: UDP is a core member of the IP suite, commonly
used for tasks requiring efficient, low-latency communication. It does not
establish a connection before data transfer, which makes it faster but less
reliable than TCP (a short sketch contrasting the two follows the layer list below).
○ Transmission Control Protocol (TCP)
■ Reliable Communication: Ensures that data is delivered accurately and in the
same order in which it was sent. It establishes a connection between the sender
and receiver before data transmission starts.
■ Error Checking: Includes mechanisms for error detection and correction, ensuring
data integrity.
■ Flow Control and Congestion Control: Manages the rate of data transmission to
prevent network congestion and ensures that the receiver can handle the
incoming data.
● Model Layers
○ Application Layer
■ Protocols: HTTP, FTP, SMTP, DNS, etc.
■ Role: Provides various network services to applications. It defines how
applications interact with the network.
○ Transport Layer
■ Protocols: TCP, UDP
■ Role: Manages end-to-end communication and data transfer between devices. It
ensures reliable data transmission with TCP or faster, but less reliable,
communication with UDP.
○ Internet Layer
■ Protocols: IP, ICMP, ARP
■ Role: Handles logical addressing and routing of data packets across networks. It
is responsible for packet forwarding and routing.
○ Network Interface Layer
■ Protocols: Ethernet, Wi-Fi, etc.
■ Role: Deals with the physical transmission of data over network hardware. It
defines how data is formatted for transmission over various physical media.
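As referenced in the UDP component above, here is a minimal sketch in C of sending a single datagram. Note the absence of any connection setup: the program simply builds a destination address and hands the datagram to the network, with no guarantee of delivery or ordering. The destination address (192.0.2.1) and port (5000) are placeholders for illustration.

/* UDP sketch: connectionless send of a single datagram (no handshake, no delivery guarantee). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);             /* SOCK_DGRAM = UDP */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof dest);
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(5000);                       /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr);     /* placeholder address */

    const char *msg = "hello over UDP";
    /* sendto() just hands the datagram to the network; nothing confirms it arrived. */
    if (sendto(fd, msg, strlen(msg), 0, (struct sockaddr *)&dest, sizeof dest) < 0) {
        perror("sendto");
        return 1;
    }

    close(fd);
    return 0;
}

A TCP sender, by contrast, would call connect() first and rely on acknowledgments and retransmission for reliable, ordered delivery.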

MAC, ARP, DNS and DHCP


● A MAC (Media Access Control) address is a unique identifier assigned to network interfaces for
communications on the physical network segment.
● ARP (Address Resolution Protocol) is a protocol used to map an IP address to a MAC address on
a local network.
● DNS (Domain Name System) is a hierarchical and decentralized naming system used to translate
human-readable domain names (like www.example.com) into IP addresses (like 192.0.2.1) that
computers use to identify each other on the network (a short resolution sketch follows this list).
● DHCP (Dynamic Host Configuration Protocol) is a network management protocol used to
automatically assign IP addresses and other network configuration parameters to devices on a
network, allowing them to communicate efficiently.
● Automatic Private IP Addressing (APIPA) is a feature in Microsoft Windows that allows a device
to automatically assign itself an IP address from the link-local 169.254.0.0/16 range. It's primarily
used when a Dynamic Host Configuration Protocol (DHCP) server isn't available or isn't responding.
APIPA is enabled by default and has been included in Windows since Windows 98 and Windows ME.
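As referenced in the DNS entry above, the sketch below shows name resolution from a program's point of view: the standard POSIX getaddrinfo() call asks the resolver (which may in turn query DNS, a local cache, or the hosts file) to translate a host name into IP addresses, which are then printed. The host name www.example.com is simply the example domain used above.

/* DNS resolution sketch: translate a host name into IP addresses with getaddrinfo(). */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *results, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;            /* return both IPv4 and IPv6 answers */
    hints.ai_socktype = SOCK_STREAM;

    /* The resolver library performs the lookup on the program's behalf. */
    int status = getaddrinfo("www.example.com", NULL, &hints, &results);
    if (status != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status));
        return 1;
    }

    for (p = results; p != NULL; p = p->ai_next) {
        char ip[INET6_ADDRSTRLEN];
        void *addr;
        if (p->ai_family == AF_INET) {
            addr = &((struct sockaddr_in *)p->ai_addr)->sin_addr;
        } else {
            addr = &((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        }
        inet_ntop(p->ai_family, addr, ip, sizeof ip);
        printf("www.example.com -> %s\n", ip);   /* the address a socket would actually use */
    }

    freeaddrinfo(results);
    return 0;
}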
How the CPU Communicates with RAM
● The CPU communicates with RAM through a series of steps involving the system bus, which
includes the address bus, data bus, and control bus. Here's a breakdown of the process:
○ Addressing: When the CPU wants to read from or write to a specific location in RAM, it
places the address of that memory location on the address bus. The address bus is a set
of parallel wires that carry the address information to the RAM.
○ Control Signals: The CPU sends control signals over the control bus to specify whether it
wants to perform a read or write operation. Common control signals include the
read/write signal and the memory enable signal.
■ Read Operation: If the CPU wants to read data from RAM, it sends a read signal.
■ Write Operation: If the CPU wants to write data to RAM, it sends a write signal.
○ Data Transfer:
■ For a read operation, the RAM places the requested data on the data bus. The
data bus is a set of parallel wires that carry the actual data between the CPU and
RAM. The CPU then reads this data from the data bus.
■ For a write operation, the CPU places the data to be written on the data bus, and
the RAM stores this data at the specified address.
○ Timing and Synchronization: The entire communication process is synchronized by the
system clock, which ensures that the data is transferred at the correct times and at the
correct speed. Both the CPU and RAM operate according to the clock signals to maintain
coordination.
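The following is only a toy software model of the read/write protocol described above, not real hardware: a single "bus transaction" carries an address, a control signal (read or write), and data, and a simulated RAM array responds accordingly. Real buses are parallel electrical signals synchronized by the system clock.

/* Toy model of CPU<->RAM bus transactions: address + control signal + data.
   Software illustration only; real buses are clocked electrical signal lines. */
#include <stdio.h>
#include <stdint.h>

#define RAM_SIZE 256
static uint8_t ram[RAM_SIZE];                    /* simulated memory cells */

typedef enum { BUS_READ, BUS_WRITE } control_signal;

/* One bus transaction: the "CPU" drives the address and control lines,
   and either reads the data line or drives it with a value to store. */
uint8_t bus_transaction(uint8_t address, control_signal signal, uint8_t data) {
    if (signal == BUS_WRITE) {
        ram[address] = data;                     /* RAM latches the value at that address */
        return data;
    }
    return ram[address];                         /* RAM places the stored value on the data line */
}

int main(void) {
    bus_transaction(0x10, BUS_WRITE, 42);                 /* CPU writes 42 to address 0x10 */
    uint8_t value = bus_transaction(0x10, BUS_READ, 0);   /* CPU reads it back */
    printf("value at 0x10 = %u\n", value);
    return 0;
}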

Services the Operating System (OS) Provides


● An operating system (OS) provides a wide range of services to manage hardware and software
resources, ensuring the smooth functioning of the system and providing a user-friendly interface.
Here are the primary services an OS provides:
○ Process Management:
■ Process Scheduling: Managing the execution of processes by scheduling CPU
time.
■ Process Creation and Termination: Creating and terminating processes as
needed.
■ Process Synchronization: Coordinating processes to ensure proper execution.
■ Inter-Process Communication (IPC): Facilitating communication between
processes.
○ Memory Management:
■ Allocation and Deallocation: Managing memory allocation for processes and
deallocating it when no longer needed (a user-level sketch follows this list).
■ Virtual Memory: Extending physical memory using disk space.
■ Paging and Segmentation: Dividing memory into manageable sections.
○ File System Management:
■ File Creation and Deletion: Allowing users and programs to create and delete
files.
■ File Manipulation: Providing operations to read, write, and modify files.
■ Directory Management: Organizing files into directories for easier access and
management.
■ Access Control: Managing permissions to ensure secure file access.
○ Device Management:
■ Device Drivers: Providing drivers to interact with hardware devices.
■ Device Scheduling: Managing the use of devices among multiple processes.
■ Device Communication: Facilitating communication between the OS and
hardware devices.
○ Security and Access Control:
■ User Authentication: Verifying user identities to allow access.
■ Authorization: Granting or denying access to resources based on permissions.
■ Data Encryption: Encrypting data to protect against unauthorized access.
○ User Interface:
■ Command-Line Interface (CLI): Allowing users to interact with the OS using text
commands.
■ Graphical User Interface (GUI): Providing a visual interface with windows, icons,
and menus.
○ Networking:
■ Network Communication: Facilitating data exchange over network connections.
■ Protocol Implementation: Supporting network protocols like TCP/IP.
■ Remote Access: Allowing remote access to the system and its resources.
○ System Utilities and Application Support:
■ System Utilities: Providing tools for system maintenance and configuration.
■ Application Execution: Running and managing user applications.
○ Error Detection and Handling:
■ Error Logging: Recording errors for analysis and troubleshooting.
■ Fault Tolerance: Providing mechanisms to continue operation in case of errors.
○ Resource Allocation:
■ CPU Scheduling: Allocating CPU time to various processes.
■ Memory Allocation: Distributing memory among processes.
■ I/O Management: Managing input and output operations efficiently.
○ Performance Monitoring:
■ System Monitoring: Tracking system performance and resource usage.
■ Diagnostics: Providing tools to diagnose and resolve performance issues.
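As referenced under Memory Management above, the sketch below gives a user-level view of that service in C: the program requests a block of memory at run time, uses it, and releases it. malloc() and free() are C library calls, but they ultimately rely on the operating system allocating memory (and, with virtual memory, backing pages) to the process.

/* Memory allocation sketch: request memory at run time, use it, and release it. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t count = 1000;

    /* Ask for space for 1,000 integers; the C library obtains memory from the OS as needed. */
    int *numbers = malloc(count * sizeof *numbers);
    if (numbers == NULL) {                       /* allocation can fail if memory is exhausted */
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    for (size_t i = 0; i < count; i++) {
        numbers[i] = (int)(i * i);               /* use the allocated block */
    }
    printf("numbers[999] = %d\n", numbers[999]);

    free(numbers);                               /* deallocate so the memory can be reused */
    return 0;
}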

The 5 W's of the Runtime Environment


● Who
○ Who uses a runtime environment?
■ Developers: They use runtime environments to execute and test their code.
■ System Administrators: They manage and configure runtime environments for
applications to run smoothly.
■ End-users: Indirectly benefit from runtime environments as they execute the
software applications.
● What
○ What is a runtime environment?
■ A runtime environment is a software layer that provides the necessary services
and resources for applications to run. It manages the execution of code, handles
system calls, memory allocation, and input/output operations. Examples include
Java Runtime Environment (JRE), .NET Common Language Runtime (CLR), and
Node.js.
● When
○ When is a runtime environment used?
■ During the execution phase of an application. After code is written and compiled
(if necessary), it is run within the runtime environment.
■ Throughout the development process for testing and debugging.
● Where
○ Where is a runtime environment used?
■ On any device or system where an application needs to be executed, which could
be local machines, servers, or cloud platforms.
■ In various contexts such as development, testing, staging, and production
environments.
● Why
○ Why is a runtime environment important?
■ It abstracts the underlying hardware and operating system, providing a
consistent platform for applications to run.
■ Ensures compatibility and standardization across different systems.
■ Provides essential services such as memory management, security, and
exception handling, enabling developers to focus on writing application logic
rather than dealing with low-level system details.

System Call Groups


● Process Control
○ These system calls handle process creation, termination, and management (a combined sketch using several of them follows this list).
■ fork(): Creates a new process by duplicating the current process.
■ exec(): Replaces the current process image with a new process image.
■ exit(): Terminates a process.
■ wait(): Waits for a process to change state (e.g., to finish execution).
■ kill(): Sends a signal to a process to terminate or interrupt it.
● File Management
○ These system calls handle file operations such as creation, deletion, reading, and writing.
■ open(): Opens a file for reading, writing, or both.
■ close(): Closes an open file descriptor.
■ read(): Reads data from a file.
■ write(): Writes data to a file.
■ lseek(): Moves the read/write file offset.
■ unlink(): Deletes a file.
■ stat(): Retrieves file status information.
● Device Management
○ These system calls manage device input and output.
■ ioctl(): Performs device-specific input/output operations.
■ read(): Reads data from a device.
■ write(): Writes data to a device.
● Information Maintenance
○ These system calls manage and retrieve system information.
■ getpid(): Gets the process ID of the calling process.
■ getuid(): Gets the user ID of the calling process.
■ gettimeofday(): Retrieves the current time.
■ sysinfo(): Retrieves system information.
● Communication
○ These system calls manage inter-process communication (IPC).
■ pipe(): Creates a unidirectional data channel that can be used for IPC.
■ shmget(): Allocates a shared memory segment.
■ shmat(): Attaches a shared memory segment to the address space of the calling
process.
■ msgget(): Creates or retrieves a message queue.
■ msgsnd(): Sends a message to a message queue.
■ msgrcv(): Receives a message from a message queue.
■ socket(): Creates a socket for network communication.
■ connect(): Initiates a connection on a socket.
● Protection
○ These system calls manage permissions and access control.
■ chmod(): Changes the permissions of a file.
■ chown(): Changes the ownership of a file.
■ umask(): Sets the file mode creation mask.
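As referenced under Process Control above, the sketch below strings several of these POSIX calls together: the parent creates a pipe and then a child with fork(); the child reads a message from the pipe, prints it along with its getpid(), and replaces itself with another program via execlp() (running echo here is an arbitrary choice); the parent writes the message and wait()s for the child to finish.

/* Sketch combining process-control and communication system calls:
   pipe() for IPC, fork() to create a child, execlp() to run a new program, wait() in the parent. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); exit(1); }   /* fds[0] = read end, fds[1] = write end */

    pid_t pid = fork();                               /* duplicate the current process */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                                   /* child process */
        close(fds[1]);                                /* child only reads */
        char msg[64];
        ssize_t n = read(fds[0], msg, sizeof msg - 1);
        if (n > 0) {
            msg[n] = '\0';
            printf("child %d received: %s\n", getpid(), msg);
        }
        close(fds[0]);
        /* Replace the child's process image with echo (an arbitrary example program). */
        execlp("echo", "echo", "child is done", (char *)NULL);
        perror("execlp");                             /* only reached if exec fails */
        exit(1);
    }

    /* parent process */
    close(fds[0]);                                    /* parent only writes */
    const char *msg = "hello from the parent";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);

    int status;
    wait(&status);                                    /* wait for the child to terminate */
    printf("parent %d: child exited with status %d\n", getpid(), WEXITSTATUS(status));
    return 0;
}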

Common Network Ports


● 20 - FTP Data
● 21 - FTP Control
● 22 - SSH
● 23 - Telnet
● 25 - SMTP
● 53 - DNS
● 80 - HTTP
● 110 - POP3
● 143 - IMAP
● 161 - SNMP
● 162 - SNMP traps
● 443 - HTTPS

Network Device Hardening


● Network device hardening is the process of securing network devices, such as routers, switches,
firewalls, and access points, to reduce vulnerabilities and enhance their security posture.
○ Key Aspects:
■ Default Configuration Changes: Changing default configurations, including
passwords, usernames, and management ports. Using strong, unique passwords
and disabling unused services can significantly improve security.
■ Access Control: Implement strict access control measures, such as using role-
based access control (RBAC) to limit administrative privileges based on job
responsibilities.
■ Firmware and Software Updates: Regularly update firmware and software to
patch known vulnerabilities and improve overall security.
■ Disable Unused Services: Disable any unnecessary services and ports to reduce
the attack surface.
■ Logging and Monitoring: Enable logging and monitoring capabilities to track and
analyze network device activities.
■ Firewall and ACLs: Configure firewalls and access control lists (ACLs) to control
traffic flow and filter out malicious or unauthorized traffic.
■ Encryption: Use encryption protocols, such as SSH (Secure Shell) or HTTPS, for
remote management and communication to protect sensitive data from
eavesdropping and man-in-the-middle attacks.
■ Network Segmentation: Implement network segmentation to isolate critical
devices and segments from less secure parts of the network.
■ Regular Audits and Assessments: Conduct regular security audits and
assessments to identify vulnerabilities, compliance gaps, and areas for
improvement.
■ Backup and Recovery: Establish a backup and recovery plan for network device
configurations and critical data.

Cloud Computing Concepts


● Rapid Elasticity:
○ Refers to the ability of a cloud system to quickly and efficiently scale resources up or
down based on demand. This means that cloud services can easily accommodate spikes
in usage without significant manual intervention.
● Resource Sharing/Resource Pooling:
○ A fundamental aspect of cloud computing where multiple users or tenants share a
common pool of computing resources such as servers, storage, and networks.
● Load Balancing:
○ A technique used in cloud computing to distribute incoming network traffic or workload
across multiple servers or resources. The goal is to optimize resource utilization,
maximize throughput, minimize response time, and avoid overloading any single
resource (a simple round-robin sketch follows this list).
● Fault Tolerance:
○ Refers to the ability of a system or application to continue operating without disruption in
the event of a hardware or software failure.
● Distributed File System (DFS):
○ A file system that manages storage across multiple networked servers or nodes. In a
cloud computing context, DFS allows for the storage of large amounts of data across
distributed infrastructure. This enables high scalability, fault tolerance, and efficient data
access.
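As referenced under Load Balancing above, the sketch below illustrates the simplest distribution policy, round-robin: requests are handed to the servers in a fixed pool in rotation so that no single backend absorbs all of the traffic. The backend addresses are placeholders, and production load balancers also account for health checks, capacity, and observed response times.

/* Round-robin load balancing sketch: rotate through a fixed pool of backend servers. */
#include <stdio.h>

#define POOL_SIZE 3

static const char *servers[POOL_SIZE] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };  /* placeholder backends */
static int next_index = 0;

/* Pick the next backend in rotation so no single server takes all the traffic. */
const char *pick_server(void) {
    const char *chosen = servers[next_index];
    next_index = (next_index + 1) % POOL_SIZE;
    return chosen;
}

int main(void) {
    for (int request = 1; request <= 6; request++) {
        printf("request %d -> %s\n", request, pick_server());
    }
    return 0;
}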
