Introduction to Virtualization
Types of Virtualization:
Server Virtualization:
This is the most common type, where multiple virtual servers (VMs)
run on a single physical server. This allows for better resource
utilization, improved server consolidation, and increased flexibility.
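As a small illustration of managing such VMs programmatically (an
assumption beyond this text: the libvirt-python bindings and a local
QEMU/KVM hypervisor are installed), the sketch below lists the virtual
machines consolidated onto one physical server:

    import libvirt  # assumption: libvirt-python installed, QEMU/KVM host

    # List the virtual machines sharing this one physical server.
    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name()}: {state}")
    conn.close()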
Desktop Virtualization:
In this approach, a user's desktop environment is hosted on a central
server, while the user accesses it remotely. This can improve security,
manageability, and disaster recovery.
Storage Virtualization:
This pools physical storage devices (like hard drives) into a single
virtual storage pool. This simplifies storage management, improves
utilization, and enhances data availability.
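The toy Python sketch below illustrates the pooling idea; the
VirtualPool class and its byte-array "disks" are invented for this
example, not a real storage API:

    class VirtualPool:
        """Toy illustration: expose several physical 'disks' as one volume."""
        def __init__(self, devices):
            self.devices = devices  # list of bytearray "physical disks"

        def _locate(self, offset):
            # Translate a virtual offset into (device, local offset).
            for dev in self.devices:
                if offset < len(dev):
                    return dev, offset
                offset -= len(dev)
            raise IndexError("offset beyond pool capacity")

        def write(self, offset, data):
            for i, byte in enumerate(data):
                dev, local = self._locate(offset + i)
                dev[local] = byte

        def read(self, offset, length):
            out = bytearray()
            for i in range(length):
                dev, local = self._locate(offset + i)
                out.append(dev[local])
            return bytes(out)

    pool = VirtualPool([bytearray(4), bytearray(4)])  # two tiny "disks"
    pool.write(2, b"abcd")   # this write transparently spans both disks
    print(pool.read(2, 4))   # b'abcd'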
Network Virtualization:
This abstracts network functions (like routing and switching) from
physical hardware. It allows for more flexible network configurations,
improved performance, and better resource allocation.
Application Virtualization:
This isolates applications from the underlying operating system.
This enables applications to run on different operating systems
without modification, improves compatibility, and simplifies software
distribution.
Data Virtualization:
This presents a unified view of data from disparate sources
(databases, files, etc.). This simplifies data access, improves data
integration, and enhances data quality.
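As a hedged sketch of that unified view, the Python example below
merges records from two disparate sources, an in-memory SQLite table
and a stand-in file source, behind a single iterator; all names and
data are illustrative:

    import sqlite3

    def rows_from_database():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (name TEXT, city TEXT)")
        conn.execute("INSERT INTO customers VALUES ('Alice', 'Oslo')")
        for name, city in conn.execute("SELECT name, city FROM customers"):
            yield {"name": name, "city": city, "source": "database"}
        conn.close()

    def rows_from_file():
        # Stand-in for a CSV/JSON file source.
        for r in [{"name": "Bob", "city": "Lima"}]:
            yield {**r, "source": "file"}

    def unified_view():
        # Consumers see one schema and never touch the underlying sources.
        yield from rows_from_database()
        yield from rows_from_file()

    for row in unified_view():
        print(row)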
History of Virtualization:
1960s:
IBM's CP-40: This research project in 1967 demonstrated the
concept of a virtual machine, allowing multiple users to share a single
computer system.
IBM's VM/370: In 1972, this commercial virtual machine system
was released for the IBM System/370 mainframe, marking a
significant step towards practical virtualization.
1990s:
VMware: Founded in 1998, VMware pioneered server
virtualization for x86-based systems, making virtualization more
accessible to a wider audience.
2000s:
Rise of Open Source: Open-source virtualization solutions like
Xen (released in 2003) and KVM (merged into the Linux kernel in
2007) emerged, offering cost-effective alternatives.
2010s:
Virtualization underpinned the mainstream adoption of cloud
computing, and lighter-weight containers (such as Docker) came
into wide use alongside traditional VMs.
Present:
Virtualization continues to evolve, with advancements in areas
like edge computing, serverless computing, and artificial
intelligence.
Virtualization Use Cases:
Server Virtualization
Storage Area Networks (SAN):
A Storage Area Network (SAN) is a dedicated, high-speed network
that gives servers block-level access to consolidated storage.
Components of a SAN:
1. Storage Devices – SANs use disk arrays, SSDs, and tape
libraries for storage.
2. Host Servers – Servers that access the SAN storage via
dedicated connections.
3. SAN Switches – Devices that manage traffic and connections
between storage and servers.
4. HBAs (Host Bus Adapters) – Interface cards in servers for
connecting to the SAN.
5. SAN Protocols – Common protocols include Fibre Channel (FC),
iSCSI, and Fibre Channel over Ethernet (FCoE).
Benefits of SAN:
Improved Performance: Dedicated storage network reduces
bottlenecks.
Better Storage Utilization: Centralized storage avoids wasted
space.
High Availability: Supports redundancy for disaster recovery.
Efficient Backup & Recovery: Snapshot and replication features
enhance data protection.
SAN vs NAS (Network-Attached Storage):
Feature        SAN                          NAS
Access Type    Block-level                  File-level
Performance    High                         Moderate
Connectivity   Fibre Channel, iSCSI         Ethernet (NFS, SMB)
Use Case       Enterprise apps, databases   File sharing, backups
Deduplication and Compression:
Deduplication eliminates redundant copies of data by storing each
unique block only once; compression re-encodes data so it takes up
less space.
Compression Types:
Lossless compression: Reduces data size without losing any
information.
Lossy compression: Reduces data size by discarding some less
important information.
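A minimal Python sketch of the lossless case, using the standard zlib
module: the round trip returns the exact original bytes, so no
information is lost.

    import zlib

    # Lossless compression: decompressing recovers the original exactly.
    original = b"virtual machine image data " * 100
    compressed = zlib.compress(original)
    assert zlib.decompress(compressed) == original
    print(f"{len(original)} bytes -> {len(compressed)} bytes")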
In Virtualization:
Both deduplication and compression are particularly valuable in
virtualized environments:
Virtual machine images: Virtual machines often have many
identical files and data blocks. Deduplication can significantly
reduce the storage space required for multiple VMs.
Backups: Backups of virtual machines tend to contain a lot of
duplicate data. Deduplication and compression can make backups
more efficient and less storage-intensive.
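The toy Python sketch below combines both ideas for the VM-image case
described above: it splits data into fixed-size blocks, stores each
unique block once (deduplication), and compresses the stored blocks
with zlib (lossless compression). The block size and helper names are
assumptions made for the example, not a production dedup engine.

    import hashlib
    import zlib

    BLOCK_SIZE = 4096

    def dedup_and_compress(data):
        store = {}    # block hash -> compressed unique block
        recipe = []   # ordered hashes needed to reconstruct the data
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:
                store[digest] = zlib.compress(block)
            recipe.append(digest)
        return store, recipe

    def restore(store, recipe):
        return b"".join(zlib.decompress(store[d]) for d in recipe)

    # Two "VM images" sharing most of their blocks, as in the text above.
    image = b"base-os-block" * 2000
    data = image + image           # the duplicate content dedups away
    store, recipe = dedup_and_compress(data)
    assert restore(store, recipe) == data
    print(f"original: {len(data)} bytes, "
          f"stored: {sum(map(len, store.values()))} bytes")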
Network Function Virtualization (NFV):
Network Function Virtualization (NFV) is a game-changer in the
world of networking. It's a way to design, deploy, and manage network
services by moving them from dedicated hardware to software running
on standard servers.
Breakdown of what NFV is all about:
Virtualization: NFV leverages virtualization technologies, similar
to how you might run multiple operating systems on one computer.
This allows network functions to be decoupled from the underlying
hardware.
Software-based: Network functions become software
applications, called Virtualized Network Functions (VNFs), that can
be deployed and managed flexibly.
Standard Hardware: VNFs run on commodity servers, which are
much cheaper and more versatile than specialized network
hardware.
Agility and Scalability: NFV enables faster deployment of new
services, easier scaling of resources, and greater flexibility in
managing network functions.
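As a purely conceptual sketch (the function names and packet format
are invented for illustration, not a real NFV stack), the Python
example below chains two software network functions, a firewall and a
load balancer, the way VNFs replace dedicated middleboxes:

    def firewall_vnf(packet):
        # Drop traffic to a blocked port; pass everything else through.
        if packet["dst_port"] == 23:  # e.g., block telnet
            return None
        return packet

    def load_balancer_vnf(packet):
        # Spread flows across two backends by hashing the source address.
        backends = ["10.0.0.10", "10.0.0.11"]
        packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
        return packet

    def service_chain(packet):
        # Chain VNFs in software, where middleboxes once sat in hardware.
        for vnf in (firewall_vnf, load_balancer_vnf):
            packet = vnf(packet)
            if packet is None:
                return None  # dropped by a VNF
        return packet

    print(service_chain({"src_ip": "192.0.2.7", "dst_port": 80}))
    print(service_chain({"src_ip": "192.0.2.8", "dst_port": 23}))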
Importance of NFV:
Cost Reduction: NFV reduces the need for expensive, proprietary
hardware, leading to significant cost savings.
Faster Service Deployment: New network services can be
deployed much more quickly, as there's no need to wait for
hardware installation.
Increased Flexibility: NFV allows network operators to easily
scale resources up or down based on demand.
Improved Agility: Network functions can be easily updated and
modified, enabling faster innovation and response to changing
needs.
Key Components of NFV:
VNFs (Virtualized Network Functions): These are the software
applications that perform specific network functions, such as
routing, firewalling, or load balancing.
NFVI (Network Functions Virtualization Infrastructure): This is
the underlying hardware and software platform that hosts the
VNFs, including servers, storage, and networking resources.
MANO (Management and Orchestration): This framework is
responsible for managing the VNFs, including their deployment,
scaling, and lifecycle management.
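To tie the three components together, here is a hypothetical toy
orchestrator in Python; every class name is invented for illustration
and is greatly simplified compared with real MANO frameworks:

    class NFVI:
        """Pool of commodity compute capacity (CPU cores, for brevity)."""
        def __init__(self, cores):
            self.free_cores = cores

        def allocate(self, cores):
            if cores > self.free_cores:
                raise RuntimeError("insufficient NFVI capacity")
            self.free_cores -= cores

    class VNF:
        def __init__(self, name, cores):
            self.name, self.cores = name, cores

    class Mano:
        """Deploys VNFs onto the NFVI and tracks their lifecycle."""
        def __init__(self, nfvi):
            self.nfvi = nfvi
            self.deployed = []

        def deploy(self, vnf):
            self.nfvi.allocate(vnf.cores)
            self.deployed.append(vnf)
            print(f"deployed {vnf.name} ({vnf.cores} cores)")

        def scale_out(self, name, cores):
            # Scaling = deploying another instance of the same function.
            self.deploy(VNF(name, cores))

    mano = Mano(NFVI(cores=8))
    mano.deploy(VNF("firewall", 2))
    mano.scale_out("firewall", 2)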