
Cloud Computing

A Golden Era in Computing

 Powerful multi-core processors
 General-purpose graphics processors
 Superior software methodologies
 Virtualization leveraging the powerful hardware
 Wider bandwidth for communication
 Proliferation of devices
 Explosion of domain applications

General view of cloud computing: I don’t care where my servers are, who
manages them, where my documents are stored, or where my applications are
hosted. I just want them always available and accessible from any device
connected to the Internet. And I am willing to pay for this service for as
long as I need it.

Cloud computing is Internet-based computing, whereby shared resources,
software, and information are provided to computers and other devices on
demand, like the electricity grid. Cloud computing is the culmination of
numerous attempts at large-scale computing with seamless access to virtually
limitless resources.

Definition: A Cloud is a type of parallel and distributed system consisting of a
collection of inter-connected and virtualised computers that are dynamically
provisioned and presented as one or more unified computing resources based
on service-level agreements established through negotiation between the
service provider and consumers.

Cloud Computing is often described as a stack, in response to the broad
range of services built on top of one another under the moniker “Cloud”.
The National Institute of Standards and Technology (NIST) offers several
characteristics that it sees as essential for a service to be considered
“Cloud”. These characteristics include:
 On-demand self-service. The ability for an end user to sign up and
receive services without the long delays that have characterized
traditional IT.
 Broad network access. Ability to access the service via standard
platforms (desktop, laptop, mobile, etc.).
 Resource pooling. Resources are pooled across multiple customers.
 Rapid elasticity. Capability can scale to cope with demand peaks.
 Measured Service. Billing is metered and delivered as a utility service.

The Cloud Computing Stack:

 Infrastructure as a Service (IaaS):
o Definition: The hardware and software that power cloud services,
including servers, storage, networks, and operating systems.
o Usage: Delivers virtualized infrastructure components that provide
scalable compute and storage resources for cloud services.
o Examples: Amazon EC2, Amazon S3, RightScale, vCloud.
 Platform as a Service (PaaS):
o Definition: Tools and services designed to facilitate coding, testing,
and deploying applications quickly and efficiently.
o Usage: Runtime environments for application development and
data-processing platforms.
o Examples: Windows Azure, Hadoop, Google AppEngine, Aneka.
 Software as a Service (SaaS):
o Definition: Applications designed for end-users and delivered over
the web.
o Usage: Used directly by end users for purposes such as office
automation, photo editing, customer relationship management
(CRM), and social networking.
o Examples: Google Documents, Facebook, Flickr, Salesforce.

Clouds based on Ownership and Exposure

 Public Cloud:
o Definition: Public clouds utilize the internet to offer resources
such as applications and storage to the general public or
businesses on a pay-per-usage basis.
o Examples: Amazon Elastic Compute Cloud (EC2), IBM’s Blue Cloud,
Sun Cloud, Google AppEngine, Windows Azure Services Platform.
o Benefits:
 Economies of scale.
 Inexpensive setup as hardware, application, and bandwidth
costs are covered by the provider.
 Pay-per-usage model.
o Limitations:
 Limited configuration options.
 Security concerns, especially for sensitive data subject to
compliance regulations.
 Private Cloud:
o Definition: Data center architectures owned by a single company,
providing flexibility, scalability, provisioning, automation, and
monitoring, with the aim of retaining control over infrastructure.
o Usage: Typically used by large enterprises due to concerns around
security, compliance, and control.
o Benefits:
 Control over infrastructure.
 Security and compliance adherence.
o Challenges:
 Expense, with typically modest economies of scale.
 Not feasible for small-to-medium-sized businesses.
 Hybrid Cloud:
o Definition: Hybrid clouds combine elements of public and private
clouds, allowing companies to maintain control over certain
aspects while utilizing public cloud resources as needed.
o Use Cases:
 Cloud bursting: During peak periods, applications or parts of
applications can be migrated to the public cloud.
 Disaster Recovery (DR)/Business Continuity (BC):
Organizations can utilize cloud-based DR/BC services for
cost-effective failover solutions.
o Benefits:
 Flexibility to scale resources as needed.
 Cost-effective disaster recovery solutions.
o Considerations:
 Data transfer costs between private and public clouds.
 Integration and management complexities.
Cloud Service Providers

Windows Azure:

 Description: Enterprise-level on-demand capacity builder offering cycles
and storage available on request for a cost.
 Usage: Utilizes the Azure API for working with Microsoft's infrastructure.
 Key Features:
o Web role and worker role for application deployment.
o Blob storage, table storage, and drive storage for data
management.

Amazon EC2:

 Description: Amazon EC2 is a comprehensive web service for
instantiating computing instances with various supported operating
systems (a usage sketch follows the feature list below).
 Features:
o Provides API access for launching and managing computing
instances.
o Facilitates computations via Amazon Machine Images (AMIs) for
diverse use cases.
o Offers S3 for scalable storage, a Cloud Management Console for
administration, and MapReduce Cloud for distributed computing.
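
The EC2 API is usually exercised through an SDK. The following is a hedged
sketch using the boto3 Python SDK (not covered in these notes): it launches a
single instance, with the AMI ID, key-pair name, and region as hypothetical
placeholders, and assumes valid AWS credentials are configured locally.

```python
# Hedged sketch: launching one EC2 instance with the boto3 SDK.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Amazon Machine Image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # hypothetical key pair for SSH access
)
print("Launched instance:", instances[0].id)
```

The returned Instance objects can later be stopped or terminated through the
same resource interface, which is the metered, on-demand model described
above.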

Google App Engine:

 Description: Web interface providing a development environment for
designing, developing, and deploying Java- and Python-based
applications.
 Features:
o Supports the Java, Go, and Python programming languages.
o Offers reliability, availability, and scalability similar to Google's
own applications.
o Provides a comprehensive programming platform regardless of
application size.
o Features templates and appspot for streamlined development and
management.

Benefits of Cloud Computing

 No up-front commitments
 On-demand access
 Attractive, utility-style pricing
 Simplified application acceleration and scalability
 Efficient resource allocation
 Energy efficiency
 Seamless creation and use of third-party services
 Large enterprises can offload some of their activities to cloud-based
systems.
 Small enterprises and start-ups can afford to translate their ideas into
business results more quickly, without excessive up-front costs
 System developers can concentrate on the business logic rather than
dealing with the complexity of infrastructure management and
scalability
 End users can have their documents accessible from everywhere and
any device.

Challenges Ahead in Cloud Computing:

1. Practical Aspects:
o Configuration, networking, and sizing of cloud computing systems
pose practical challenges.
o Dynamic provisioning of cloud computing services and resources
requires efficient management.
2. Technical Challenges:
o Cloud service providers face technical challenges in managing
large computing infrastructures.
o Utilizing virtualization technologies atop these infrastructures
adds complexity.
3. Security Concerns:
o Ensuring confidentiality, secrecy, and protection of data in a cloud
environment is a critical security challenge.
o Safeguarding data against unauthorized access and breaches
remains a priority.
4. Legal Issues:
o Legal considerations may arise concerning data ownership,
privacy, and compliance with regulatory frameworks.
o Addressing jurisdictional issues and ensuring compliance with laws
across different regions is crucial.

Core technologies that led to cloud computing:

 Parallel and Distributed systems
 Service-oriented computing
 Virtualization

Parallel vs Distributed Systems

| Aspect | Parallel Computing | Distributed Computing |
| --- | --- | --- |
| Definition | Computation divided among processors sharing the same memory. | Computation broken down into units and executed concurrently on different computing elements. |
| Coupling | Implies a tightly coupled system. | Encompasses a wider class of systems, including those that are tightly coupled and loosely coupled. |
| Architecture | Typically involves processors sharing the same memory. | Includes architectures with various computing elements, like processors on different nodes or cores. |
| Scope | Focuses on computation among processors with shared memory. | Encompasses broader scenarios of concurrent execution across different computing elements. |

Elements of Parallel Computing:

1. Physical Limits of Silicon-Based Processor Chips:
o Processing speed is constrained by the speed of light.
o The density of transistors packaged in a processor is limited by
thermodynamics.
2. Solution:
o Overcome these physical limitations by connecting multiple
processors to solve complex problems collaboratively.
3. Parallel Computing Development:
o Development of parallel computing techniques, architectures, and
systems to perform multiple activities simultaneously.
o High-performance computing requires Massively Parallel
Processing (MPP) systems with thousands of powerful CPUs.
4. Representation:
o A representative example of a computing system built using MPP:
C-DAC's Param supercomputer.

What is Parallel Processing?

 Definition:
o Processing multiple tasks simultaneously on multiple processors.
 Parallel Program:
o Consists of multiple active processes (tasks) simultaneously
solving a given problem.
 Task Division:
o Divide-and-conquer technique divides a task into multiple
subtasks, processed on different CPUs.
 Parallel Programming:
o Programming on a multi-processor system using the divide-and-
conquer technique.
 Need for Parallel Processing:
o Many applications require more computing power than traditional
sequential computers can offer.
 Cost-Effective Solution:
o Increases the number of CPUs and adds efficient communication
between them, resulting in higher computing power and
performance.

Influencing Factors of Parallel Processing:

 Computational Requirements:
o Ever-increasing demand for computing power.
 Limitations of Sequential Architectures:
o Sequential CPUs reaching mechanical and physical limitations.
 Saturation of Sequential CPU Speed:
o Sequential CPU speed reaching saturation.
 Hardware Improvements:
o Non-scalable improvements in pipelining, superscalar execution, etc.,
requiring sophisticated compiler technology.
 Maturity of Parallel Processing Technology:
o Mature technology ready for commercial exploitation.
 Advancements in Networking Technology:
o Significant development in networking technology enabling
heterogeneous computing.

Hardware Architectures for Parallel Processing:

 Categories:
o Single-instruction, Single-data (SISD) systems
o Single-instruction, Multiple-data (SIMD) systems
o Multiple-instruction, Single-data (MISD) systems
o Multiple-instruction, Multiple-data (MIMD) systems

Shared Memory MIMD Machines:

 Description:
o All processing elements (PEs) connected to a single global memory
with shared access.
o Also known as tightly coupled multiprocessor systems.
 Communication:
o Communication between PEs occurs through the shared memory.
 Data Modification:
o Modification of data by one PE in the global memory is visible to
all other PEs.
 Examples:
o Dominant representative shared memory MIMD systems include
Silicon Graphics machines and Sun/IBM SMP (Symmetric Multi-
Processing).

Distributed Memory MIMD Machines:

 Description:
o Each PE has its own local memory.
o Also referred to as loosely coupled multiprocessor systems.
 Communication:
o Communication between PEs occurs through the interconnection
network, inter-process communication channel, or IPC.
 Network Configuration:
o Network connecting PEs can be configured into various topologies
like tree, mesh, cube, etc.
 Operation:
o Each PE operates asynchronously.
o Communication/synchronization among tasks is facilitated by
exchanging messages between PEs (a minimal sketch follows).
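
The message-based cooperation just described can be imitated on a single
machine with Python's multiprocessing module; this is an illustrative sketch
added for these notes, with two processes standing in for two PEs that share
no memory and communicate only over an IPC channel.

```python
# Toy distributed-memory MIMD style: separate address spaces, message passing.
from multiprocessing import Pipe, Process

def worker(conn):
    data = conn.recv()    # receive a message from the other PE
    conn.send(sum(data))  # send the result back as a message
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()  # the IPC channel
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send([1, 2, 3, 4])   # communicate by message, not shared state
    print(parent_end.recv())        # -> 10
    p.join()
```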

Shared Memory MIMD vs Distributed Memory MIMD

| Aspect | Shared Memory MIMD Model | Distributed Memory MIMD Model |
| --- | --- | --- |
| Programming Ease | Easier to program. | May require more complex programming due to communication overhead. |
| Fault Tolerance | Less tolerant to failures. | More tolerant, as a failure in one PE does not affect the entire system. |
| Extensibility | Harder to extend. | Easier to extend. |
| Scalability | Less likely to scale due to memory contention. | More scalable, as each PE has its own memory. |
| Popularity | Less popular today. | More popular today. |

Approaches to Parallel Programming

The three most prominent approaches are:

 Data Parallelism: The divide-and-conquer technique is used to split data
into multiple sets, and each data set is processed on different PEs using
the same instruction (SIMD).
 Process Parallelism: A given operation has multiple (but distinct)
activities that can be processed on multiple processors.
 Farmer-and-worker model: One processor is configured as the master
and all remaining PEs are designated as slaves; the master assigns jobs
to the slave PEs and, on completion, they inform the master, which in
turn collects the results (see the sketch after this list).
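
The data-parallel and farmer-and-worker ideas can be sketched together in
Python: a master process splits the data and a pool of worker processes
applies the same function to each chunk. This example is an illustration
added for these notes, not from the source.

```python
# Data parallelism with a farmer-and-worker structure.
from multiprocessing import Pool

def partial_sum(chunk):
    # The same instruction applied to different data sets (SIMD-style).
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]       # divide-and-conquer split
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)  # master assigns jobs to workers
    print(sum(partials))                          # master collects the results
```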

Laws of Caution:

1. Ideal Speed Increase:
o Expectation: With n processors, the user expects speed to
increase by n times.
o Reality: Communication overhead often prevents ideal speed
increases.
2. Cost versus Speed:
o Relationship: The speed of computation is proportional to the
square root of system cost.
o Implication: Speed and cost do not increase linearly; as the system
becomes faster, it becomes more expensive to increase its speed.

Speedup Formula:

 The speed increase achieved by a parallel computer is represented as
y = k * log(N), where N is the number of processors and k is a constant
(see the worked example below).
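
A quick worked illustration of this law (with k arbitrarily set to 1 here):
doubling the processor count adds only a constant increment of speedup.

```python
# y = k * log2(N): speedup grows logarithmically with processor count.
import math

k = 1.0
for n in (2, 8, 64, 512):
    print(f"N = {n:3d} processors -> speedup y = {k * math.log2(n):.1f}")
# N=8 gives y=3.0, while N=512 (64x more hardware) gives only y=9.0.
```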

General Concepts and Definitions:

1. Distributed Computing:
o Studies models, architectures, and algorithms for building and
managing distributed systems.
2. Definition of a Distributed System (Tanenbaum's Proposal):
o A distributed system is a collection of independent computers that
appears to its users as a single coherent system.
o Components located at networked computers communicate and
coordinate their actions only by passing messages.
3. Communication in Distributed Systems:
o Components communicate and coordinate their actions using
message passing.
o Message passing encompasses several communication models.

Components of Distributed System

1. Hardware:
o The physical infrastructure of the distributed system, including
servers, network switches, routers, and other networking
equipment.
o Hardware components provide the underlying computing
resources and connectivity necessary for the system to function.
2. Operating System:
o The software layer that manages hardware resources and
provides services to applications.
o In a distributed system, the operating system may need to support
distributed computing features such as process management,
communication, synchronization, and resource allocation across
multiple nodes.
3. Middleware:
o Middleware acts as a bridge between the operating system and
the application layer.
o It provides services and abstractions that simplify the
development of distributed applications, such as communication
protocols, distributed object management, transaction support,
and security mechanisms.
o Examples of middleware include message brokers, distributed
databases, remote procedure call (RPC) frameworks, and publish-
subscribe systems.
4. Application Layer:
o The top layer of the distributed system, where user-facing
applications and services reside.
o Distributed applications are built on top of the middleware layer
and interact with it to access distributed resources, exchange
data, and coordinate activities across multiple nodes.
o Examples of distributed applications include web services, online
social networks, cloud-based storage systems, and collaborative
editing platforms.

Examples of Distributed Systems

 Banking systems
 Communication - email
 Distributed information systems
o WWW
o Federated Databases
 Manufacturing and process control
 Inventory systems
 General purpose (university, office automation)

Architectural Styles for Distributed Computing:

1. Role of Middleware Layer:
 Middleware enables distributed computing by providing a
coherent and uniform runtime environment for
applications.
 It organizes the components of a distributed system and
defines its architecture.
2. Organization of Components:
 Different ways exist to organize the components of a
middleware environment.
3. Use of Standards:
 Well-known standards at the operating system, hardware,
and network levels facilitate easy integration of
heterogeneous components into a coherent system.
 Standards ensure seamless interaction between devices.
4. Design Patterns:
 Design patterns help in structuring components within an
application and understanding its internal organization.
 They contribute to creating a common knowledge base
among software engineers and developers.
5. Classification of Architectural Styles:
 Architectural styles are classified into two major classes:
 Software Architectural Styles: Relate to the logical
organization of software components.
 System Architectural Styles: Describe the physical
organization of distributed software systems in terms
of their major components.

Software Architectural Styles:

o Definition: Based on the logical arrangement of software
components.
o Benefits:
 Provide an intuitive view of the whole system, regardless of
its physical deployment.
 Identify the main abstractions used to shape system
components and the expected interaction patterns between
them.

System Architectural Styles:

o Definition:
 System architectural styles encompass the physical
organization of components and processes across a
distributed infrastructure.
o Reference Styles:
 System architectural styles include two fundamental
reference styles:

a. Client/Server:

 Centralized information and services accessed through a
single access point: the server.
 Multiple clients access services provided by the server,
which must efficiently handle requests from different
clients.

b. Peer-to-Peer (P2P):

 Symmetric architectures in which all components, called
peers, have equal roles.
 Each peer incorporates both the client and server
capabilities of the client/server model.
 Peers act as servers when processing requests from other
peers and as clients when issuing requests themselves.

Communication Technologies for Distributed Computing

 Message passing: The entities exchanging information explicitly encode
the data to be exchanged in the form of a message.
 Remote Procedure Call (RPC): A remote process hosts a server
component, thus allowing client processes to request the invocation of
methods, and returns the result of the execution.
 Distributed Object Frameworks:
o An implementation of the RPC model for the object-oriented
paradigm, contextualized for the remote invocation of methods
exposed by objects.
o Common Object Request Broker Architecture (CORBA)
o Distributed Component Object Model (DCOM/COM+)
o Remote Method Invocation (RMI)
o .NET Remoting
 Web services: Web service technology provides an implementation of
the RPC concept over HTTP, thus allowing the interaction of components
that are developed with different technologies.
Remote Procedure Call

A Remote Procedure Call (RPC) is a mechanism that allows a program to
execute procedures or functions on a remote system, as if they were local to
the calling program. The concept is similar to invoking a function within the
same program, but in this case the function resides on a different machine.

Here's how it typically works (a runnable sketch follows these steps):

1. Client-Server Communication: In an RPC system, there are two
main components: the client and the server. The client initiates
the RPC request, while the server processes the request and
returns a response.
2. Marshalling: When the client makes an RPC request, the
parameters of the procedure to be executed are serialized into a
format that can be transmitted over the network. This process is
called marshalling.
3. Network Communication: The marshalled request is then
transmitted over the network to the server.
4. Unmarshalling: Upon receiving the request, the server
deserializes (unmarshals) the parameters and executes the
requested procedure or function.
5. Execution: The server executes the procedure using the provided
parameters.
6. Result Marshalling: After executing the procedure, the server
marshals the result into a format that can be transmitted over the
network.
7. Response Transmission: The marshalled result is transmitted back
to the client over the network.
8. Result Unmarshalling: Upon receiving the response, the client
deserializes (unmarshals) the result and continues its execution.
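
The whole sequence can be observed with Python's standard xmlrpc modules,
which implement the XML-RPC protocol discussed later in these notes. In this
minimal sketch (the function name and port are illustrative), the library
performs the marshalling, transmission, and unmarshalling steps transparently.

```python
# server.py — exposes a procedure that remote clients may call.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b  # step 5: executed on the server

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add, "add")
server.serve_forever()
```

```python
# client.py — parameters are marshalled to XML, sent over HTTP,
# and the result is unmarshalled back into a Python value.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))  # -> 5
```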

Distributed Object Framework

Distributed object frameworks extend the principles of object-oriented
programming to distributed systems, allowing objects to be distributed across
a network while maintaining coherence as if they were in the same address
space. Here's how they typically work:
1. Server Process:
 The server process maintains a registry of active objects
that are available to other processes.
 Active objects can be published using interface definitions
or class definitions, depending on the specific
implementation.
2. Client Process:
 The client process obtains a reference to the active remote
object using a given addressing scheme.
 This reference is represented by a pointer to an instance
that conforms to a shared type of interface and class
definition.
3. Method Invocation:
 The client process invokes methods on the active object by
calling them through the reference obtained earlier.
 Parameters and return values are marshaled (serialized) as
in the case of Remote Procedure Calls (RPC). A minimal
sketch of this pattern follows.
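
As a stdlib stand-in for this pattern, Python's multiprocessing.managers
module offers the same register/obtain-reference/invoke cycle. The sketch
below is an analogy chosen for these notes (names are illustrative), not one
of the frameworks listed next.

```python
# server.py — maintains a registry of active objects.
from multiprocessing.managers import BaseManager

class Counter:
    """The active object whose methods remote clients may invoke."""
    def __init__(self):
        self._n = 0
    def increment(self):
        self._n += 1
        return self._n

counter = Counter()

class ObjectManager(BaseManager):
    pass

ObjectManager.register("get_counter", callable=lambda: counter)
manager = ObjectManager(address=("localhost", 50000), authkey=b"secret")
manager.get_server().serve_forever()
```

```python
# client.py — obtains a reference (proxy) and invokes methods remotely.
from multiprocessing.managers import BaseManager

class ObjectManager(BaseManager):
    pass

ObjectManager.register("get_counter")
manager = ObjectManager(address=("localhost", 50000), authkey=b"secret")
manager.connect()

proxy = manager.get_counter()  # reference to the remote active object
print(proxy.increment())       # arguments and results are marshalled
```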

Examples of distributed object frameworks include:

o Common Object Request Broker Architecture (CORBA):
 CORBA provides a platform-independent and language-
independent infrastructure for distributed objects to
communicate with each other.
 It includes features for object location, object activation,
and object persistence.
o Distributed Component Object Model (DCOM/COM+):
 DCOM, part of the Microsoft Component Object Model
(COM), enables software components to communicate over
a network.
 COM+ extends DCOM with additional features such as
transaction support and object pooling.
o Remote Method Invocation (RMI):
 RMI is a Java-specific framework that allows Java objects to
invoke methods on remote Java objects.
 It provides a mechanism for distributed objects to
communicate and interact within a Java Virtual Machine
(JVM) environment.
o .NET Remoting:
 .NET Remoting is a .NET framework for building distributed
applications.
 It provides a way for objects to communicate across
application domains, processes, and even network
boundaries in the .NET environment.

Service Oriented Architecture (SOA):

Service-oriented computing organizes distributed systems into services, which
are the primary abstraction for building systems. SOA expresses applications
and software systems as aggregations of services coordinated within a
service-oriented architecture.

Key Concepts:

1. Service:
 A service encapsulates a software component that provides
a set of coherent and related functionalities. These
functionalities can be reused and integrated into larger
applications.
2. Heterogeneity and Extensibility:
 Distributed systems are meant to be heterogeneous,
extensible, and dynamic. Services are the most popular
abstraction for designing complex and interoperable
systems in such environments.

SOA as an Architectural Style:

SOA is an architectural style that supports service orientation by organizing a
software system into a collection of interacting services. It encompasses
design principles that structure system development and provide means for
integrating components into a coherent and decentralized system.

Roles in SOA:

o Service Provider:
 The service provider is responsible for maintaining the
service and making it available for others to use.
 Providers advertise services by publishing them in a registry,
along with a service contract specifying the nature of the
service, how to use it, requirements, and fees.

Features of SOA:

1. Standardized Service Contract:
 Services adhere to a standardized contract specifying their
functionality, interfaces, and usage.
2. Loose Coupling:
 Services are loosely coupled, allowing them to evolve
independently without affecting other services.
3. Abstraction:
 Services hide implementation details, providing a high-level
abstraction of functionality.
4. Reusability:
 Services can be reused across multiple applications,
reducing development effort and promoting consistency.
5. Lack of State:
 Services are stateless, meaning they do not maintain client
state between interactions.
6. Discoverability:
 Services can be discovered and invoked dynamically,
allowing clients to locate and use services as needed.
7. Interoperability:
 Services can interact and collaborate seamlessly, even if
they are implemented using different technologies or
platforms.

Web services: Web services are the prominent technology for implementing
SOA systems and applications. They leverage Internet technologies and
standards for building distributed systems.

Virtualization:

Virtualization technology is a fundamental component of cloud computing,
particularly for infrastructure-based services. It enables the creation of secure,
customizable, and isolated execution environments for running applications,
even untrusted ones, without impacting other users' applications. Here's an
overview of virtualization:

Major Causes for the Diffusion of Virtualization:

1. Increased Performance and Computing Capacity: Virtualization allows
for better utilization of hardware resources, leading to increased
performance and computing capacity.
2. Underutilized Hardware and Software Resources: Virtualization helps
utilize underused hardware and software resources more efficiently.
3. Lack of Space: Virtualization addresses physical space constraints by
consolidating multiple virtual machines onto fewer physical servers.
4. Greening Initiatives: Virtualization contributes to environmental
sustainability efforts by reducing energy consumption and carbon
footprint.
5. Rise of Administrative Costs: Virtualization reduces administrative
overhead by simplifying resource management and provisioning.

Characteristics of Virtualized Environments:

1. Components: Virtualized environments consist of three main
components: guest, host, and virtualization layer.
2. Increased Security: Virtualization enhances security by providing
controlled execution environments, preventing harmful operations from
affecting the host system.
3. Sharing: Virtualization enables the creation of separate computing
environments within the same host, maximizing resource utilization.
4. Aggregation: Virtualization allows multiple hosts to be aggregated and
represented as a single virtual host, streamlining resource management.
5. Emulation: Virtualization enables the emulation of different
environments, facilitating testing and validation of guest programs on
various platforms or architectures.
6. Isolation: Virtualization ensures isolation between guest entities,
preventing interference and providing a secure execution environment.
7. Performance Tuning: Virtualization allows fine-tuning of resources to
optimize performance and meet service-level agreements (SLAs).
8. Portability: Virtualized environments offer portability, enabling virtual
machines or application components to be moved and executed across
different platforms or virtualization implementations.
Taxonomy of Virtualization Techniques:

Virtualization techniques can be classified based on the service or entity
being emulated and the type of host they require. Here's a taxonomy focusing
on execution virtualization techniques:

1. Based on Emulated Entity:
o Virtualization can be used to emulate various entities, including
execution environments, storage, and networks.

a. Execution Virtualization:
o Emulates execution environments, allowing multiple operating
systems or applications to run concurrently on the same
hardware.

b. Storage Virtualization:
o Emulates storage resources, allowing for the abstraction and
management of physical storage devices and providing logical
storage units to applications.

c. Network Virtualization:
o Emulates network resources, allowing for the creation of virtual
networks that operate independently of the physical network
infrastructure.

2. Based on Type of Host:
o Virtualization techniques can be categorized based on whether
they are implemented on top of an existing operating system
(process-level) or directly on hardware (system-level).

a. Process-Level Virtualization:
o Implemented on top of an existing operating system.
o The host operating system has full control of the hardware.
o Examples include containerization technologies like Docker and
application-level virtual machines like the Java Virtual Machine
(JVM).

b. System-Level Virtualization:
o Implemented directly on hardware or with minimal support from
the host operating system.
o Does not rely on a full-fledged operating system for management.
o Examples include hypervisor-based virtualization technologies like
VMware ESXi, Microsoft Hyper-V, and Kernel-based Virtual
Machine (KVM).

Machine Reference Model:

The machine reference model provides a hierarchical abstraction of a
computing system, defining various layers from hardware to software
interfaces. Here's an overview of the components of the machine reference
model:

1. Instruction Set Architecture (ISA):
o At the bottom layer of the model is the Instruction Set
Architecture (ISA), which defines the instruction set for the
processor, including operations such as arithmetic, logic, and
memory access.
o ISA serves as the interface between hardware and software.
o It is crucial for both the operating system (OS) developer (System
ISA) and developers of applications that interact directly with the
hardware (User ISA).
2. Application Binary Interface (ABI):
o The ABI sits above the ISA layer and serves to separate the
operating system layer from applications and libraries.
o It covers details such as low-level data types, memory alignment,
and calling conventions.
o ABI defines a format for executable programs and facilitates
interoperability between different software components.
o System calls, which allow user-level processes to request services
from the operating system kernel, are defined at this level.
3. Application Programming Interface (API):
o At the highest level of abstraction is the Application Programming
Interface (API).
o APIs provide a set of functions and protocols that allow
applications to interact with libraries, frameworks, and the
underlying operating system.
o APIs serve as an interface between applications and the rest of the
software stack, enabling developers to build software solutions
without needing to understand the underlying hardware
intricacies.
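
To make the layering concrete, here is a hedged Python sketch that calls the
C library's getpid through its documented API; the platform ABI governs how
the call and its return value are laid out, and the kernel ultimately services
it via a system call. A Linux system with glibc is assumed.

```python
# API -> ABI -> system call, observed from Python via ctypes.
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)  # load the C library
libc.getpid.restype = ctypes.c_int               # declare the ABI-level return type

print("PID via the libc API:", libc.getpid())
print("PID via Python's wrapper:", os.getpid())  # same value: both reach the kernel
```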

Hierarchy of Abstraction:

 The machine reference model follows a hierarchical structure, with each
layer building upon the one below it.
 ISA defines the hardware-level instructions and operations directly
executed by the processor.
 ABI defines the binary interface between the operating system and
applications, ensuring compatibility and interoperability.
 API provides a high-level interface for application developers,
abstracting away hardware and operating system details.

Importance:

 The machine reference model is essential for software development,
providing a standardized framework for hardware-software interaction.
 It enables portability, interoperability, and abstraction, allowing
developers to write code that can run across different hardware
architectures and operating systems.

Execution Virtualization: Hardware-Level Virtualization (System
Virtualization)

Hardware-level virtualization is a technique that abstracts the execution
environment of computer hardware, allowing guest operating systems to run
on top of it. This model employs a hypervisor, also known as a virtual machine
manager (VMM), which creates and manages virtual machines (VMs) on the
physical hardware. Below are key components and characteristics of hardware-
level virtualization:

1. Components:
o Host: Represents the physical computer hardware on which
virtualization is implemented.
o Virtual Machine Manager (Hypervisor): A program or
software/hardware combination responsible for abstracting the
underlying physical hardware and managing virtual machines.
o Virtual Machine: Represents the emulation of a guest operating
system running on top of the hypervisor.
o Guest: Refers to the operating system installed within a virtual
machine, which interacts with the virtualized hardware.
2. Types of Hypervisors:
o Type I Hypervisor (Native): Operates directly on hardware
without the need for a host operating system. It interacts directly
with the hardware's Instruction Set Architecture (ISA) interface.
Also known as bare-metal hypervisors.
o Type II Hypervisor (Hosted): Requires the support of an operating
system to provide virtualization services. It interacts with the host
operating system through the Application Binary Interface (ABI)
and emulates virtual hardware for guest operating systems.
3. Virtual Machine Manager Internals:
o The VMM is internally organized into modules:
 Dispatcher: Routes instructions issued by the virtual
machine instance to other modules.
 Allocator: Determines system resources allocated to virtual
machines.
 Interpreter: Executes interpreter routines triggered when a
virtual machine executes privileged instructions.
4. Popek and Goldberg Criteria:
o To efficiently support virtualization, the VMM must satisfy three
properties:
 Equivalence: The behavior of a guest under the control of
the VMM should match its behavior when executed directly
on physical hardware.
 Resource Control: The VMM should have complete control
over virtualized resources.
 Efficiency: A significant majority of machine instructions
should be executed without intervention from the VMM for
optimal performance.

Hardware Virtualization Techniques

1. Hardware-Assisted Virtualization:
o Description: Hardware-assisted virtualization involves
architectural support provided by the hardware for running a
virtual machine manager (VMM) capable of isolating and running
guest operating systems.
o Examples: Intel VT (Vanderpool) and AMD-V (Pacifica) are
extensions to the x86-64 architecture that enable hardware-
assisted virtualization.
o Advantages: It enables efficient and secure execution of guest
operating systems with minimal overhead.
2. Full Virtualization:
o Description: Full virtualization allows running a program, typically
an operating system, directly on a virtual machine without any
modifications, mimicking its execution on raw hardware. The
VMM provides complete emulation of underlying hardware.
o Advantages: Provides complete isolation, leading to enhanced
security, ease of emulation of different architectures, and
coexistence of different systems on the same platform.
3. Paravirtualization:
o Description: Paravirtualization exposes a slightly modified
software interface to the virtual machine, requiring modifications
to the guest operating system. It allows demanding performance-
critical operations to be executed directly on the host.
o Usage: Commonly explored in open-source and academic
environments.
o Advantages: Prevents performance losses experienced in
managed execution by enabling direct execution of critical
operations on the host.
4. Partial Virtualization:
o Description: Partial virtualization provides partial emulation of
underlying hardware, not allowing complete execution of the
guest operating system in isolation. It may support many
applications transparently but might lack support for all operating
system features.
o Example: Address space virtualization used in time-sharing
systems is an example of partial virtualization.
o Usage: Used when complete isolation of guest operating systems
is not necessary or feasible.

Other Virtualization Techniques

1. Operating System-Level Virtualization:
o Description: Operating system-level virtualization creates
separate execution environments within a single operating
system, allowing multiple isolated user-space instances to run
concurrently.
o Example: An evolution of the chroot mechanism in Unix systems,
where the file system root directory for a process and its children
is changed to a specific directory (see the sketch after this list).
2. Programming Language-Level Virtualization:
o Description: Programming language-level virtualization executes
bytecode of a program within a virtual machine, providing ease of
deployment, managed execution, and portability across different
platforms.
o Example: Virtual machines executing bytecode compiled from
programming languages such as Java (JVM) or .NET (Common
Language Runtime).
3. Application-Level Virtualization:
o Description: Application-level virtualization allows running
applications in runtime environments that do not natively support
all required features, without installing them in the expected
environment.
o Example: Wine, a software application enabling Unix-like
operating systems to execute programs written for the Microsoft
Windows platform.
4. Storage Virtualization:
o Description: Storage virtualization decouples the physical
organization of hardware from its logical representation, allowing
data to be identified using a logical path.
o Example: Storage Area Network (SAN).
5. Network Virtualization:
o Description: Network virtualization combines hardware
appliances and software to create and manage virtual networks,
aggregating different physical networks into a single logical
network.
o Example: Virtual LANs (VLANs).
6. Desktop Virtualization:
o Description: Desktop virtualization abstracts the desktop
environment available on a personal computer, providing access
to it using a client/server approach, with the system stored
remotely and accessed through a network connection.
o Examples: Windows Remote Services, VNC, X Server.
7. Application Server Virtualization:
o Description: Application server virtualization abstracts a collection
of application servers to provide the same services as a single
virtual application server, using load-balancing strategies and
high-availability infrastructure.
o Purpose: Provides better quality of service rather than emulating
a different environment.
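
Returning to operating system-level virtualization (item 1 above), the chroot
idea can be sketched in a few lines of Python. This assumes a Unix-like
system, root privileges, and a prepared directory tree at the hypothetical
path /srv/jail containing everything the process needs.

```python
# Minimal chroot sketch: confine a process to a subtree of the file system.
import os

os.chroot("/srv/jail")  # /srv/jail becomes "/" for this process and its children
os.chdir("/")           # move inside the new root so relative paths resolve there

# From here on, the process cannot name files outside /srv/jail.
print(os.listdir("/"))
```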

Virtualization and Cloud Computing

 Role of Virtualization in Cloud Computing:
o Virtualization is crucial in cloud computing as it enables
customizable, secure, isolated, and manageable IT services on
demand.
o It allows for configurable computing environments and storage,
catering to Infrastructure-as-a-Service (IaaS) and Platform-as-a-
Service (PaaS) offerings in cloud computing.
 Benefits of Virtualization in Cloud Computing:
o Customization and Sandbox Environment: Virtualization provides
customizable and sandboxed environments, attracting businesses
with large computing infrastructures.
o Efficiency through Consolidation: Virtualization enables server
consolidation, reducing underutilized resources by aggregating
virtual machines.
o Live Migration: Virtualization allows for live migration of virtual
machine instances without disruption, facilitating efficient
resource management.
o Storage Virtualization: Complementary to execution
virtualization, storage virtualization offers easily partitionable
virtual storage services.
o Revamped Desktop Virtualization: Cloud computing redefines
desktop virtualization, offering complete virtual computing stacks
accessible via thin clients over the internet.
 Pros and Cons of Virtualization:
o Advantages:
 Managed Execution and Isolation: Virtualization ensures
managed execution and isolation of resources, enhancing
security.
 Portability and Self-Containment: Virtualization enables
migration and reduces maintenance costs by lowering the
number of physical hosts.
 Efficient Resource Utilization: Virtualization leads to more
efficient resource usage, resulting in energy savings and
reduced environmental impact.
o Disadvantages:
 Performance Decrease: Virtualization may lead to a
performance decrease in guest systems due to
intermediation by the virtualization layer.
 Suboptimal Host Utilization: Abstraction introduced by
virtualization management software can lead to suboptimal
host utilization.
 Security Implications: Emulation of different execution
environments in virtualization can pose security risks.

Virtualization using KVM

Introduction:

 KVM (Kernel-based Virtual Machine) is a full virtualization solution for
Linux on various architectures.
 Integrated with QEMU, KVM allows running multiple guest operating
systems and is managed using the libvirt API.

Architecture:

 KVM adds virtualization capabilities to the Linux kernel, making each
virtual machine a regular Linux process.
 Virtual machines run as multi-threaded Linux processes and are
controlled by tools like virt-manager and virsh.
 KVM utilizes hardware virtualization to virtualize processor states and
handles memory management within the kernel.
 I/O operations are handled primarily through QEMU in user space.

Virtualization Features Supported by KVM:

1. Overcommitting: Allocating more virtualized CPUs or memory than
available resources, dynamically swapping resources as required.
2. Kernel Same-page Merging (KSM): Enables sharing of identical memory
pages among KVM guests, reducing memory duplication.
3. QEMU Guest Agent: Allows the host machine to issue commands to the
guest operating system.
4. Disk I/O Throttling: Sets limits on disk I/O requests from individual
virtual machines to prevent over-utilization of shared resources.
5. Automatic NUMA Balancing: Improves application performance on
NUMA hardware systems by moving tasks closer to memory access
points.
6. Virtual CPU Hot Add: Increases processing power on running virtual
machines without shutting down the guests.
7. Nested Virtualization (Technology Preview): Enables KVM guests to act
as hypervisors and create their own guests.

Components of a Typical KVM Installation:

 Device driver for managing virtualization hardware, exposing capabilities
via /dev/kvm.
 User-space component for emulating PC hardware, usually handled by a
modified QEMU process.
 I/O model derived from QEMU's, supporting features like copy-on-write
disk images.
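
Such an installation is usually driven through the libvirt API mentioned in
the introduction. The following hedged sketch uses the libvirt-python
bindings (assumed installed, with a local QEMU/KVM hypervisor running; the
domain name "vm1" is a hypothetical example).

```python
# Listing and starting KVM guests through libvirt.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local KVM hypervisor

# Each guest ("domain") is a regular Linux process under KVM.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(dom.name(), state)

dom = conn.lookupByName("vm1")  # hypothetical guest name
if not dom.isActive():
    dom.create()                # boots the virtual machine

conn.close()
```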

Emergence of Web 2.0

Internet and Web Applications

1. Birth of the World Wide Web (1990):
o The development of the HTTP protocol and HTML by Tim Berners-
Lee at CERN marked the birth of the World Wide Web (WWW),
built on top of the existing Internet.
o These technologies allowed information to be accessed
anonymously by the public.
2. Fundamentals of Internet-based Applications:
o Internet-based applications rely primarily on HTTP and HTML
standards.
o Browsers and servers implement these standards to enable
content publishing and retrieval over the Internet.
o XML and SOAP also play important roles in facilitating
communication and data exchange over the web.
3. Early Development and Challenges:
o In the initial years of the web, HTML's support for data entry
forms provided opportunities to develop browser-based
interfaces for legacy systems.
o CGI programs were developed to communicate with mainframes,
enabling the publishing of information from legacy systems to the
web.
o The browser became a universal client, simplifying the task of
upgrading user desktops and enabling geographically distributed
access to applications.
4. Limitations and Advancements:
o A major disadvantage was the limited user interface capabilities of
plain HTML, restricting the richness of web applications.
o However, advancements such as AJAX (Asynchronous JavaScript
and XML) have significantly mitigated this limitation.
o AJAX enables the development of rich internet applications (RIAs)
by allowing asynchronous data exchange between the browser
and server, leading to a more interactive and dynamic user
experience.

Web Application Servers

 In a web-enabled application architecture, processing logic (including
database access) takes place outside the web server (via scripts or
programs).
 Each invocation included the overhead of launching the required server
program as a fresh operating-system process.
 Java made it possible to execute application functionality inside the
web-server process (in threads), leading to the birth of the application
server architecture.
 Requests could also be processed by multi-threaded execution
environments, called containers.
 The servlet container allowed Java programs to execute in a multi-
threaded manner within the server process as servlet code.
 The container would also manage load balancing across incoming
requests using these threads, as well as database connection pooling,
similar to TP monitors.
 The application-server architecture thus also enjoys the advantages of
3-tier architecture: the ability to handle larger workloads as compared
to the client-server model (a minimal sketch follows this list).
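
The container idea, worker threads inside one long-lived server process
rather than a fresh OS process per request, can be sketched with Python's
standard library. This is a loose analogy to a servlet container, added for
illustration only.

```python
# Threads inside a single server process handle requests (no per-request fork).
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Application logic runs in a worker thread of the server process.
        body = b"Hello from a thread inside the server process\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Each incoming request is dispatched to its own thread.
ThreadingHTTPServer(("localhost", 8080), AppHandler).serve_forever()
```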

Internet of Services:

 Once applications became web-enabled, it became natural to open up
access to some of their functionality to the general public.
 A web service is an interoperable machine-to-machine interaction over
HTTP.
 Data formatting and resource naming are done with XML and URIs,
respectively.
 The XML-RPC standard mimics remote procedure calls over HTTP, with
data being transferred in XML.
 To support complex nested (object-oriented) types, the SOAP protocol
was developed, whereby the schema of the messages exchanged is
defined in WSDL (Web Services Description Language).
 Using SOAP, an application could call the web services published by
other applications over the Internet.
 In 2000, XMLHttpRequest was made available in the JavaScript language.
 JavaScript was being used to provide dynamic user-interface behaviour
within HTML pages, such as simple validations.
 Using XMLHttpRequest, it became possible to make HTTP requests,
possibly to servers other than the one that served up the main HTML
page being displayed.
 Thus came a new approach for integrating applications at the client side
instead of between servers: AJAX.

AJAX

AJAX (Asynchronous JavaScript and XML) revolutionized web development by
allowing applications to access and interact with web services directly from
the client side, rather than relying solely on server-side processing. Here are
the main points regarding AJAX and mashups:

AJAX:

o Web services facilitate interoperable machine-to-machine
interaction over HTTP.
o XML and URIs are used for data format and resource naming,
respectively.
o The XML-RPC standard enables remote procedure calls over HTTP
with data transferred in XML.
o SOAP protocol supports complex nested types, with message
schema defined by WSDL.
o XMLHTTPRequest, introduced in JavaScript in 2000, enables
making HTTP requests from the client-side.
o JavaScript was initially used for providing dynamic user interface
behavior within HTML pages.
o AJAX allowed applications to make HTTP requests to servers other
than the one serving the main HTML page, enabling integration at
the client-side.
o AJAX allows web pages to update content asynchronously,
without requiring full page reloads.

Mashups:

o Mashups take integration a step further by combining data from
multiple sources, including remote web services, into a single user
interface.
o With AJAX, JavaScript user interfaces can directly call various web
services.
o Mashups include the presentation layer for remote services along
with the service itself.
o For example, using the Google search service as a mashup,
developers can simply reference and use JavaScript code provided
by Google to incorporate a search box into their web pages.
o This JavaScript code, provided by Google, offers methods
(referred to as classes) that developers can call to instantiate and
render the search control dynamically within HTML pages after
they have loaded.
