
MC5501 - CLOUD COMPUTING

Presented By
Prof. S. Sajithabanu, M.Tech., (Ph.D.)
UNIT I - CLOUD ARCHITECTURE AND MODEL

• Technologies for Network-Based System – System Models for Distributed and Cloud Computing – NIST Cloud Computing Reference Architecture. Cloud Models: Characteristics – Cloud Services – Cloud models (IaaS, PaaS, SaaS) – Public vs Private Cloud – Cloud Solutions – Cloud ecosystem – Service management – Computing on demand.
UNIT II - VIRTUALIZATION

• Basics of Virtualization – Types of Virtualization – Implementation Levels of Virtualization – Virtualization Structures – Tools and Mechanisms – Virtualization of CPU, Memory, I/O Devices – Virtual Clusters and Resource management – Virtualization for Data-center Automation.
UNIT III - CLOUD INFRASTRUCTURE AND IoT

• Architectural Design of Compute and Storage Clouds – Layered Cloud Architecture Development – Design Challenges – Inter Cloud Resource Management – Resource Provisioning and Platform Deployment – Global Exchange of Cloud Resources – Enabling Technologies for the Internet of Things – Innovative Applications of the Internet of Things.
UNIT IV - PROGRAMMING MODEL

• Parallel and Distributed Programming Paradigms – MapReduce, Twister and Iterative MapReduce – Hadoop Library from Apache – Mapping Applications – Programming Support – Google App Engine, Amazon AWS – Cloud Software Environments – Eucalyptus, OpenNebula, OpenStack, Aneka, CloudSim.
UNIT V - SECURITY IN THE CLOUD

• Security Overview – Cloud Security Challenges and Risks – Software-as-a-Service Security – Security Governance – Risk Management – Security Monitoring – Security Architecture Design – Data Security – Application Security – Virtual Machine Security – Identity Management and Access Control – Autonomic Security.
OUTCOMES

Compare the strengths and limitations of cloud computing.
Identify the architecture, infrastructure and delivery models of cloud computing.
Apply suitable virtualization concepts.
Choose the appropriate cloud player, programming models and approach.
Address the core issues of cloud computing such as security, privacy and interoperability.
Design cloud services and set up a private cloud.
Technologies for Network-Based System
• The different technologies for a Network-based system are as follows.

• 1. Multicore CPUs and Multithreading Technologies

• (i) Advances in CPU Processors.

• (ii) Multicore CPU and Many-Core GPU Architectures.

• (iii) Multithreading Technology.

• 2. GPU Computing to Exascale and Beyond.


Technologies for Network-Based System (Cont.)

• 3. Memory, Storage, and Wide-Area Networking.

• 4. Virtual Machines and Virtualization Middleware

• 5. Data Center Virtualization for Cloud Computing

• (i) Data Center Growth and Cost Breakdown

• (ii) Low-Cost Design Philosophy

• (iii) Convergence of Technologies.


Multicore CPUs and Multithreading Technologies

• Consider the growth of component and network technologies over the past 30 years. They are crucial to the development of HPC and HTC systems. In Figure 1.4, processor speed is measured in millions of instructions per second (MIPS) and network bandwidth is measured in megabits per second (Mbps) or gigabits per second (Gbps). The unit GE refers to 1 Gbps Ethernet bandwidth.

[Figure 1.4: Growth of processor speed (MIPS) and network bandwidth (Mbps/Gbps) over the past 30 years]
Advances in CPU Processors.

• Today, advanced CPUs or microprocessor chips assume a multicore architecture with dual, quad, six, or more processing cores. These processors exploit parallelism at the ILP and TLP levels.

• Processor speed growth is plotted in the upper curve in Figure 1.4 across generations of microprocessors or CMPs. We see growth from 1 MIPS for the VAX 780 in 1978 to 1,800 MIPS for the Intel Pentium 4 in 2002, up to a 22,000 MIPS peak for the Sun Niagara 2 in 2008.

• As the figure shows, Moore's law has proven to be pretty accurate in this case. The clock rate for these processors increased from 10 MHz for the Intel 286 to 4 GHz for the Pentium 4 in 30 years.
System Models for Distributed and Cloud Computing
Classification of Distributed Computing Systems
• These can be classified into four groups: clusters, peer-to-peer networks, grids, and clouds.
• A computing cluster consists of interconnected stand-alone computers which work cooperatively as a single integrated computing resource. The compute nodes are connected by a LAN/SAN, are typically homogeneous, and run Unix/Linux with distributed control. They are suited to HPC.
Peer-to-peer (P2P) Networks
• In a P2P network, every node (peer) acts as both a client and server. Peers act autonomously to join or leave the network. No central coordination or central database is needed. No peer machine has a global view of the entire P2P system. The system is self-organizing with distributed control.

• Unlike the cluster or grid, a P2P network does not use a dedicated interconnection network.

• P2P Networks are classified into different groups:

Distributed File Sharing: content distribution of MP3 music, video, etc. E.g. Gnutella,
Napster, BitTorrent.

Collaboration P2P networks: Skype chatting, instant messaging, gaming etc.

Distributed P2P computing: specific application computing, such as SETI@home, which provides 25 Tflops of distributed computing power over 3 million Internet host machines.

Computational and Data Grids
• Grids are heterogeneous clusters interconnected by high-speed networks. They
have centralized control, are server-oriented with authenticated security. They are
suited to distributed supercomputing. E.g. TeraGrid.

• Like an electric utility power grid, a computing grid offers an infrastructure that
couples computers, software/middleware, people, and sensors together.

• The grid is constructed across LANs, WANs, or Internet backbones at a regional, national, or global scale.

• The computers used in a grid include servers, clusters, and supercomputers. PCs,
laptops, and mobile devices can be used to access a grid system.
Clouds
• A cloud is a pool of virtualized computer resources. A cloud can host a variety of different workloads, including batch-style backend jobs and interactive, user-facing applications.

• Workloads can be deployed and scaled out quickly through rapid provisioning of VMs (a minimal provisioning sketch follows this list). Virtualization of server resources has enabled cost effectiveness and allowed cloud systems to leverage low costs to benefit both users and providers.

• A cloud system should be able to monitor resource usage in real time to enable rebalancing of allocations when needed.

• Cloud computing applies a virtualized platform with elastic resources on demand by provisioning hardware, software, and data sets dynamically. Desktop computing is moved to a service-oriented platform using server clusters and huge databases at data centers.
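As a concrete illustration of rapid, on-demand VM provisioning, here is a minimal sketch using boto3, the AWS SDK for Python. The region, AMI ID and instance type are placeholder assumptions rather than values from these slides; running this for real requires AWS credentials and incurs cost.

# On-demand VM provisioning sketch with boto3 (AWS SDK for Python).
# region_name, ImageId and InstanceType below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask the provider for one small VM; the cloud provisions it in minutes, not weeks.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned VM:", instance_id)

# Elasticity: when the workload shrinks, the resource is released just as quickly.
ec2.terminate_instances(InstanceIds=[instance_id])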
Advantage of Clouds over Traditional Distributed Systems

• Traditional distributed computing systems provided on-premise computing and were owned and operated by autonomous administrative domains (e.g., a company).

• These traditional systems encountered performance bottlenecks, constant system maintenance, poor server (and other resource) utilization, and increasing costs associated with hardware/software upgrades.

• Cloud computing, as an on-demand computing paradigm, resolves or relieves many of these problems.
Software Environments for Distributed Systems and Clouds:
Service-Oriented Architecture (SOA) Layered Architecture

• In web services, Java RMI, and CORBA, an entity is, respectively, a service, a Java remote object, and a CORBA object. These build on the TCP/IP network stack. On top of the network stack we have a base software environment, which would be .NET/Apache Axis for web services, the JVM for Java, and the ORB network for CORBA. On top of this base environment, a higher-level environment with features specific to the distributed computing environment is built.

• Loose coupling and support of heterogeneous implementations make services more attractive than distributed objects.

Layered stacks (top to bottom):

CORBA Stack             | RMI Stack                  | Web Services Stack
IDL                     | Java interface             | WSDL
CORBA Services          | RMI Registry               | UDDI
CORBA Stubs/Skeletons   | RMI Stubs/Skeletons        | SOAP Message
CDR binary encoding     | Java native serialization  | XML Unicode encoding
IIOP                    | JRMP                       | HTTP
          RPC or Message-Oriented Middleware (WebSphere MQ or JMS)
ORB                     | JVM                        | .NET/Apache Axis
                     TCP/IP / DataLink / Physical
Service-Oriented Architecture (SOA) Layered Architecture: SOAP vs REST services

• REST supports many data formats, whereas SOAP only allows XML.
• REST supports JSON, a smaller data format that offers faster parsing than the XML parsing used in SOAP (a minimal REST call is sketched after this list).
• REST provides superior performance, particularly through caching of information that is not altered and not dynamic.
• REST is used most often for major services such as Amazon and Twitter.
• REST is generally faster and uses less bandwidth.
• SOAP provides robust security through WS-Security, so it is useful for enterprise apps such as banking and financial apps; REST only has SSL.
• SOAP offers built-in retry logic to compensate for failed communications.
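As a quick, hedged sketch of the REST-plus-JSON style described above: the snippet below issues an HTTP GET against a hypothetical endpoint and parses the JSON response. It assumes the third-party "requests" library; the URL and the "status" field are illustrative, and real services such as the Amazon or Twitter APIs also require authentication that is not shown here.

# Minimal REST call returning JSON (illustrative endpoint, not a real API).
import requests

url = "https://api.example.com/v1/instances/42"   # hypothetical REST resource

response = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
response.raise_for_status()        # raise an error for non-2xx HTTP status codes

instance = response.json()         # JSON maps directly onto Python dicts/lists
print(instance.get("status"))      # "status" is an assumed field name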
Performance Metrics and Scalability Analysis
• Performance Metrics:
• CPU speed: MHz or GHz, SPEC benchmarks like SPECINT
• Network Bandwidth: Mbps or Gbps
• System throughput: MIPS, TFlops (tera floating-point operations per second), TPS
(transactions per second), IOPS (IO operations per second)
• Other metrics: Response time, network latency, system availability (a short measurement sketch follows below)
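To make the throughput and response-time metrics concrete, here is a minimal sketch, not tied to any particular benchmark suite, that times a placeholder workload and reports average latency and requests per second. The handle_request function is an assumption standing in for real work such as serving one transaction.

# Measuring response time (latency) and throughput for a placeholder workload.
import time

def handle_request():
    sum(range(10_000))      # stand-in for real work, e.g. serving one transaction

n_requests = 1_000
latencies = []

start = time.perf_counter()
for _ in range(n_requests):
    t0 = time.perf_counter()
    handle_request()
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"throughput : {n_requests / elapsed:,.0f} requests/second")
print(f"avg latency: {sum(latencies) / n_requests * 1e3:.3f} ms")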

• Scalability:
• Scalability is the ability of a system to handle a growing amount of work in a capable/efficient manner, or its ability to be enlarged to accommodate that growth.
• For example, it can refer to the capability of a system to increase total throughput
under an increased load when resources (typically hardware) are added.
Scalability
• One form of scalability for parallel and distributed systems is:
• Size Scalability
This refers to achieving higher performance or more functionality by increasing the machine size. Size in this case refers to adding processors, cache, memory, storage, or I/O channels.

• Scale Horizontally and Vertically
Methods of adding more resources for a particular application fall into two broad categories:

Scale Horizontally
To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application. An example might be scaling out from one web server system to three.

The scale-out model has created an increased demand for shared data storage with very high I/O performance, especially where processing of large amounts of data is required.

Scale Vertically
To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer.

Tradeoffs
There are tradeoffs between the two models. Larger numbers of computers mean increased management complexity, as well as a more complex programming model and issues such as throughput and latency between nodes.
Also, some applications do not lend themselves to a distributed computing model.
In the past, the price difference between the two models has favored "scale up" computing for those applications that fit its paradigm, but recent advances in virtualization technology have blurred that advantage, since deploying a new virtual system/server over a hypervisor is almost always less expensive than actually buying and installing a real one.
Amdahl’s Law
It is typically cheaper to add a new node to a system in order to achieve improved
performance than to perform performance tuning to improve the capacity that each
node can handle. But this approach can have diminishing returns as indicated by
Amdahl’s Law.

Consider the execution of a given program on a uniprocessor workstation with a total execution time of T minutes. Now, let's say that the program has been parallelized or partitioned for parallel execution on a cluster of many processing nodes.

Assume that a fraction α of the code must be executed sequentially, called the sequential block. Therefore, (1 − α) of the code can be compiled for parallel execution by n processors. The total execution time of the program is calculated by:

αT + (1 − α)T/n

where the first term is the sequential execution time on a single processor and the second term is the parallel execution time on n processing nodes.

All system or communication overhead is ignored here. The I/O and exception handling time is also not included in the speedup analysis. (A small numeric sketch of this formula follows below.)
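As a worked illustration of the formula above (the values of T, α, and n are chosen for illustration, not taken from the slides), the sketch below evaluates the total execution time αT + (1 − α)T/n and the speedup T divided by that time. With α = 0.05 the speedup can never exceed 1/α = 20, however many nodes are added, which is the diminishing-returns effect Amdahl's Law describes.

# Amdahl's Law sketch: total parallel execution time and speedup.
# T, alpha and the values of n below are illustrative, not figures from the slides.

def parallel_time(T, alpha, n):
    """Total time on n nodes: alpha*T (sequential part) + (1 - alpha)*T/n (parallel part)."""
    return alpha * T + (1 - alpha) * T / n

T = 100.0      # uniprocessor execution time, in minutes
alpha = 0.05   # fraction of the code that must run sequentially

for n in (1, 4, 16, 64, 256):
    t = parallel_time(T, alpha, n)
    print(f"n={n:4d}  time={t:7.2f} min  speedup={T / t:6.2f}")

# As n grows, the speedup approaches 1/alpha = 20; adding more nodes yields diminishing returns.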
