
CLOUD COMPUTING & SECURITY (BIS613D)

Module 3- Cloud Platform Architecture over Virtualized Datacenters


Module 3 Syllabus: Cloud Computing and Service Models, Data Center Design and Interconnection Networks, Architectural Design of Compute and Storage Clouds, Public Cloud Platforms, AWS and Azure, Inter-Cloud Resource Management

Handouts for Session 17: Cloud Computing and Service Models

✓ Over the past two decades, the world economy has rapidly shifted from manufacturing to a more service-oriented model.
✓ Cloud computing benefits the service industry most and advances business
computing with a new paradigm.
✓ Developers of innovative cloud applications no longer need to acquire large capital
equipment in advance; they simply rent resources from large data centers
that have been automated for this purpose.

PUBLIC CLOUD:

➢ A public cloud is built over the Internet and can be accessed by any user who has paid
for the service. Public clouds are owned by service providers and are accessible through a
subscription.
➢ The providers of the aforementioned clouds are commercial providers that offer a
publicly accessible remote interface for creating and managing VM instances within their
proprietary infrastructure.
➢ A public cloud delivers a selected set of business processes. The application and
infrastructure services are offered on a flexible price-per-use basis.
Examples:
1. Google App Engine (GAE)
2. Amazon Web Services (AWS)
3. Microsoft Azure
4. IBM Blue Cloud
5. Salesforce.com’s Force.com.

Advantages:
1. Standardization
2. Preserves Capital Investment
3. Offers Application Flexibility

PRIVATE CLOUD:

➢ A private cloud is built within the domain of an intranet owned by a single organization.
It is client-owned and managed, and its access is limited to the owning clients and their
partners.


➢ Its deployment was not meant to sell capacity over the Internet through publicly
accessible interfaces.
➢ Private clouds give local users a flexible and agile private infrastructure to run service
workloads within their administrative domains.
➢ A private cloud is supposed to deliver more efficient and convenient cloud services. It
may impact cloud standardization while retaining greater customization and organizational
control.

Examples:
• IBM RC2
• Amazon Virtual Private Cloud
• VMware Private Cloud
• Rackspace Private Cloud (Powered by OpenStack)
• CloudBees

Advantages:
• Customization & offers higher efficiency
• Resiliency
• Security
• Privacy


HYBRID CLOUDS:

➢ A hybrid cloud is built with both public and private clouds. Private clouds can also
support a hybrid cloud model by supplementing local infrastructure with computing capacity
from an external public cloud.
➢ A hybrid cloud provides access to clients, the partner network, and third parties. Hybrid
clouds operate in the middle ground, with many compromises in terms of resource sharing.
Example:
➢ Research Compute Cloud (RC2) is a private cloud, built by IBM, that interconnects the
computing and IT resources at eight IBM Research Centers scattered throughout the United
States, Europe, and Asia.

Data-Center Networking Structure


• Cloud architecture relies on server clusters as compute nodes, with control nodes for
monitoring and managing activities.
• User job scheduling necessitates the creation of virtual clusters.
• Gateway nodes serve as external access points and enhance security for the cloud
platform.
• Unlike traditional grids, clouds are designed for fluctuating workloads, dynamically
allocating resources.
• Private clouds can effectively manage resource demands if designed adequately.
• Data centers typically scale with thousands to millions of servers, exemplified by
Microsoft's data center with 100,000 servers in containers.
• Data centers differ from supercomputers in using commodity networks like IP-based
Ethernet, while supercomputers use specialized high-bandwidth networks.
• NASA is developing a private cloud for climate modeling, offering cost savings and
enhanced capabilities for researchers.
• CERN operates a large private cloud to distribute resources and data to a global
community of scientists.
• Different service level agreements (SLAs) may be necessary to cater to varying
performance, data protection, and security needs for cloud services.


Cloud Ecosystem and Enabling Technologies:


• Cloud computing platforms offer significant differences from traditional computing
platforms.
• Conventional computing requires purchasing hardware and software, configuration,
testing, and ongoing resource management, with obsolescence occurring
approximately every 18 months.
• The cloud computing model operates on a pay-as-you-go basis, resulting in
substantial cost reductions as users rent resources rather than buy them.
• Cloud computing can lead to savings of 80% to 95%, which is particularly
beneficial for small businesses that need limited computing power without the
burden of recurring large investments.
• IBM anticipated that the global cloud service market could reach $126 billion by
2012, encompassing various services and infrastructure.
• Internet clouds function as service factories, built around multiple data centers,
facilitating the cloud computing ecosystem and its enabling technologies.
• Understanding these aspects helps demystify cloud computing and encourages
broader adoption by removing existing barriers.

Cloud Design Objectives: The following list highlights six design objectives for cloud
computing:
1. Shifting computing from desktops to data centers: Computer processing, storage,
and software delivery are shifted away from desktops and local servers and toward
data centers over the Internet.
2. Service provisioning and cloud economics: Providers supply cloud services by
signing SLAs with consumers and end users. The services must be efficient in terms
of computing, storage, and power consumption. Pricing is based on a pay-as-you-go
policy.
3. Scalability in performance: The cloud platforms and software and infrastructure
services must be able to scale in performance as the number of users increases.
4. Data privacy protection: Can you trust data centers to handle your private data and
records? This concern must be addressed to make clouds successful as trusted
services.
5. High quality of cloud services: The QoS of cloud computing must be standardized
to make clouds interoperable among multiple providers.
6. New standards and interfaces: This refers to solving the data lock-in problem
associated with data centers or cloud providers. Universally accepted APIs and
access protocols are needed to provide high portability and flexibility for
virtualized applications.

COST MODEL:

1) In traditional IT computing, users must acquire their own computers and peripheral
equipment as capital expenses. In addition, they face operational
expenditures in operating and maintaining the computer systems, including personnel
and service costs. Traditional IT thus adds variable operational costs on top of fixed capital
investments. The fixed cost is the dominant cost; it can be reduced slightly as the number of
users increases, but the operational costs may increase sharply with a larger number of users.
Therefore, the total cost escalates quickly with massive numbers of users.

2) Cloud computing applies a pay-per-use business model, in which user jobs are
outsourced to data centers. To use the cloud, one has no up-front cost in hardware
acquisition; cloud users experience only variable costs. Overall, cloud
computing will reduce computing costs significantly for both small users and large
enterprises. Computing economics does show a big gap between traditional IT users
and cloud users. Avoiding the up-front acquisition of expensive computers relieves a
significant burden for startup companies.
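The contrast between the two cost models can be sketched numerically. The following Python snippet is a minimal illustration with purely hypothetical figures (the $100,000 capital cost, $50 per-user operating cost, and $0.10 hourly rate are assumptions, not values from these notes):

```python
# Illustrative comparison of the two cost models described above.
# Traditional IT = fixed capital cost + per-user operational cost;
# cloud = pay-per-use with no up-front cost. All dollar figures are hypothetical.

def traditional_it_cost(users, capex=100_000, opex_per_user=50):
    """Fixed capital expense plus operational expense that grows with users."""
    return capex + opex_per_user * users

def cloud_cost(users, usage_hours=200, rate_per_hour=0.10):
    """Pure pay-per-use: only variable cost, no up-front hardware acquisition."""
    return users * usage_hours * rate_per_hour

if __name__ == "__main__":
    for n in (10, 100, 1_000, 10_000):
        t, c = traditional_it_cost(n), cloud_cost(n)
        print(f"{n:>6} users: traditional ${t:>12,.0f}   cloud ${c:>12,.0f}")
```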

Cloud Ecosystems
a. The emergence of Internet clouds has created an ecosystem of providers, users, and
technologies centered around public clouds.
b. There is rising interest in open-source cloud computing tools that enable
organizations to construct their own Infrastructure as a Service (IaaS) cloud using
internal resources.
c. Private and hybrid clouds are complementary to public clouds, allowing remote
access via web service interfaces similar to Amazon EC2.
d. Sotomayor et al. outline four levels of private cloud ecosystem development: user
demand for flexible platforms, cloud management providing virtualized resources,

virtual infrastructure (VI) management allocating VMs across server clusters, and
VM management overseeing VMs on individual hosts.
e. There is a need for a flexible and open architecture to facilitate the creation of
private/hybrid clouds, with VI management playing a crucial role.
f. Examples of VI tools include oVirt, VMware's vSphere, and Platform Computing's
VM Orchestrator.
g. These tools offer capabilities such as dynamic placement, VM management, load
balancing, server consolidation, and infrastructure resizing.
h. In addition to established public clouds, open-source tools like Eucalyptus and
Globus Nimbus support virtualization.
i. Cloud management interfaces include Amazon EC2 WS, Nimbus WSRF, and
ElasticHosts REST, while OpenNebula and VMware vSphere assist with
comprehensive VM generation management.

Surge of Private Clouds


a. Private clouds utilize existing IT infrastructure within organizations, contrasting with
public clouds that manage workloads without communication dependency.
b. Both private and public clouds distribute data and virtual machine resources;
however, private clouds optimize workload balance for better resource efficiency on the
intranet.
c. Private clouds excel in preproduction testing and enforcing data privacy and
security policies more effectively than public clouds.
d. Public clouds primarily benefit users by eliminating capital expenses related to IT
hardware, software, and personnel.
e. Many companies begin with computing machine virtualization to reduce operating
costs and pursue policy-driven management to enhance quality of service (QoS).
f. Major corporations, such as Microsoft, Oracle, and SAP, integrate virtualized data
centers and IT resources to provide IT as a service, enhancing operational agility.
g. This strategy allows companies to avoid frequent server replacements, significantly
improving IT efficiency.


CLOUD SERVICES:
1.Infrastructure as a Service (IaaS):

1) This model allows users to use virtualized IT resources for computing, storage, and
networking. The service is performed by rented cloud infrastructure.
2) The user can deploy and run his applications over his chosen OS environment. The
user does not manage or control the underlying cloud infrastructure, but has control
over the OS, storage, deployed applications, and possibly select networking
components.
3) This IaaS model encompasses storage as a service, compute instances as a service,
and communication as a service.
4) Many startup cloud providers have appeared in recent years. GoGrid, FlexiScale, and
Aneka are good examples.

2.Platform as a Service (PaaS):

1. Developing, deploying, and managing the execution of applications using
provisioned resources demands a cloud platform with the proper software
environment.
2. Such a platform includes operating system and runtime library support. This has
triggered the creation of the PaaS model to enable users to develop and deploy
their user applications.
3. The platform cloud is an integrated computer system consisting of both hardware
and software infrastructure.
4. The user application can be developed on this virtualized cloud platform using
programming languages and software tools supported by the provider (e.g.,
Java, Python, .NET). The user does not manage the underlying cloud
infrastructure; the provider supports user application development and testing on
the platform.


3 Software as a Service (SaaS):

1. This refers to browser-initiated application software delivered to thousands of cloud
customers. Services and tools offered by PaaS are utilized in the construction of
applications and the management of their deployment on resources offered by IaaS
providers.
2. The SaaS model provides software applications as a service. As a result, on the
customer side, there is no upfront investment in servers or software licensing.
3. On the provider side, costs are kept rather low, compared with conventional
hosting of user applications. Customer data is stored in the cloud that is either
vendor proprietary or publicly hosted to support PaaS and IaaS.
4. Examples of SaaS: Google Gmail and Docs, Microsoft SharePoint, and CRM
software from Salesforce.com.


Session 17 questions:
1. What is Public Cloud?
2. What is Private Cloud?
3. What is the difference between Public cloud and Hybrid cloud?
4. What is PaaS?
5. What is the difference between SaaS and PaaS?


Handouts for Session 18: DATA-CENTER DESIGN AND INTERCONNECTION NETWORKS
• A data center is often built with a large number of servers through a huge
interconnection network.

Warehouse-Scale Data-Center Design


1. Dennis Gannon states that the cloud relies on large data centers.
2. A typical data center can be as expansive as a shopping mall, encompassing 11
times the area of a football field and accommodating between 400,000 to 1
million servers.
3. Economies of scale dictate that larger data centers have a lower unit cost, leading
to reduced operational expenses.
4. For instance, a very large data center incurs basic operational costs of
approximately $13 per Mbps for networking and $0.40 per GB for storage, plus
administration costs, all of which are lower per unit than in a smaller data center
with about 1,000 servers.
5. The operational unit costs for smaller data centers are significantly greater: roughly
seven times higher for networking and 5.7 times higher for storage.
6. Microsoft operates around 100 data centers, varying in size, worldwide.
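Using only the figures quoted above (the large-center unit costs in item 4 and the cost ratios in item 5), the smaller data center's unit costs can be back-calculated; a quick Python check:

```python
# Back-of-the-envelope check of the economies-of-scale numbers quoted above.
large_network_cost = 13.0   # $ per Mbps, large data center (from the notes)
large_storage_cost = 0.40   # $ per GB,   large data center (from the notes)
network_ratio = 7.0         # smaller ~1,000-server DC: ~7x higher network cost
storage_ratio = 5.7         # smaller DC: ~5.7x higher storage cost

small_network_cost = large_network_cost * network_ratio
small_storage_cost = large_storage_cost * storage_ratio

print(f"Small DC networking: ~${small_network_cost:.0f} per Mbps")   # ~$91
print(f"Small DC storage:    ~${small_storage_cost:.2f} per GB")     # ~$2.28
```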

Data-Center Construction Requirements


1. Most data centers utilize off-the-shelf components, including multicore CPUs,
DRAM, and disk drives.
2. A typical data center configuration includes 2,000 servers, each featuring 8 GB
of DRAM and four 1 TB disk drives.
3. Servers connect to rack-level switches via 1 Gbps links, with additional ports
linking to cluster-level switches.
4. Bandwidth estimates indicate local disks provide 200 MB/s, while off-rack disks
offer 25 MB/s through shared uplinks.
5. The cluster's total disk storage vastly exceeds local DRAM capacity, resulting in
significant latency, bandwidth, and capacity discrepancies for large applications.
6. Data centers operate on a scale where failures, both hardware and software, of
1% of nodes are common.
7. Common hardware failures may include those affecting CPUs, disk I/O, and
networking components; power outages can halt data center operations.
8. Software can also contribute to failures, necessitating reliable data retention
strategies.
9. To maintain reliability, redundant hardware is recommended, and multiple data
copies should be kept in different locations to ensure accessibility during
failures.
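A quick aggregation of the configuration figures in items 2-5 makes the DRAM-versus-disk gap mentioned in item 5 concrete (Python, using only the numbers quoted above):

```python
# Aggregate capacity and bandwidth for the example configuration above:
# 2,000 servers, each with 8 GB of DRAM and four 1 TB disks,
# ~200 MB/s to a local disk and ~25 MB/s to an off-rack disk over shared uplinks.
servers = 2_000
dram_per_server_gb = 8
disks_per_server = 4
disk_size_tb = 1

total_dram_tb = servers * dram_per_server_gb / 1_000        # ~16 TB of DRAM
total_disk_tb = servers * disks_per_server * disk_size_tb   # 8,000 TB (~8 PB) of disk

local_disk_bw_mbs = 200    # MB/s to a local disk
offrack_disk_bw_mbs = 25   # MB/s to an off-rack disk via shared uplinks

print(f"Cluster DRAM:  ~{total_dram_tb:.0f} TB")
print(f"Cluster disks: ~{total_disk_tb:,.0f} TB "
      f"({total_disk_tb / total_dram_tb:.0f}x the DRAM capacity)")
print(f"Off-rack disk bandwidth is {local_disk_bw_mbs / offrack_disk_bw_mbs:.0f}x "
      f"lower than local disk bandwidth")
```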

Cooling System of a Data-Center Room


1. The under-floor area in data centers is primarily used for cool air distribution to
server racks.
2. The Computer Room Air Conditioning (CRAC) unit pressurizes the raised floor
plenum, distributing cold air through perforated tiles in front of the racks.
3. Racks are organized in alternating cold and hot aisles to enhance cooling
efficiency.


4. The data center's cooling system, while simpler than the power setup, relies on a
steel grid to manage air flow.
5. Hot air from servers is recirculated back to CRAC units for cooling, then
returned to the raised floor plenum.
6. Typical incoming coolant temperatures range from 12–14°C, with warm coolant
directed to a chiller.
7. Newer data centers may utilize cooling towers for pre-cooling condenser water
loop fluid using a separate system for heat dissipation.

Data-Center Interconnection Networks


• A critical core design of a data center is the interconnection network among all
servers in the datacenter cluster. This network design must meet five special
requirements: low latency, high bandwidth, low cost, message-passing interface
(MPI) communication support, and fault tolerance.
• The design of an inter-server network must satisfy both point-to-point and
collective communication patterns among all server nodes.

Application Traffic Support


a. The network topology must accommodate all MPI communication patterns,
supporting both point-to-point and collective MPI communications.
b. High bisection bandwidth is essential to meet communication requirements.
c. One-to-many communications are critical for enabling distributed file access,
utilizing metadata master servers to interact with slave server nodes in the
cluster.
d. The network design should facilitate rapid execution of the map and reduce
functions essential for the MapReduce programming paradigm.
e. Overall, the network structure should effectively support diverse traffic patterns
required by user applications.
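As a concrete reminder of the map and reduce functions referred to in item (d), here is a minimal word-count sketch in plain Python (not tied to any particular MapReduce framework); in a real cluster, the shuffle between the two phases is exactly the kind of collective traffic the interconnection network must carry:

```python
from collections import defaultdict

# Minimal word-count illustration of the MapReduce pattern referenced above.
# In a real deployment, map tasks run on many servers and the shuffle between
# the map and reduce phases generates heavy all-to-all network traffic.

def map_phase(document: str):
    """Map: emit (word, 1) pairs; runs independently on each input split."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce: sum the counts for each word after the shuffle step."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    splits = ["the cloud serves the user", "the data center serves the cloud"]
    shuffled = [pair for split in splits for pair in map_phase(split)]
    print(reduce_phase(shuffled))
```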

Network Expandability
a. The interconnection network should be expandable to accommodate thousands, or
even hundreds of thousands, of server nodes as the data center grows.
b. The cluster network should allow incremental expansion, for example when servers
are added in modular, container-based units, without disrupting services that are
already running.
c. Routing, addressing, and bisection bandwidth must scale with the number of nodes
so that the network does not become a bottleneck as more servers are added.
d. The expanded network should continue to support load balancing and data
movement among all server nodes.

Fault Tolerance and Graceful Degradation


a. Interconnection networks must enable fault tolerance to handle link or
switch failures.
b. Establish multiple paths between server nodes in a data center for better
reliability.
c. Server fault tolerance achieved through data and computing replication
among redundant servers.
d. Network redundancy should apply both in software and hardware to manage
potential failures.
e. Software layers must recognize and avoid using broken links for packet
forwarding.
f. Network drivers should function seamlessly to maintain cloud operations
during failures.
g. In case of failures, the system should degrade gracefully with minimal node
impact.
h. Hot-swappable components are preferred to enhance maintenance
flexibility.
i. Eliminate critical paths or single points of failure that could jeopardize the
entire system.
j. Network design innovations largely focus on the topology structure,
typically featuring two layers:
k. A lower layer close to end servers.
l. An upper layer that provides backbone connections among server clusters.
m. The hierarchical interconnection model supports modular container-based
data center setups.

Switch-centric Data-Center Design


1) Two main approaches to data-center network design: switch-centric and
server-centric.
2) Switch-centric networks connect server nodes using switches, requiring no
changes to the servers.
3) Server-centric networks necessitate modifications to the server's operating
system, including special drivers to manage traffic.
4) Switch organization remains crucial for establishing connections in both
designs.

Session 18 questions:
1. How many servers can a typical large data center accommodate?
2. What percentage of nodes in a data center commonly experience failures?
3. What is the purpose of redundant hardware in a data center?
4. What component in a data center pressurizes the raised floor plenum for cooling?
5. What are the five special requirements of a data-center interconnection network?


Handouts for Session 19: DATA-CENTER DESIGN AND INTERCONNECTION NETWORKS (Contd.)

Modular Data Center in Shipping Containers


a. Modern data centers resemble a shipyard, consisting of server clusters in truck-
towed containers.
b. The SGI ICE Cube modular data center houses multiple server racks within a
single container, accommodating 46,080 processing cores or 30 PB of storage.
c. An array of fans circulates heated air through a heat exchanger to cool the air for
continuous operation.
d. Modular container-based data centers are designed for lower power
consumption, higher computer density, and increased mobility for relocation.
e. Sophisticated cooling technology can reduce cooling costs by up to 80%
compared to traditional warehouse data centers.
f. Efficient cooling mechanisms include chilled air circulation and cold-water flow
through heat exchange pipes.
g. Site selection for data centers prioritizes lower leases, cheaper electricity, and
efficient cooling conditions.
h. Both warehouse-scale and modular data centers are essential, with modular
containers enabling large-scale configurations similar to shipping yards.
i. Considerations for data integrity, server monitoring, and security management
are critical and are typically easier in centralized data centers.

Container Data-Center Construction


a. Container-based data centers are housed in truck-towable modules, incorporating
network, computer, storage, and cooling systems.
b. Enhancements in cooling efficiency are required, achievable through better
management of water and airflow.
c. Construction may follow a phased approach: starting with an individual server,
progressing to a rack system, and finally a complete container system.
d. Time and costs vary, with a rack of 40 servers taking approximately half a day,
while a full container system for 1,000 servers necessitates careful planning of
floor space, power, networking, cooling, and testing.
e. Containers must be designed for weather resistance and easy transportation.
f. Modular data centers support cloud applications, particularly beneficial for the
health care sector, which can deploy units at clinic sites.
g. Challenges arise in synchronizing information exchange with central databases
and maintaining data consistency within a hierarchical structure.
h. Security considerations for collocation cloud services may involve multiple data
center locations.
i. Container-based data-center modules are meant for construction of even larger
data centers using a farm of container modules. Some proposed designs of
container modules are presented in this section.
j. Their interconnections are shown for building scalable data centers. The
following example is a server-centric design of the data-center module.

Interconnection of Modular Data Centers


a. The BCube is utilized within server containers, which serve as fundamental units
in data centers.
b. An additional networking layer is necessary for interconnection among multiple
containers.
c. The MDCube network topology, proposed by Wu et al., facilitates intercontainer
connections using BCube networks.
d. MDCube employs high-speed switches to connect various BCube containers,
forming a virtual hypercube structure at the container level.
e. A 2D MDCube configuration can be derived from nine BCube1 containers.
f. This architecture supports large-scale data centers, enhancing cloud application
communication patterns.
g. For detailed implementation and simulation analysis of MDCube, readers should
refer to the specified article.
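For a sense of container sizes, the standard recursive BCube construction (BCube_0 is n servers attached to one n-port switch; BCube_k is built from n BCube_(k-1) modules plus n^k additional switches) yields n^(k+1) servers per container. A small sketch, assuming that standard construction:

```python
# Size of a BCube_k container built from n-port switches, assuming the standard
# recursive BCube construction described in the lead-in above.

def bcube_servers(n: int, k: int) -> int:
    """Number of servers in BCube_k with n-port switches: n^(k+1)."""
    return n ** (k + 1)

def bcube_switches(n: int, k: int) -> int:
    """Number of switches in BCube_k: (k+1) * n^k."""
    return (k + 1) * n ** k

if __name__ == "__main__":
    for n, k in [(4, 1), (8, 1), (8, 2)]:
        print(f"BCube_{k} with {n}-port switches: "
              f"{bcube_servers(n, k)} servers, {bcube_switches(n, k)} switches")
```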

Data-Center Management Issues


Here are basic requirements for managing the resources of a data center. These suggestions
have resulted from the design and operational experiences of many data centers in the IT
and service industries.
• Making common users happy: The data center should be designed to provide quality service
to the majority of users for at least 30 years.
• Controlled information flow: Information flow should be streamlined. Sustained services
and high availability (HA) are the primary goals.
• Multiuser manageability: The system must be managed to support all functions of a data
center, including traffic flow, database updating, and server maintenance.
• Scalability to prepare for database growth: The system should allow growth as the workload
increases. The storage, processing, I/O, power, and cooling subsystems should be scalable.
• Reliability in virtualized infrastructure: Failover, fault tolerance, and VM live migration
should be integrated to enable recovery of critical applications from failures or disasters.
• Low cost to both users and providers: The cost to users and providers of the cloud system
built over the data centers should be reduced, including all operational costs.
• Security enforcement and data protection: Data privacy and security defense mechanisms
must be deployed to protect the data center against network attacks and system interrupts
and to maintain data integrity against user abuses or network attacks.
• Green information technology: Reducing power consumption and upgrading energy
efficiency are in high demand when designing and operating current and future data centers.

Session 19 Questions:
1. What is the main advantage of modular container-based data centers?
2. What type of cooling mechanism is commonly used in modular data centers?
3. Why is scalability important in data center management?
4. What is the primary function of intercontainer networking in modular data centers?
5. Which network topology is used for interconnecting modular data centers?


Handouts for Session 20: ARCHITECTURAL DESIGN OF COMPUTE AND STORAGE CLOUDS

A Generic Cloud Architecture Design


An Internet cloud is envisioned as a public cluster of servers provisioned on demand to
perform collective web services or distributed applications using data-center resources.

Cloud Platform Design Goals


1. Major design goals of a cloud computing platform include scalability,
virtualization, efficiency, and reliability.
2. Supports Web 2.0 applications, managing user requests by identifying resources
and provisioning services.
3. Must accommodate both physical and virtual machines while addressing security
concerns in shared environments.
4. Aim to establish a vast HPC infrastructure with integrated hardware and software
for operational efficiency.
5. Benefits from a cluster architecture that allows easy scaling by adding servers and
bandwidth as needed.
6. Enhances reliability through data redundancy, allowing data access even if a data
center fails.
7. Scalability of cloud architecture can be achieved by expanding server count and
network connectivity.

Enabling Technologies for Clouds


- Key driving forces of cloud computing include:
a. Ubiquity of broadband and wireless networking
b. Decreasing storage costs
c. Improvements in Internet computing software
- Benefits for cloud users:
a. Demand more capacity during peak times
b. Reduce overall costs
c. Experiment with new services
d. Eliminate unneeded capacity
- Advantages for service providers:
a. Increased system utilization through multiplexing, virtualization, and dynamic
resource provisioning
- Cloud technology advancements stem from progress in hardware, software,
and networking.


• Advances in multicore CPUs, memory chips, and disk arrays facilitate the creation
of faster data centers with extensive storage.
• Resource virtualization allows for quick deployment of cloud services and aids in
disaster recovery.
• Service-oriented architecture (SOA) is essential for cloud computing.
• Development in Software as a Service (SaaS), Web 2.0 standards, and improved
Internet performance have contributed to cloud service proliferation.
• Modern cloud infrastructures are designed to accommodate numerous tenants and
manage large data volumes.
• Large-scale, distributed storage systems serve as the backbone of contemporary
data centers.
• Recent improvements in license management and automated billing enhance the
efficiency of cloud computing.

A Generic Cloud Architecture


1. Security-aware cloud architecture utilizes a cluster of servers that can be
provisioned as needed for web services and applications.
2. Cloud platforms dynamically create or remove servers, software, and database
resources, which can include both physical machines and virtual machines.
3. User interfaces allow consumers to request services, and provisioning tools
manage the cloud system to fulfill these requests.
4. Distributed storage and services complement the cloud infrastructure, typically
maintained by third-party providers, abstracting the underlying technologies
from consumers.
5. Software as a service model underpins cloud computing, requiring a trust
framework for managing vast amounts of data.
6. A distributed file system is necessary to manage large-scale data within the
cloud.
7. Additional components, like storage area networks, database systems, and
security devices, are integrated into the cloud platform.
8. Web service providers offer APIs for developers to leverage cloud capabilities,
while monitoring units track resource usage and performance.
9. Resource management and maintenance in a cloud platform are automated, with
systems detecting status changes of servers.
10. Major providers like Google and Microsoft operate numerous data centers
globally, often located for optimal power efficiency and cooling.
11. Private clouds offer easier management, while public clouds provide easier
access, with a trend toward hybrid clouds to accommodate diverse applications.
12. Security remains a pivotal concern across all types of cloud services.

Layered Cloud Architectural Development


1. Cloud architecture consists of three layers: infrastructure, platform, and
application.
2. These layers utilize virtualization and standardization for hardware and software
resources in the cloud.
3. Services are delivered to users via networks across public, private, and hybrid
clouds.
4. The infrastructure layer is established first, supporting Infrastructure as a
Service (IaaS) offering.
5. The platform layer builds on the infrastructure to enable Platform as a Service
(PaaS) capability.
6. The application layer is developed on top of the platform to facilitate Software
as a Service (SaaS).
7. The infrastructure layer incorporates virtualized computing, storage, and
networking resources for user flexibility.
8. Virtualization enhances automated resource provisioning and optimizes
infrastructure management.
9. The platform layer focuses on general-purpose software resources, allowing
users to develop, test, and monitor applications.


10. The layer provides an environment for application development, operational
testing, and performance monitoring.
11. Users expect scalability, dependability, and security from the platform.
12. Acts as middleware between the infrastructure and application layers of the
cloud.
13. The application layer includes software modules for SaaS applications, handling
office management, information retrieval, document processing, calendar
management, and authentication services.
14. Heavily utilized by enterprises in marketing, sales, CRM, financial transactions,
and supply chain management.
15. Some applications utilize resources across multiple layers, highlighting
interdependence.
16. Service layers demand varying levels of support from providers:
• SaaS requires the most,
• PaaS is intermediate,
• IaaS requires the least.
Example: Amazon EC2 offers virtualized CPU and resource management,
while Salesforce.com provides hardware, software, and development tools
at various layers.

Session 20 Questions:
1. What are the three main cloud service models?
2. What does SaaS stand for in cloud computing?
3. What is the purpose of virtualization in cloud computing?
4. Which layer of cloud architecture provides computing, storage, and networking
resources?
5. What is the function of a distributed file system in cloud computing?


Handouts for Session 21: ARCHITECTURAL DESIGN OF COMPUTE AND STORAGE CLOUDS (Contd.)

Market-Oriented Cloud Architecture


a. As cloud computing needs grow, consumers demand reliable quality of service
(QoS) from providers to support their operations.
b. Cloud providers establish specific QoS parameters for each consumer through
negotiated service level agreements (SLAs).
c. Traditional resource management systems are inadequate; a market-oriented
approach is necessary for effective supply and demand regulation.
d. Designers must implement economic incentives for both consumers and providers
to enhance QoS-based resource allocation.
e. Potential cost savings for providers may lower prices, fostering a competitive
market.
f. A high-level architecture for market-oriented resource allocation includes:
g. Users or brokers submitting service requests to data centers or cloud services.
h. A service level agreement (SLA) resource allocator interfaces between service
providers and users/brokers.
i. A service request examiner that evaluates QoS requirements to accept or reject
requests.
j. The request examiner manages resource allocation to prevent overloading and
ensure successful fulfillment of service requests.
k. It relies on real-time data from the VM Monitor (resource availability) and Service
Request Monitor (workload processing) for effective decision-making.
l. Resource requests are assigned to VMs, with entitlements determined for each
allocated VM.
m. The Pricing mechanism establishes how charges are applied to service requests,
considering submission times, pricing rates, and resource availability.
n. Pricing facilitates the management of resource supply and demand, enabling
prioritized allocations.
o. The Accounting mechanism tracks actual resource usage for cost calculation and
user billing, utilizing historical data for improved future decisions.
p. The VM Monitor tracks VM availability and resource entitlements, while the
Dispatcher initiates service request execution on allocated VMs.
q. The Service Request Monitor oversees the execution progress of service requests.
r. Flexibility is maximized by allowing multiple VMs to run on a single physical
machine, accommodating different resource configurations and operating systems.
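A highly simplified sketch of how the roles above (request examiner, dispatcher, pricing, and accounting) could fit together is given below in Python; the class names, the $0.12 per VM-hour rate, and the admission rule are illustrative assumptions, not part of any real cloud API:

```python
from dataclasses import dataclass

# Toy model of the market-oriented allocation loop described above. The request
# examiner admits a request only if enough VM capacity is free, the dispatcher
# allocates VMs, pricing charges per VM-hour, and accounting records the charge.
# All numbers and names are illustrative assumptions.

@dataclass
class ServiceRequest:
    user: str
    vms_needed: int
    hours: int

class SLAResourceAllocator:
    def __init__(self, total_vms: int, rate_per_vm_hour: float = 0.12):
        self.free_vms = total_vms
        self.rate = rate_per_vm_hour
        self.ledger = {}   # accounting: user -> accumulated charges

    def submit(self, req: ServiceRequest) -> bool:
        # Service request examiner: reject rather than overload the data center.
        if req.vms_needed > self.free_vms:
            return False
        self.free_vms -= req.vms_needed                     # dispatcher allocates VMs
        charge = req.vms_needed * req.hours * self.rate     # pricing mechanism
        self.ledger[req.user] = self.ledger.get(req.user, 0.0) + charge
        return True

if __name__ == "__main__":
    allocator = SLAResourceAllocator(total_vms=100)
    print(allocator.submit(ServiceRequest("broker-A", vms_needed=40, hours=10)))  # True
    print(allocator.submit(ServiceRequest("broker-B", vms_needed=80, hours=5)))   # False
    print(allocator.ledger)
```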


Quality of Service Factors


a. The data center consists of multiple computing servers designed to meet service demand.
b. Cloud services are essential for critical business operations, requiring careful
consideration of Quality of Service (QoS) parameters like time, cost, reliability, and
security.
c. QoS requirements are not static; they evolve with changing business operations and
environments.
d. Emphasis is on customer satisfaction, as they pay for cloud services.
e. Current cloud computing lacks robust support for dynamic negotiation of Service Level
Agreements (SLAs) and automatic resource allocation.
f. Effective negotiation mechanisms are necessary for establishing SLAs and responding
to alternate offers.
g. Commercial cloud services should provide customer-driven management based on
client profiles and service needs.
h. Risk management strategies are essential to identify and manage risks related to service
delivery.
i. The cloud should implement market-based resource management strategies that support
both customer needs and SLA objectives.
j. Autonomic resource management models are required to adapt to shifting service
requirements and leverage Virtual Machine technology for dynamic resource allocation.

Virtualization Support and Disaster Recovery


a. Cloud computing relies on system virtualization and updated provisioning tools.
b. Virtualizing servers within a shared cluster enables the consolidation of web services.
c. Virtual Machines (VMs) serve as containers for cloud services, aiding in the
deployment of services on physical nodes.
d. Users are abstracted from the underlying physical resources involved in service
provision.
e. Application developers can focus solely on service logic, without concerning
themselves with issues such as scalability and fault tolerance, as these are managed
through virtualization.
f. Infrastructure for server virtualization in data centers is essential for implementing
specific cloud applications.


Hardware Virtualization
1. Cloud computing systems utilize virtualization software to simulate hardware, allowing
for the execution of unmodified operating systems.
2. This software is essential for running legacy applications and developing new cloud
applications, enabling developers to choose their preferred operating systems and
programming environments.
3. Virtualization software creates a consistent development and deployment environment,
reducing runtime issues.
4. System virtualization software acts as a hardware emulation layer, allowing an
unmodified OS to run as though it were operating directly on bare hardware.
5. Virtual machines (VMs) on cloud platforms primarily host third-party applications and
offer flexible runtime services that relieve users from environmental concerns.
6. VMs provide individual users with full privileges while ensuring separation for security
and customization.
7. Multiple VMs can operate on a single physical server, with each capable of running
different operating systems.


8. A support structure, including the virtual disk storage and virtual networks essential
for VMs, is established to form a resource pool.
9. Special servers, termed virtualizing integration managers, manage the virtualization
process and oversee loads, resources, security, and data provisioning.
10. Cloud services are centralized and managed through these integrated platforms,
enhancing overall operational efficiency.

Virtualization Support in Public Clouds


1. Three major public clouds are AWS, Microsoft Azure, and Google App Engine (GAE).
2. AWS offers high flexibility through Virtual Machines (VMs) for users to run custom
applications.
3. GAE provides limited application-level virtualization, focusing on services developed
by Google.
4. Microsoft Azure supports programming-level virtualization (.NET) for application
development.
5. VMware tools are applicable for workstations, servers, and virtual infrastructures.
6. Microsoft tools cater to PCs and certain specialized servers.
7. The XenEnterprise tool is exclusively designed for Xen-based servers.
8. The IT industry is increasingly oriented towards cloud computing.
9. Virtualization enhances high availability (HA), disaster recovery, dynamic load
balancing, and comprehensive provisioning support.

10. Both cloud and utility computing utilize virtualization to deliver scalable and
autonomous computing environments.

Storage Virtualization for Green Data Centers


1. IT power consumption in the United States has exceeded 3% of total energy usage,
more than doubling over time.
2. The proliferation of data centers significantly contributes to this energy crisis.
3. Over half of Fortune 500 companies are enacting new corporate energy policies.
4. Surveys from IDC and Gartner indicate that virtualization has substantially reduced
costs and power consumption in physical computing environments.
5. The IT industry is increasingly prioritizing energy awareness due to the urgent need for
power conservation.
6. Alternative energy resources have seen minimal evolution, highlighting the importance
of conserving power in all computing systems.
7. Virtualization and server consolidation have been effective strategies for reducing
energy consumption.
8. Green data centers and storage virtualization are viewed as further enhancements to
green computing initiatives.

Virtualization for IaaS


1. VM technology has become widely adopted, allowing for custom environments on
physical infrastructure in cloud computing.
2. Benefits of using VMs in cloud environments include:
3. System administrators can consolidate workloads from underutilized servers into
fewer servers.
4. VMs can run legacy code without disrupting other APIs.
5. Enhanced security through the creation of sandboxes for potentially unreliable
applications.
6. Performance isolation in virtualized platforms offers better service guarantees and
improved quality of service for customer applications.

VM Cloning for Disaster Recovery


1. VM technology necessitates a sophisticated disaster recovery strategy.
2. Two main recovery schemes:
i) Recovering one physical machine with another physical machine.
ii) Recovering one VM with another VM.
3. Traditional recovery between physical machines is slow, complex, and costly.
4. Recovery time for physical machines includes hardware configuration, OS installation,
configuring backup agents, and lengthy restart times.
5. Recovering VMs reduces installation and configuration times, resulting in disaster
recovery times approximately 40% shorter than recovery for physical machines.
6. Virtualization enhances swift disaster recovery through VM encapsulation.
7. Cloning VMs is an effective disaster recovery solution:
8. Create a clone VM on a remote server for each running VM on a local server.


9. Only one clone needs to be active; others can be suspended.


10. A cloud control center can activate suspended clones in case of original VM failure.
11. Live migration can occur rapidly with snapshot capabilities.
12. Updated data and modified states are sent to the suspended VM to sync.
13. The Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are
influenced by the number of snapshots taken.
14. VM security must be maintained during live migration.
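A back-of-the-envelope sketch of the RPO/RTO arithmetic implied above: the RPO is bounded by the snapshot/sync interval (items 11-13), and VM recovery is roughly 40% shorter than physical-machine recovery (item 5). The snapshot interval and the individual recovery-step times below are illustrative assumptions, not measured values:

```python
# Illustrative RPO/RTO arithmetic for the VM-cloning recovery scheme above.
# RPO: worst-case data-loss window, bounded by how often updated state is
#      shipped to the suspended clone VM.
# RTO: time needed to bring the clone (or a rebuilt physical machine) into service.
# All timing values below are assumptions for illustration only.

snapshot_interval_min = 15          # clone is synced every 15 minutes
rpo_min = snapshot_interval_min     # worst case: failure just before the next sync

physical_recovery_min = {
    "hardware configuration": 60,
    "OS installation": 45,
    "backup agent setup": 20,
    "restart and restore": 55,
}
physical_rto = sum(physical_recovery_min.values())   # 180 minutes in this example
vm_rto = physical_rto * 0.6                          # ~40% shorter, per the notes

print(f"RPO ~ {rpo_min} minutes")
print(f"RTO ~ {physical_rto} min (physical) vs ~ {vm_rto:.0f} min (VM clone)")
```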

Architectural Design Challenges


Challenge 1—Service Availability and Data Lock-in Problem
a. The management of a cloud service by a single company can create single
points of failure.
b. Utilizing multiple cloud providers can enhance high availability (HA) and provide
better protection against failures than relying on a single provider, even one that
operates multiple data centers.
c. Distributed denial of service (DDoS) attacks pose a threat to SaaS providers
by rendering their services unavailable.
d. Some utility computing services allow defense against DDoS attacks
through quick scaling options.
e. While software stacks have improved interoperability, proprietary APIs
hinder easy data extraction and program mobility among different cloud
platforms.
f. Standardizing APIs would allow SaaS developers to deploy services and
data across various cloud providers, addressing data lock-in concerns and
reducing the risk of losing data due to a single company's failure.
g. API standardization facilitates a new usage model, enabling consistent
software infrastructure in both public and private clouds, and supporting “surge
computing” capabilities.
Challenge 2—Data Privacy and Security Concerns
a. Current cloud offerings predominantly utilize public networks, increasing
vulnerability to attacks.
b. Issues can be mitigated using established technologies like encrypted storage,
virtual LANs, and network middleboxes (e.g., firewalls).
c. Data should be encrypted prior to being stored in the cloud.
d. Many countries enforce laws mandating SaaS providers to store customer and
copyrighted data within their national borders.
e. Traditional network attacks include:
i) Buffer overflows
ii) DoS attacks
iii) Spyware
iv) Malware
v) Rootkits
vi) Trojan horses
vii) Worms
f. Cloud-specific attacks may arise from:

i) Hypervisor malware
ii) Guest hopping and hijacking
iii) VM rootkits
g. Man-in-the-middle attacks can occur during VM migrations.
h. Passive attacks aim to steal sensitive information, while active attacks
manipulate kernel data structures, potentially causing severe damage to cloud
servers.
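Item (c) above, encrypting data before it is stored in the cloud, can be done on the client side. A minimal sketch using Python's `cryptography` package (one possible library choice, not one prescribed by these notes); the upload call is a hypothetical placeholder:

```python
# Client-side encryption before upload, so the cloud provider only ever sees
# ciphertext. Requires: pip install cryptography. The upload step is left as a
# placeholder because it depends on the provider's API.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key outside the cloud provider
cipher = Fernet(key)

plaintext = b"customer record: account=1234, balance=500"
ciphertext = cipher.encrypt(plaintext)

# upload_to_cloud(ciphertext)      # hypothetical provider-specific call
assert cipher.decrypt(ciphertext) == plaintext
print("round-trip OK, ciphertext length:", len(ciphertext))
```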

Challenge 3—Unpredictable Performance and Bottlenecks


a. Cloud computing allows multiple VMs to share CPUs and memory, but I/O
sharing presents challenges.
b. Measurements on 75 EC2 instances running the STREAM benchmark showed a
mean memory bandwidth of 1,355 MB/second, yet writing 1 GB files from each
instance achieved a mean disk write bandwidth of only 55 MB/second.
c. This discrepancy illustrates the issue of I/O interference among VMs.
d. Enhancing I/O architectures and operating systems could improve the
virtualization of interrupts and I/O channels.
e. As internet applications grow more data-intensive, distributed data across
cloud boundaries complicates data management.
f. Cloud users and providers must consider data placement and traffic to
reduce costs, as demonstrated by Amazon's CloudFront service development.
g. Addressing data transfer bottlenecks, expanding narrow links, and replacing
weak servers is essential.
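Using only the two figures quoted in item (b), the gap between per-instance memory bandwidth and shared disk write bandwidth can be made explicit with a quick Python check:

```python
# Quick check of the I/O-interference gap described above, using the
# EC2/STREAM figures quoted in the notes.
mean_memory_bw_mbs = 1_355    # MB/s mean STREAM memory bandwidth per instance
mean_disk_write_bw_mbs = 55   # MB/s mean disk write bandwidth per instance

gap = mean_memory_bw_mbs / mean_disk_write_bw_mbs
seconds_to_write_1gb = 1_024 / mean_disk_write_bw_mbs

print(f"Memory bandwidth is ~{gap:.0f}x the shared disk write bandwidth")
print(f"Writing a 1 GB file takes ~{seconds_to_write_1gb:.0f} s per instance")
```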

Challenge 4—Distributed Storage and Widespread Software Bugs


1. The database in cloud applications is continuously expanding, necessitating a
storage system that can accommodate this growth while leveraging cloud
scalability.
2. There is a need for designing efficient distributed Storage Area Networks (SANs)
that meet programmers' requirements for scalability, data durability, and high
availability (HA).
3. A significant challenge in cloud computing is maintaining data consistency within
SAN-connected data centers.
4. Debugging large-scale distributed issues is complicated by the inability to
reproduce bugs, requiring debugging to occur at scale within production
environments.
5. No single data center can guarantee an ideal debugging environment; thus,
alternatives such as utilizing Virtual Machines (VMs) may offer a solution by
allowing for the capture of critical information.
6. Another potential solution involves using simulators for debugging, provided they
are well-designed.


Challenge 5—Cloud Scalability, Interoperability, and Standardization


1. The pay-as-you-go model applies to storage and network bandwidth, with
charges based on bytes used, while computation costs vary with virtualization
level.
2. Google App Engine (GAE) scales automatically according to load, charging
users based on cycle usage.
3. Amazon Web Services (AWS) charges hourly for virtual machine (VM)
instances, even when idle.
4. There is an opportunity for cost savings through rapid scaling in response to load
variations, ensuring compliance with SLAs.
5. The Open Virtualization Format (OVF) provides a secure and portable
framework for packaging and distributing VMs, independent of specific
platforms or operating systems.
6. OVF supports virtual appliances that can encompass multiple VMs and defines
a transport mechanism for VM templates across various virtualization
platforms.
7. Standardization is necessary to allow virtual appliances to operate on diverse
platforms and enable hypervisor-agnostic VMs.
8. Cross-platform live migration is essential for compatibility between x86 Intel
and AMD technologies, along with support for legacy hardware for load
balancing.
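To make the billing difference in items 2 and 3 concrete, consider a VM that is busy only 8 hours a day; the $0.10 per VM-hour rate below is an assumed figure for illustration, not a published price:

```python
# Hypothetical illustration of items 2-3 above: a provider that bills per VM-hour
# charges for idle hours, while a platform that bills by actual usage does not.
# The $0.10/hour rate and the 8-busy-hours-per-day workload are assumptions.

hours_per_month = 24 * 30
busy_hours_per_month = 8 * 30
rate_per_vm_hour = 0.10

always_on_vm_cost = hours_per_month * rate_per_vm_hour        # billed even when idle
usage_based_cost = busy_hours_per_month * rate_per_vm_hour    # billed only for work done

print(f"Per-VM-hour billing (always on): ${always_on_vm_cost:.2f}/month")
print(f"Usage-based billing (busy only): ${usage_based_cost:.2f}/month")
```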

Challenge 6—Software Licensing and Reputation Sharing


1. Many cloud computing providers initially favored open-source software due to its
more suitable licensing model for utility computing.
2. There are opportunities for open source to maintain its popularity or for commercial
software companies to adapt their licensing structures to cloud computing needs,
including pay-for-use and bulk-use options.
3. Customer misconduct can harm the entire cloud's reputation, exemplified by spam-
prevention services blacklisting EC2 IP addresses, which can hinder VM
installation processes.
4. A potential solution is the development of reputation-guarding services akin to
"trusted e-mail" offerings for smaller ISPs, available for a fee.
5. Legal issues surrounding the transfer of liability between cloud providers and
customers need to be addressed, ideally through Service Level Agreements (SLAs).

Session 21 questions:
1. What is the primary goal of a market-oriented cloud architecture?
2. What is the role of a Service Level Agreement (SLA) in cloud computing?
3. Name two mechanisms used for pricing and billing in cloud computing.
4. How does virtualization enhance disaster recovery in cloud environments?
5. What are the three major public cloud providers?


Question Bank
1. Explain Public Cloud.
2. Explain Private Cloud.
3. Explain Hybrid Cloud.
4. With a neat figure, explain private, public, and hybrid clouds.
5. Discuss data center networking for the cloud with a neat figure.
6. What are PaaS, SaaS, and IaaS?
7. Explain the Cloud Services.
8. Explain the six cloud design objectives.
9. Explain data-center interconnection networks.
10. Explain the modular data center in a shipping container.
11. Explain the interconnection of modular data centers.
12. Explain data-center management issues.
13. Explain the generic cloud architecture.
14. Explain layered cloud architectural development.
15. Explain virtualization support and disaster recovery.
16. Explain the architectural design challenges.

