Module 3 Notes
Over the past two decades, the global economy has shifted from manufacturing to service industries.
By 2010, about 80% of the U.S. economy was service-based, with only 15% in manufacturing and 5% in agriculture and other sectors.
Cloud computing is especially beneficial to the service industry and brings a new way of doing business computing.
In 2009, the global cloud market was worth $17.4 billion; it was predicted to grow to $44.2 billion by 2013.
Cloud application developers rent resources from large automated data centers instead of buying and operating their own infrastructure.
These platforms are usually built on top of large data centers using virtualization.
Cloud computing turns data centers into virtual systems with automated management
of hardware, databases, interfaces, and applications.
The goal of cloud computing is to improve data centers using automation and efficient
resource use.
Cloud computing evolved from earlier models such as cluster, grid, and utility computing.
Cluster and grid computing use many computers working together.
Utility computing and SaaS let users pay only for what they use.
Cloud computing delivers these services from large data centers.
Users can access cloud services from anywhere, at any time.
Instead of moving data to the computation, the cloud sends programs to where the data is stored.
This reduces data movement, saving time and network bandwidth.
Virtualization improves resource utilization and cuts costs.
Companies don’t need to set up or manage servers themselves.
Cloud provides hardware, software, and data only when needed.
The goal is to replace desktop computing with online services.
Cloud can run many different apps at the same time easily.
4.1.1.1 Centralized versus Distributed Computing
People may worry about using clouds in other countries unless strong agreements (SLAs)
are made.
A private cloud is built and used within one organization (not public).
It is owned and managed by the company itself.
Only the organization and its partners can access it — not the general public.
It does not sell services over the Internet like public clouds do.
Private clouds give flexible, secure, and customized services to internal users.
They allow the company to keep more control over data and systems.
Private clouds may complicate cloud standardization, but they give the company better customization and control.
The core of a cloud is a server cluster made of many virtual machines (VMs).
Compute nodes do the work; control nodes manage and monitor cloud tasks.
Gateway nodes connect users to the cloud and handle security.
Clouds create virtual clusters for users and assign jobs to them.
Unlike old systems, clouds handle changing workloads by adding or removing resources
as needed.
Private clouds can support this flexibility if well designed.
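A minimal sketch (illustrative only, not tied to any particular cloud's API) of this kind of threshold-based elasticity, where virtual nodes are added or released as the measured load changes:

    # Toy threshold-based autoscaler; all names and thresholds are assumptions.
    def autoscale(current_nodes, avg_cpu_util,
                  scale_up_at=0.80, scale_down_at=0.30,
                  min_nodes=1, max_nodes=100):
        # Return the new node count for the virtual cluster.
        if avg_cpu_util > scale_up_at and current_nodes < max_nodes:
            return current_nodes + 1      # workload rising: add a VM
        if avg_cpu_util < scale_down_at and current_nodes > min_nodes:
            return current_nodes - 1      # workload falling: release a VM
        return current_nodes              # load is within the comfort band

    # Example: 8 nodes running at 85% average utilization -> scale out to 9.
    print(autoscale(8, 0.85))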
Supercomputers use separate storage and custom networks (like fat trees or 3D
torus).
Data centers use server disks, memory, and standard IP networks (like 10 Gbps
Ethernet).
Layer 3 of the data-center network consists of access and border routers that link the data center to the Internet.
Private clouds will grow faster than public clouds in the future.
Private clouds are more secure and trustworthy for companies.
Private clouds may become public clouds or hybrids as they mature.
Hybrid clouds (mix of private and public) will be common in the future.
Applications (like email) use service-access nodes to connect to internal cloud
services.
Supporting service nodes help manage cloud tasks (like locking services).
Independent service nodes provide specific services, like geographical data.
Clouds improve network efficiency by reducing data movement.
Clouds help solve the petascale I/O problem (handling large data).
Cloud performance and Quality of Service (QoS) are still being evaluated in real-world use.
4.1.2 Cloud Ecosystem and Enabling Technologies
Cloud computing: Rent resources and pay only for what you use.
Cloud computing can save 80–95% of the cost compared to traditional computing.
Shift from desktops to data centers: Move computing, storage, and software from local
devices to data centers over the Internet.
Service provisioning and economics: Cloud providers offer services with SLAs,
focusing on efficiency and pay-as-you-go pricing.
Scalability: Cloud platforms must be able to scale as the number of users grows.
Data privacy protection: Ensure data privacy to build trust in cloud services.
High quality of services: Standardize Quality of Service (QoS) to ensure
interoperability between different cloud providers.
Traditional IT costs:
Users buy hardware and face both fixed capital costs and ongoing operational costs (e.g., maintenance).
Costs increase as the number of users grows.
Pay-as-you-go model: No upfront cost; users pay only for operational expenses.
Cloud computing is cheaper because it avoids large initial investments and scales with demand (a rough numeric comparison is sketched below).
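A back-of-the-envelope comparison of the two cost models, using made-up prices (every number below is an assumption, not a quoted rate):

    # Owned server: large upfront cost plus yearly upkeep, paid whether or not
    # the machine is busy. Cloud: pay only for the hours actually used.
    server_purchase = 5000.0      # assumed purchase price of one server ($)
    yearly_maintenance = 1000.0   # assumed power/cooling/admin per year ($)
    cloud_rate = 0.10             # assumed on-demand VM price ($/hour)
    hours_used_per_year = 2000    # VM only needed during busy periods

    owned_cost_3y = server_purchase + 3 * yearly_maintenance
    cloud_cost_3y = 3 * hours_used_per_year * cloud_rate

    print(f"Owned server, 3 years : ${owned_cost_3y:,.0f}")   # $8,000
    print(f"Pay-as-you-go, 3 years: ${cloud_cost_3y:,.0f}")   # $600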
Cloud computing is especially attractive for small businesses:
No need to buy expensive equipment; they just pay for what they use.
Ideal for businesses that need flexibility and want to avoid heavy capital expenses.
Cloud Ecosystems:
Private and hybrid clouds are growing, offering flexible resources with public cloud involvement.
Cloud tools:
VM management tools such as vSphere, oVirt, and OpenNebula help manage VMs and virtualized resources.
Public clouds such as Amazon EC2, and EC2-compatible toolkits such as Eucalyptus, support cloud infrastructure services.
4.1.3 Infrastructure-as-a-Service (IaaS)
IaaS means renting IT infrastructure such as servers, storage, and networks over the Internet.
It provides virtual machines, storage, networks, and firewalls.
Users can choose their own operating system and software.
Users do not manage the physical hardware, only the virtual resources (see the sketch after the provider list below).
Examples of IaaS providers:
o Amazon EC2, S3
o Microsoft Azure VMs
o Google Compute Engine
o IBM Cloud
o Oracle Cloud Infrastructure (OCI)
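As a concrete illustration of renting a VM from an IaaS provider, a minimal sketch using the AWS boto3 SDK for Python; the region, AMI ID, and key pair below are placeholders, not values from these notes:

    import boto3  # AWS SDK for Python (third-party package)

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="t2.micro",          # small, low-cost instance type
        KeyName="my-key-pair",            # placeholder SSH key pair
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", response["Instances"][0]["InstanceId"])

The user gets a running virtual machine in minutes and is billed only while it runs; the physical host is chosen and managed entirely by the provider.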
Examples of PaaS (Platform-as-a-Service) providers:
Microsoft Azure App Service
Heroku
IBM Cloud Foundry
Red Hat OpenShift
With PaaS, users pay only for what they use; there are no upfront setup or hardware costs.
It helps teams collaborate easily and launch applications faster.
Examples of SaaS (Software-as-a-Service) applications:
Gmail
Google Docs
Microsoft 365
Salesforce
Zoom
A data center is a facility that houses a large number of servers connected through a high-speed network to provide computing, storage, and service resources. A modular data center is a smaller, portable version that fits inside a 40-foot container and can be easily deployed and expanded, offering flexibility and scalability for remote or rapid deployment needs.
Large Data Centers: Cloud computing uses huge data centers, sometimes as big as
shopping malls, with hundreds of thousands of servers.
Lower Costs for Bigger Centers: Bigger data centers are cheaper to run per server due
to economies of scale.
Cost Breakdown: Operating a large data center is cheaper, with lower network and
storage costs compared to small ones.
Microsoft's Global Data Centers: Microsoft runs around 100 data centers worldwide to
support cloud services.
Data Center Components: Data centers use off-the-shelf components, such as servers
with multi-core CPUs, DRAM, and disk drives. Servers are connected via switches for
resource access.
Bandwidth and Latency: Disk bandwidth varies between local and off-rack storage.
Large data centers often face challenges due to discrepancies in latency, bandwidth, and
capacity.
Failure and Reliability: Hardware and software failures are common in large data centers. Redundant hardware and data replication ensure reliability, preventing data loss during failures.
Cooling System: Data centers use raised floors to distribute cool air to server racks. Cold air is pumped into the floor and escapes through perforated tiles in front of racks. Hot air is then returned to the cooling units for recirculation. Advanced systems may also use cooling towers for efficiency.
The interconnection network in a data center connects all servers in the cluster and is a vital component for performance.
The network must handle both point-to-point (direct server-to-server) and collective (group) communication among servers.
Most data centers use standard, off-the-shelf components rather than custom-built supercomputing parts.
Each server has multicore CPUs, DRAM, and local disk drives.
The total disk storage is about 10 million times larger than the memory (DRAM).
Big applications must handle the differences in speed and capacity between memory and disk storage.
Large-scale data centers are more cost-efficient, but failures are common (about 1% of servers may be down at any time), caused by hardware faults and software bugs.
Data centers use raised floors (2–4 feet above the concrete slab) to hide cables and distribute cool air.
Cold air is blown under the floor by CRAC (Computer Room Air Conditioning) units.
Cold air comes out through perforated tiles in front of the server racks.
A critical part of data center design is the interconnection network that links all servers within the cluster. This network must be carefully designed to meet five key requirements: low latency for fast communication, high bandwidth to handle large volumes of data, low cost to maintain affordability, support for the Message Passing Interface (MPI) used in parallel processing, and fault tolerance to ensure the system keeps running even when some components fail. The inter-server network must efficiently handle both point-to-point communication between individual servers and collective communication among multiple servers at once. Meeting these needs is essential for the smooth and reliable operation of large-scale data centers.
The network must support all MPI communication types (point-to-point and collective).
It should have high bisection bandwidth to handle large data flow across the network.
One-to-many communication is needed for tasks like distributed file access.
Metadata master servers must talk to many slave servers.
The network should support MapReduce operations at high speed.
It must handle different types of network traffic used by various applications.
Fat-tree and crossbar networks can be built using low-cost Ethernet switches.
As server numbers grow, network design becomes more complex.
Modular growth is important—server containers are used as building blocks.
Each data-center container can hold hundreds to thousands of servers.
Containers are pre-built units: just connect power, network, and cooling to start.
This reduces setup and maintenance costs.
Data-center interconnection networks follow two broad design approaches:
1. Switch-centric: Uses switches to connect servers; no changes are needed on the servers themselves.
2. Server-centric: Modifies the operating system on the servers with special drivers to handle traffic.
A fat-tree network design is one example used in data centers (a sketch of how it scales follows this list):
The topology has two layers: a core layer of switches at the top and a layer of pods below it.
Pods: Each pod contains edge switches, aggregation switches, and the server nodes connected to them.
A switch failure does not bring down the entire network, because alternate paths exist.
An edge switch failure affects only a small number of servers.
Higher bandwidth within pods supports massive data movement for cloud applications.
Low-cost Ethernet switches are used, making the design more affordable.
Routing algorithms inside the switches find alternate paths when a failure occurs.
Server nodes are not affected by switch failures as long as alternate paths remain intact.
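A small sketch, assuming the standard k-ary fat-tree construction built from identical k-port switches (the exact construction is not spelled out in these notes), of how switch and host counts grow with the port count k:

    # k-ary fat-tree: k pods, each with k/2 edge and k/2 aggregation switches,
    # (k/2)^2 core switches, and k^3/4 attached hosts in total.
    def fat_tree_size(k):
        assert k % 2 == 0, "port count k must be even"
        edge = aggregation = k * (k // 2)
        core = (k // 2) ** 2
        hosts = (k ** 3) // 4
        return {"edge": edge, "aggregation": aggregation,
                "core": core, "hosts": hosts}

    # Example: 48-port commodity Ethernet switches can wire up 27,648 hosts.
    print(fat_tree_size(48))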
Modern data centers are often made up of truck-towed containers that house server clusters.
SGI ICE Cube modular data centers have containers that hold hundreds of blade servers in racks, with
fans circulating heated air through a heat exchanger to cool it for the next rack.
A single container can hold 46,080 processing cores or 30 PB of storage.
Cooling costs can be reduced by 80% compared to traditional data centers through efficient chilled air and
cold water circulation.
These data centers are often built in locations with cheaper utilities and more efficient cooling.
Modular containers can form a large-scale data center, like a shipping yard of containers.
Centralized management (in a single building) is important for handling data integrity, server monitoring, and security management.
Data center construction evolves in stages: starting with a single server, moving to a rack system, and finally to a container system.
Building a rack of 40 servers might take half a day, but expanding to a container system with 1,000 servers requires more time for layout, power, networking, cooling, and testing.
Multiple container-based data-center modules can be combined to build large-scale data centers.
One example of such a network design is the server-centric BCube network for modular data centers.
The BCube network uses a layered (recursive) structure:
Level 0 (BCube0) consists of server nodes attached to n-port switches.
In BCube1, n BCube0 modules are connected together through an additional level of switches (a size sketch follows below).
The BCube network provides multiple paths between any two nodes, offering fault tolerance and load balancing.
Routing support is provided via a kernel module in the server's OS, allowing packet forwarding without modifying upper-layer cloud applications.
This design allows cloud applications to run on top of the BCube network without requiring changes.
BCube networks are used inside the server containers of modular data centers.
However, connecting multiple containers requires a new level of networking, leading to the design of MDCube (Modularized Datacenter Cube).
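A small sketch, assuming the standard recursive BCube construction from n-port switches (the parameters n and k are not given in these notes), of how server and switch counts grow with the level k:

    # BCube_k built from n-port switches:
    #   servers = n^(k+1); switches = n^k per level, with k+1 levels in total.
    def bcube_size(n, k):
        servers = n ** (k + 1)
        switches = (k + 1) * (n ** k)
        return {"servers": servers, "switches": switches}

    # Example: 8-port switches, two levels (k = 1) -> 64 servers, 16 switches.
    print(bcube_size(8, 1))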
Quality Service for Users: The data center should ensure quality service for its users for at least 30 years.
Controlled Information Flow: The system should streamline information flow and focus on high availability and consistent services.
Multiuser Manageability: The system must handle all data center functions, such as traffic flow, database updates, and server maintenance.
Scalability: The system should grow with increasing workload, with scalable storage, processing, I/O, and network resources.
Reliability in Virtualized Infrastructure: Features like failover, fault tolerance, and VM live migration
should be integrated to ensure recovery from failures or disasters.
Low Cost: Minimize costs for both users and providers, including operational expenses.
Security and Data Protection: Strong security measures should be in place to protect against attacks,
ensure data privacy, and maintain integrity.
Green Technology: Data centers should focus on energy efficiency and reducing power consumption.
Factory Racking and Packing: Data centers should be built efficiently in factories, avoiding complex
packaging layers at the customer site.
Custom-Crafted vs. Prefabricated: While modular designs are more space-efficient, data centers are still
typically custom-crafted rather than being prefab units.
High Power Density: Modular data centers can support high power densities, exceeding 1250 W/sq ft.
Flexible Installation Locations: These data centers can be installed on rooftops or parking lots.
Future Upgrades: It's important to include enough redundancy and flexibility in the design to allow for
future upgrades as needs grow.
An Internet cloud is envisioned as a public cluster of servers provisioned on demand to perform collective web
services or distributed applications using data-center resources.
4.3.1.1 Cloud Platform Design Goals
Scalability: The system should easily expand by adding more servers and network capacity as needed, supporting growing workloads and user demands.
Virtualization: Cloud management must support both physical and virtual machines, allowing flexible resource allocation and efficient operation.
Efficiency: The platform should be optimized for performance and resource use, ensuring cost-effectiveness and smooth operations.
Reliability: Data should be replicated across multiple locations, so that even if one data center fails, the data remains available.
Several broader trends have made cloud services practical:
1. Ubiquitous Networking: Widespread broadband and wireless networking allow seamless access to cloud
services.
2. Falling Storage Costs: Decreasing prices for storage make it more affordable to build and maintain large-
scale data centers.
3. Improvements in Internet Software: Advancements in software have enhanced cloud capabilities, making
services more reliable and efficient.
Key enabling technologies include:
1. Hardware Advancements: Improvements in multicore CPUs, memory chips, and disk arrays allow for faster and more powerful data centers with vast storage capacity.
2. Resource Virtualization: Virtualization enables quick cloud deployment and helps with disaster recovery.
3. Service-Oriented Architecture (SOA): Supports flexible cloud service design and integration.
4. Software as a Service (SaaS) & Web 2.0 Standards: These have enabled cloud applications and services to be easily accessible and scalable.
5. Internet Performance: Better network infrastructure ensures fast and reliable cloud access.
6. Large-Scale Distributed Storage: A foundational technology for handling vast amounts of data across cloud environments.
7. License Management and Billing: Advances in managing licenses and automating billing help streamline cloud service operations.
The cloud is made up of many servers that are added or removed as needed.
These servers can be either physical machines or virtual machines (VMs).
Key Components:
The cloud software automatically handles resources, such as adding or removing servers.
Big companies like Google and Microsoft have data centers around the world to make their clouds work efficiently.
Cloud Types: private, public, and hybrid clouds.
Security Concerns:
Trust & Reputation: Systems that ensure resources are safe and reliable.
Security Monitoring: Keeps the cloud safe from attacks and breaches.
Privacy Issues: Protecting the data and ensuring only authorized access.
The platform layer (PaaS) is the middle layer, built on top of the infrastructure.
It offers tools for developing, testing, and running apps.
It ensures scalability, security, and reliability.
It acts as a software environment for developers.
QoS requirements are not fixed and can change over time.
Cloud systems must prioritize customer needs since customers pay for the services.
Current cloud systems lack strong support for dynamic SLA negotiation.
SLA negotiation mechanisms are needed to handle changing user demands and alternate offers.
Clouds must support customer-driven service management based on individual profiles and needs.
Computational risk management helps identify and manage risks in service execution.
Resource management should use market-based strategies to balance user needs and system efficiency.
Autonomic resource management allows the system to adapt to changes without manual intervention.
One very distinguishing feature of cloud computing infrastructure is the use of system virtualization and the
modification to provisioning tools. Virtualization of servers on a shared cluster can consolidate web services. As the
VMs are the containers of cloud services, the provisioning tools will first find the corresponding physical machines
and deploy the VMs to those nodes before scheduling the service to run on the virtual nodes.
In addition, in cloud computing, virtualization also means the resources and fundamental infrastructure are
virtualized. The user will not care about the computing resources that are used for providing the services. Cloud
users do not need to know and have no way to discover physical resources that are involved while processing a
service request. Also, application developers do not care about some infrastructure issues such as scalability and
fault tolerance (i.e., they are virtualized). Application developers focus on service logic.
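To make the provisioning step concrete, a toy sketch (not any real provisioning tool) of mapping requested VMs onto physical machines with a simple first-fit policy before services are scheduled onto the virtual nodes:

    # First-fit VM placement: give each requested VM (sized in CPU cores)
    # the first physical host that still has enough free cores.
    def place_vms(vm_requests, host_capacities):
        free = list(host_capacities)      # remaining cores per host
        placement = {}                    # vm index -> host index (or None)
        for vm, cores in enumerate(vm_requests):
            for host, avail in enumerate(free):
                if avail >= cores:
                    free[host] -= cores
                    placement[vm] = host
                    break
            else:
                placement[vm] = None      # no host can fit this VM right now
        return placement

    # Example: three VM requests placed onto two 8-core hosts.
    print(place_vms([4, 6, 3], [8, 8]))   # {0: 0, 1: 1, 2: 0}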
Storage models vary: AWS has block and blob stores, Azure uses SQL Data Services, and GAE uses BigTable.
Network configurations are mostly automated and hidden from users, with scaling managed internally.
4.3.3.2 Virtualization Support in Public Clouds
AWS offers full VM-level virtualization, giving users high flexibility to run custom applications.
GAE (Google App Engine) offers limited, application-level virtualization, restricting users to Google's predefined services.
Microsoft Azure provides programming-level virtualization through the .NET framework.
Microsoft virtualization tools are designed for PCs and some specific servers.
XenEnterprise tools are used for Xen-based server virtualization only.
The IT industry is widely adopting cloud computing due to its benefits.
Virtualization supports high availability (HA), disaster recovery, dynamic load balancing, and
resource provisioning.
Cloud computing and utility computing both rely on virtualization for scalability and automation.
IT power consumption in the U.S. has more than doubled, now using 3% of the nation’s total energy.
A major cause is the high number of power-hungry data centers.
Over half of Fortune 500 companies are adopting new energy-saving policies.
Virtualization significantly reduces energy costs by lowering physical server usage.
Surveys by IDC and Gartner confirm virtualization’s role in cutting power consumption.
The IT industry is becoming more energy-conscious due to rising power concerns.
There is a growing need to save energy because alternative energy options are limited.
Virtualization and server consolidation help reduce the number of physical machines needed.
Green data centers aim to use energy-efficient infrastructure.
Storage virtualization adds to energy savings by optimizing storage usage.
Together, these efforts support the goal of green computing.
Underutilized servers can be consolidated onto fewer machines, saving resources.
VMs can run legacy code without affecting other system interfaces or APIs.
VMs improve security by using sandboxes to isolate risky applications.
VMs support performance isolation, enabling better QoS guarantees for customers.
Standard APIs support both public and private cloud usage (the hybrid model).
This enables "surge computing": using public cloud resources when private cloud capacity is exceeded.
4.3.4.2 Challenge 2—Data Privacy and Security Concerns
Public cloud networks are more exposed to attacks than private networks.
Existing technologies like encrypted storage, VLANs, and firewalls can help secure cloud systems.
Encrypting data before storing it in the cloud adds protection (a minimal example follows below).
Some laws require SaaS providers to store data within national borders.
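One minimal sketch of encrypting data on the client side before uploading it, using the third-party Python cryptography package's Fernet recipe (one possible approach, not one mandated by the notes):

    from cryptography.fernet import Fernet  # pip install cryptography

    # The key stays with the data owner and is never stored at the provider.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    plaintext = b"customer records to be stored in the cloud"
    ciphertext = cipher.encrypt(plaintext)   # upload only this ciphertext

    # Later, after downloading the ciphertext back from the cloud:
    assert cipher.decrypt(ciphertext) == plaintext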
Traditional network attacks include DoS, buffer overflows, spyware, and malware.
Cloud-specific threats include hypervisor malware, guest VM hopping, and VM rootkits.
Multiple VMs can share CPU and memory easily, but sharing I/O causes performance issues.
Example: 75 EC2 instances show good memory bandwidth (1,355 MB/s) but poor disk write bandwidth (55
MB/s).
I/O interference occurs when VMs compete for disk access.
A solution is to improve I/O architecture and OS support for virtualizing interrupts and I/O channels.
Internet applications are increasingly data-intensive and distributed across cloud boundaries.
This complicates data placement and transfer, increasing cost and latency.
Cloud application databases are constantly growing and need scalable storage solutions.
The goal is to design distributed SANs that scale up or down on demand.
Data centers must support scalability, durability, and high availability (HA).
Ensuring data consistency in SAN-connected cloud data centers is a significant challenge.
Large-scale distributed bugs are hard to reproduce, requiring debugging in live production environments.
Data centers typically do not allow in-production debugging.
Virtual machines (VMs) can help capture valuable debugging information.
Well-designed simulators offer another method for debugging distributed systems.
Storage and network bandwidth are billed based on the number of bytes used.
Computation charges vary by virtualization level: GAE charges by CPU cycles and auto-scales with load, while AWS charges hourly per VM instance, even if it is idle.
There is an opportunity to scale resources up and down rapidly based on load variation, saving cost without violating SLAs (a rough numeric illustration follows).
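A back-of-the-envelope illustration, with assumed prices and load, of why tracking the load matters under hourly per-instance billing:

    # Assumptions (made up): $0.10 per instance-hour, 20 instances needed
    # during 8 peak hours each day, only 4 instances needed otherwise.
    rate = 0.10
    peak_hours, off_hours = 8, 16
    peak_need, off_need = 20, 4

    static_cost = rate * 24 * peak_need   # provision for the peak all day
    elastic_cost = rate * (peak_hours * peak_need + off_hours * off_need)

    print(f"Static provisioning : ${static_cost:.2f}/day")    # $48.00
    print(f"Elastic provisioning: ${elastic_cost:.2f}/day")   # $22.40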
Open Virtualization Format (OVF) provides a secure, portable, and platform-independent way to package
and distribute VMs.
OVF supports packaging software to run on any virtualization platform, regardless of host or guest OS.
It includes transport mechanisms for VM templates and supports multi-VM virtual appliances.
Research is needed to create hypervisor-agnostic VMs and enable live migration between x86 Intel and
AMD systems.
Further efforts are required to support legacy hardware and achieve effective load balancing across
heterogeneous platforms.
Many cloud providers initially use open source software due to commercial software licensing not fitting
utility computing well.
There is an opportunity for open source to stay popular or for commercial vendors to adapt licensing models
for cloud use.
Combining pay-for-use and bulk-use licensing schemes can broaden business reach.
One customer's bad behavior can damage the cloud's reputation, such as EC2 IP blacklisting by spam-
prevention services affecting VM installation.
Creating reputation-guarding services, similar to trusted email services, could protect cloud providers and users.
Legal liability transfer between cloud providers and customers is a challenge that must be addressed in SLAs.
4.4 PUBLIC CLOUD PLATFORMS: GAE, AWS, AND AZURE
Cloud services are requested by IT admins, software vendors, and end users.
Five levels of cloud players exist, with individual and organizational users at the top level demanding
different services.
SaaS application providers mainly serve individual users.
IaaS and PaaS providers primarily serve business organizations.
IaaS offers compute, storage, and communication resources to both applications and organizations.
PaaS providers define the cloud environment and support infrastructure services and organizational users.
Cloud services depend on advances in virtualization, SOA, grid management, and power efficiency.
Consumers buy cloud services as IaaS, PaaS, or SaaS.
Many entrepreneurs offer value-added utility services to large user bases.
The cloud industry grows as enterprises outsource computing and storage to professional providers.
Provider service charges are usually much lower than the cost of frequent server replacements.
Table 4.5 summarizes profiles of five major cloud providers as of 2010.
Google's need to index and search the entire Web led to innovations in data-center design and scalable programming models like MapReduce.
Google operates hundreds of data centers worldwide with over 460,000 servers.
Around 200 data centers run cloud applications simultaneously.
Data stored includes text, images, and videos, all replicated for fault tolerance.
Google’s App Engine (GAE) is a PaaS platform for various cloud and web applications.
Google has led cloud development by using many data centers worldwide.
Popular cloud services from Google include Gmail, Google Docs, and Google Earth, all supporting many users
with high availability.
Google’s major technologies include Google File System (GFS), MapReduce, BigTable, and Chubby.
In 2008, Google introduced Google App Engine (GAE), a platform for scalable web applications used by many
smaller cloud providers.
GAE runs applications on Google’s extensive data center network linked to its search infrastructure.
Google’s data centers have thousands of servers organized into clusters running these services.
GAE’s frontend is an application framework like ASP or J2EE, supporting Python and Java.
The Google cloud platform's main components include the Google File System (GFS) for large data storage, MapReduce for application development, Chubby for distributed lock services, and BigTable for structured data storage.
These technologies are used together inside Google data centers, which contain thousands of servers organized into clusters.
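To illustrate the MapReduce programming style named above, a minimal word-count sketch in plain Python that only mimics the model (Google's actual MapReduce is an internal C++ system):

    # Map emits (word, 1) pairs, the framework groups pairs by key,
    # and reduce sums the counts for each word.
    from collections import defaultdict

    def map_phase(document):
        return [(word, 1) for word in document.split()]

    def reduce_phase(word, counts):
        return word, sum(counts)

    documents = ["the cloud serves the user", "the user pays per use"]

    grouped = defaultdict(list)                  # shuffle/group step
    for doc in documents:
        for word, count in map_phase(doc):
            grouped[word].append(count)

    result = dict(reduce_phase(w, c) for w, c in grouped.items())
    print(result)                                # {'the': 3, 'cloud': 1, ...}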
Users access Google applications through web interfaces; third-party developers can build cloud apps using Google App Engine (GAE).
Google's core infrastructure is private and not open for external service building.
GAE runs third-party applications on Google’s infrastructure, removing the need for developers to manage
servers.
GAE combines several software components, with a frontend framework similar to ASP, J2EE, or JSP.
GAE supports Python and Java environments, functioning like web application containers and providing full
web technology support.
Popular Google apps like Search, Docs, Earth, and Gmail run on GAE and support many users at once.
Users access these apps through web browsers.
Third-party developers can use GAE to build their own cloud apps.
These apps run on thousands of servers inside Google’s data centers.
GAE offers storage services for apps to save data securely and perform database-like operations (queries,
sorting, transactions).
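A minimal sketch of those datastore operations using the (legacy) GAE Python ndb API; the entity kind and property names here are made up for illustration:

    from google.appengine.ext import ndb  # available inside the GAE runtime

    class Greeting(ndb.Model):             # hypothetical entity kind
        content = ndb.StringProperty()
        created = ndb.DateTimeProperty(auto_now_add=True)

    # Store an entity; the datastore persists and replicates it.
    Greeting(content="hello from GAE").put()

    # Query with sorting, a database-like operation handled by the service.
    for greeting in Greeting.query().order(-Greeting.created).fetch(10):
        print(greeting.content)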
Amazon AWS offers public cloud services mainly using the Infrastructure-as-a-Service (IaaS) model.
EC2 provides virtual machines (VMs) where cloud apps run.
S3 is Amazon's object storage service for storing data such as files.
EBS offers block storage that works like a hard drive for traditional apps.
SQS (Simple Queue Service) ensures reliable message delivery between processes, even if one is offline (a short sketch of S3 and SQS calls follows).
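A minimal sketch of using S3 and SQS through the boto3 SDK; the bucket and queue names are placeholders:

    import boto3  # AWS SDK for Python

    # Object storage: upload a small object to a (placeholder) S3 bucket.
    s3 = boto3.client("s3")
    s3.put_object(Bucket="my-example-bucket", Key="report.txt",
                  Body=b"quarterly report contents")

    # Messaging: queue a task that another process can pick up later,
    # even if the receiver is offline right now.
    sqs = boto3.client("sqs")
    queue_url = sqs.create_queue(QueueName="work-items")["QueueUrl"]
    sqs.send_message(QueueUrl=queue_url, MessageBody="process report.txt")

    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    print(messages.get("Messages", []))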
Users access AWS services via standard protocols like SOAP through browsers or client programs.
AWS offers many services across different application areas (12 tracks summarized in a table).
AWS also provides SQS and SNS for messaging and notifications.
ELB (Elastic Load Balancer) distributes incoming app traffic across EC2 instances to balance load and
avoid failing servers.
Microsoft launched Windows Azure in 2008 as a cloud platform built on Windows OS and Microsoft virtualization.
Azure runs applications on virtual machines (VMs) hosted in Microsoft data centers.
Azure manages all data center resources: servers, storage, and networks.
The platform has three main components and provides various cloud-level services:
Live Service: lets users access Microsoft Live apps and work on data across multiple machines.
.NET Service: supports app development locally and execution on the cloud.
SQL Azure: cloud-based relational database service using Microsoft SQL Server.
SharePoint Service: platform to build scalable business web applications.
Dynamic CRM Service: platform to build and manage customer relationship management (CRM) apps for
finance, marketing, sales, etc.
Azure services integrate well with other Microsoft apps like Windows Live, Office Live, Exchange Online,
SharePoint Online, and Dynamic CRM Online.
It uses standard web protocols like SOAP and REST for communication.
Users can integrate Azure cloud apps with other platforms and third-party clouds.
The Azure SDK allows developers to build and test Azure apps locally on Windows before deploying to the
cloud.
Layer dependency:
SaaS depends on PaaS, which depends on IaaS, which depends on the lower physical layers. You cannot
run SaaS without having the underlying infrastructure in place.
Software vendors care most about application performance on the cloud platform.
Providers focus on the cloud infrastructure itself.
Cloud systems must be fast, always available, and able to handle failures well.
Cloud platforms run on physical servers or virtual machines (VMs).
VMs make cloud platforms flexible and not tied to specific hardware.
The bottom layer stores huge amounts of data, like a file system.
Above storage are layers for databases, programming, and data queries.
Each layer supports the one above it to help build cloud applications.
Cloud clusters use cluster monitoring to check the status of all nodes.