
CLOUD COMPUTING
20CS215
COURSE OUTCOMES

CO1: Show the progression of cloud computing from virtualization to containerization. (PO6, PO7, PO12)

CO2: Understand cloud computing architecture, virtualization and cloud security. (PO6, PO7, PO12)

CO3: Construct SLA compliance for cloud computing. (PO6, PO7, PO12)

CO4: Compare Docker and Kubernetes for cloud containerization and workload management. (PO6, PO7, PO12)
SYLLABUS
MODULE 1 - OVERVIEW OF COMPUTING PARADIGM
Recent trends in Computing - Grid Computing, Cluster Computing, Distributed Computing, Utility Computing, Cloud Computing, Evolution of cloud computing - Cloud Computing (NIST Model) - Properties and Characteristics of Cloud. Cloud Computing Architecture - Cloud computing stack - Service Models (XaaS): Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS). Deployment Models: Public cloud, Private cloud, Hybrid cloud. Data Center Architecture, SLA Management in Cloud Computing. Case Study - IBM Cloud
SYLLABUS
MODULE 2 - VIRTUAL MACHINES AND VIRTUALIZATION
Implementation Levels of Virtualization - Virtualization Structures/Tools and Mechanisms - Virtualization
of CPU, Memory, and I/O Devices - Virtual Clusters and Resource Management - Virtualization for
Data-Center Automation. Case Study – AWS
MODULE 3 - CLOUD SECURITY
Cloud Security Risks, Trust, Operating System Security, VM Security, Security of Virtualization, Security
Risks Posted by Shared Images, Security Risks Posted by Management OS, Data privacy and security
Issues, Identity & Access Management, Access Control, Authentication in cloud computing. Case Study -
Microsoft Azure, GCP
SYLLABUS
MODULE 4 - CONTAINERIZATION AND ORCHESTRATION
Docker and Container Essentials - Working with Docker data - Understanding Docker Networking -
Deploying Kubernetes using KinD - Kubernetes Bootcamp.
BOOKS – STUDY MATERIALS
TEXT BOOKS
1. Rajkumar Buyya, James Broberg, Andrzej M. Goscinski, "Cloud Computing: Principles and Paradigms", Wiley, 2013.
2. Kai Hwang, Jack Dongarra, Geoffrey C. Fox, "Distributed and Cloud Computing", 1st Edition, Morgan Kaufmann, 2013.
3. Dan C. Marinescu, "Cloud Computing, Theory and Practice", 2nd Edition, MK Elsevier, 2017.
4. Scott Surovich, Marc Boorshtein, "Kubernetes and Docker - An Enterprise Guide", Packt Publishing Ltd., November 2020.
REFERENCES
1. Barrie Sosinsky, "Cloud Computing Bible", Wiley-India, 2010.
2. George Reese, "Cloud Application Architectures: Building Applications and Infrastructure in the Cloud", O'Reilly Publication.
3. John Rhoton, "Cloud Computing Explained: Implementation Handbook for Enterprises", Recursive Press.
4. Toby Velte, Anthony Velte, "Cloud Computing: A Practical Approach", McGraw-Hill Osborne Media, 2010.
20CS281 - CLOUD COMPUTING LABORATORY

CO1: Understand cloud computing architecture, virtualization and cloud security. (PO6, PO7, PO12)

CO2: Construct SLA compliance for cloud computing. (PO6, PO7, PO12)

CO3: Experiment with Docker and Kubernetes for cloud containerization and workload management. (PO6, PO7, PO12)
LABORATORY EXPERIMENTS

1. Develop a Website and Deploy in Cloud.


2. Create a Virtual Machine and check whether it holds the data even after the release of the Virtual Machine.
3. Install a Compiler in the Virtual Machine and execute a sample program.
4. Installation of Docker Engine and Compose
5. Writing Docker file for simple application development.
6. Push and pull from/to Docker Hub.
7. Running multiple Docker containers using docker compose.
8. Experiments on Docker Swarm.
9. Development of simple experiments using minikube.
MODULE 1
OVERVIEW OF COMPUTING PARADIGM

Recent trends in Computing - Grid Computing, Cluster Computing, Distributed Computing, Utility Computing, Cloud Computing, Evolution of cloud computing - Cloud Computing (NIST Model) - Properties and Characteristics of Cloud. Cloud Computing Architecture - Cloud computing stack - Service Models (XaaS): Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS). Deployment Models: Public cloud, Private cloud, Hybrid cloud. Data Center Architecture, SLA Management in Cloud Computing. Case Study - IBM Cloud
EVOLUTION OF CLOUD COMPUTING
https://youtu.be/Bkx8Egjm2mw
Distributed Computing
✔Different parts of a program run simultaneously on two or more computers that are communicating with each other over a network.
✔By contrast, parallel processing refers to processing in which different parts of a program run concurrently on two or more processors that are part of the same computer. Both types of processing require that a program be segmented, that is, divided into parts that can run concurrently.
✔It comprises a set of processes that cooperate to achieve a common, specific goal.
✔Most social network sites are implemented using the concept of distributed computing systems; these run in centrally controlled data centers.
✔One of the major requirements of distributed computing is a set of standards that specify how objects communicate with each other. There are two chief distributed computing standards: CORBA and DCOM (a minimal RPC sketch follows this list).
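A minimal sketch of the idea above, using Python's standard-library XML-RPC modules (for illustration only; CORBA and DCOM are the standards named in the slide). The host, port, and word_count function are invented for the example, and both "nodes" run in a single process here for brevity.

import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def word_count(text):
    """A small piece of work one node performs on behalf of another."""
    return len(text.split())

# "Worker" node: exposes the function over the network.
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False, allow_none=True)
server.register_function(word_count)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Coordinator" node: calls the remote function as if it were local.
proxy = ServerProxy("http://localhost:8000/")
print(proxy.word_count("distributed parts cooperating toward a common goal"))  # -> 7

The same pattern generalizes to many cooperating processes, which is the essence of the distributed systems described above.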
Cluster Computing
❖ a type of parallel or distributed processing system, which consists of a collection of interconnected stand-alone
computers working together as a single integrated computing resource.

❖The components of a cluster are commonly, but not always, linked to each other through fast local area networks .

❖A compute node can be a single-processor or multiprocessor system (PC, workstation, or SMP) with memory, I/O facilities, and an operating system.

❖A cluster generally refers to two or more computers (nodes) connected together.

❖The nodes can exist in a single cabinet or be physically separated and connected via a LAN.

❖An interconnected (LAN-based) cluster of computers appears as a single system to users and applications. Such a system can provide a cost-effective way to gain features and benefits (fast and reliable services) that have traditionally been found only in more expensive proprietary shared-memory systems.
❖Types :

1. High Performance (HP) clusters

2. Load-balancing clusters (a minimal dispatch sketch follows this list)

3. High Availability (HA) clusters
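A minimal, illustrative sketch of what a load-balancing cluster's front end does: spread incoming requests across interchangeable nodes in round-robin order. The node names are invented; production clusters use dedicated load balancers (e.g., LVS or HAProxy) rather than application code like this.

from itertools import cycle

nodes = ["node-01", "node-02", "node-03"]   # hypothetical cluster members
rotation = cycle(nodes)

def dispatch(request_id):
    """Assign a request to the next node in rotation."""
    return f"request {request_id} -> {next(rotation)}"

for i in range(6):
    print(dispatch(i))   # requests alternate across the three nodes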


❖Components of cluster computers:

- Multiple High Performance Computers (PCs, Workstations, or SMPs)

- State-of-the-art Operating Systems (Layered or Micro-kernel based)

- High Performance Networks/Switches (such as Gigabit Ethernet and Myrinet)

- Network Interface Cards (NICs)

- Fast Communication Protocols and Services (such as Active and Fast Messages)

- Cluster Middleware (Single System Image (SSI) and System Availability Infrastructure)
Applications
•Aerodynamics, astrophysics and data mining.
•Weather forecasting.
•Image Rendering.
•Various e-commerce applications.
•Earthquake Simulation.
•Petroleum reservoir simulation.
Grid computing
✔A distributed architecture that combines computer resources from various domains to reach a main objective.

✔Grid computing enables aggregation of distributed resources and transparent access to them.

✔In grid computing, computers that run independent tasks and are loosely linked by the Internet can work on a task together, thus functioning as a supercomputer.

✔A grid works on various tasks within a network, but it is also capable of working on specialized applications. It is designed to resolve issues that are too big for a supercomputer while maintaining the flexibility to process many smaller problems.

✔Computing grids deliver a multiuser infrastructure that accommodates the discontinuous demands of large information processing.

✔A grid is connected by parallel nodes that form a computer cluster, runs on an operating system such as Linux or free software, and can differ in size from a small workstation to numerous networks.
❖Grids have a variety of resources based on diverse software and hardware structures, computer
languages, and frameworks, either in a network or by using open standards with specific
guidelines to achieve a common goal.

❖Applications
- mathematical, scientific or educational tasks through several computing resources.

- used in structural analysis, Web services such as ATM banking, back office infrastructures, and
scientific or marketing research.

- a broad range of scientific applications, such as climate modeling, drug design, and protein
analysis.

- Grid computing is made up of applications used for computational computer problems that are linked in a parallel networking environment. It connects each PC and combines their resources for computation-intensive applications.
Utility Computing
Grid computing, cloud computing and managed IT services are based on the concept of utility computing. It is considered a subset of cloud computing, allowing users to scale up and down based on their needs.
✔The process of providing computing service through an on-demand, pay-per-use billing method.
✔It is a computing business model:
 - the provider owns, operates and manages the computing resources and infrastructure;
 - subscribers access them as and when required on a rental or metered basis.
✔It envisions some form of virtualization: the amount of storage or computing power available is considerably larger than that of a single time-sharing computer; multiple servers are used on the back end, often a dedicated computer cluster built specifically for the purpose of being rented out.
✔This model is based on that used by conventional utilities such as telephone services, electricity and gas.
✔The backend infrastructure and computing resources management and delivery is governed by the provider.
✔Utility computing solutions consist of virtual servers, virtual storage, virtual software, backup and most IT solutions.
✔Users assign a “utility” value to their jobs, where utility is a fixed or time-varying valuation that captures various
QoS constraints (deadline, importance, satisfaction).
✔Providers can choose to prioritize high-yield (i.e., profit per unit of resource) user jobs, leading to a scenario where shared systems are viewed as a marketplace in which users compete for resources based on the perceived utility or value of their jobs (a minimal yield-prioritization sketch follows the steps below).
✔Steps to establish utility computing :
Step 1: Determine the need
Step 2: Evaluate the service provider’s claims
Step 3: Assess the health of a computing resource
Step 4: Identify the resource provisioning requirements
Step 5: Map out a timeframe
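A toy sketch of the "high yield" prioritization described above: yield is treated as utility (what the user will pay) per unit of resource, and the provider admits jobs greedily until capacity runs out. Job names, utility values, and the capacity figure are invented for the example.

jobs = [
    {"name": "render",    "utility": 40.0, "cpu_hours": 10},
    {"name": "backup",    "utility":  5.0, "cpu_hours": 20},
    {"name": "analytics", "utility": 90.0, "cpu_hours": 30},
]
capacity = 35  # CPU-hours the provider can sell this period

# Rank jobs by profit per unit of resource, then admit greedily.
for job in sorted(jobs, key=lambda j: j["utility"] / j["cpu_hours"], reverse=True):
    if job["cpu_hours"] <= capacity:
        capacity -= job["cpu_hours"]
        print(f"admit {job['name']}: yield = {job['utility'] / job['cpu_hours']:.2f}")
    else:
        print(f"defer {job['name']}: insufficient capacity this period")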
Examples:

❖Travel reservation services

❖Online retailers

❖Startups and small businesses

Benefits (mainly in business models):

❖Removes the complexity of IT management

❖Saves valuable time & resources

❖Offers complete flexibility

❖Facilitates minimal financial outlay and maximum savings

❖Allows shorter time to market


Hardware Virtualization
● Hardware virtualization allows running multiple operating systems and
software stacks on a single physical platform.

● A software layer, the virtual machine monitor (VMM), also called a hypervisor, mediates access to the physical hardware, presenting to each guest operating system a virtual machine (VM), which is a set of virtual platform interfaces (a brief libvirt sketch follows the figure caption below).
A hardware-virtualized server hosting three virtual machines, each running a distinct operating system and user-level software stack.
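A brief sketch of the guest/VMM relationship, assuming the libvirt Python bindings are installed and a local QEMU/KVM hypervisor is running; the connection URI is the usual local-system one, and real deployments may differ.

import libvirt

conn = libvirt.open("qemu:///system")        # connect to the hypervisor (VMM)
model, mem_mb, cpus, *_ = conn.getInfo()     # characteristics of the physical host
print(f"host: {cpus} CPUs, {mem_mb} MB RAM ({model})")

# Each guest OS sees only the virtual platform the VMM presents to it.
for dom in conn.listAllDomains():
    state, max_mem_kib, _, vcpus, _ = dom.info()
    print(f"VM {dom.name()}: {vcpus} vCPUs, {max_mem_kib // 1024} MB max RAM")

conn.close()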
Autonomic Computing
● Autonomic, or self-managing, systems rely on monitoring probes and
gauges (sensors), on an adaptation engine (autonomic manager) for
computing optimizations based on monitoring data, and on effectors to carry out changes on the system (a minimal control-loop sketch follows the list below).

● IBM's Autonomic Computing Initiative has contributed to defining the four properties of autonomic systems:
○ Self-configuration
○ Self optimization
○ Self-healing
○ Self-protection.
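A minimal sketch of the autonomic control loop described above: a sensor feeds monitoring data to the autonomic manager, which plans an adaptation and applies it through an effector. The CPU-load metric, thresholds, and scaling actions are invented for illustration (a self-optimization/self-healing flavour), not taken from IBM's architecture.

import random

def sensor():
    """Monitoring probe: report current load (simulated here)."""
    return random.uniform(0.0, 1.0)

def effector(action, replicas):
    """Carry out the planned change on the managed system."""
    return {"scale_out": replicas + 1, "scale_in": max(1, replicas - 1)}.get(action, replicas)

replicas = 2
for step in range(5):
    load = sensor()                          # Monitor
    if load > 0.8:                           # Analyze + Plan
        action = "scale_out"
    elif load < 0.3:
        action = "scale_in"
    else:
        action = "none"
    replicas = effector(action, replicas)    # Execute
    print(f"step {step}: load={load:.2f} action={action} replicas={replicas}")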
Characteristics

Self-service, on-demand: A consumer can unilaterally provision computing capabilities as needed, automatically, without requiring human interaction with each service provider.

Network-based access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms.

Resource pooling: The customer has no control or knowledge over the details of the provided resources, which are managed by the Cloud provider.

Elasticity: Capabilities can be elastically provisioned and released to scale rapidly commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited.

Pay-per-use: The customer pays only for what he/she used.
History

Cloud Computing
INTRODUCTION

Cloud Computing can be defined as delivering computing power (CPU, RAM, network speed, storage, OS, software, analytics, and intelligence) as a service over a network (usually the internet), rather than physically having the computing resources at the customer's location.
● Delivery via the internet ("the cloud") provides fast innovation, flexible resources, and economies of scale.
● Users pay as they go, which helps cut operating expenses, run infrastructure more efficiently, and scale as business needs change.

Example: AWS, Azure, Google Cloud

National Institute of Standards and
Technology (NIST)

"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
Roots of Cloud Computing

SOA, Web Services, Web 2.0, and Mashups

● Web services can glue together applications running on different messaging product platforms, enabling information from one application to be made available to others.
● Enabling internal applications to be made available over the Internet.
● Over the years a rich WS software stack has been specified and standardized.
● Resulting in a multitude of technologies to describe, compose, and orchestrate services, package and transport messages between services, publish and discover services, represent Quality of Service (QoS) parameters, and ensure security in service access.
Contd..

● SOA addresses the requirements of loosely coupled, standards-based, and protocol-independent distributed computing.

● The concept of gluing services initially focused on the enterprise Web, but
gained space in the consumer realm as well, especially with the advent of Web
2.0.

Virtual Appliances and the Open Virtualization
Format
An application combined with the environment needed to run it
(operating system, libraries, compilers, databases, application containers,
and so forth) is referred to as a “virtual appliance.”

LAYERS AND TYPES OF CLOUDS

THE CLOUD COMPUTING STACK

SaaS

● Software as a Service (SaaS) is a cloud computing offering that provides users with access to a vendor's cloud-based software.

● SaaS services usually follow a subscription model. The hardware and software are provided by the vendor; all you need to do is log in and get started.

● SaaS alleviates the burden of software maintenance/support, but users relinquish control over software versions and requirements.
PaaS
● Platform as a Service (PaaS) is the cloud computing model that provides platforms for testing, deploying, and managing applications.

● Built on virtualization technology

● Scaling up or down is easy to adjust to business changes

● Providers manage security, operating systems, server software and backups.

● Facilitates collaborative work even if teams work remotely


IaaS
● Infrastructure as a Service (IaaS) is the cloud service that provides basic
computing infrastructure like storage, servers, networking resources

● IaaS uses Virtual Machines (VMs) to house data instead of physical servers.

● Three components (a minimal provisioning sketch follows this list):
● Compute
● Storage
● Network
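A hedged sketch of IaaS self-service provisioning using boto3, the AWS SDK for Python. The region, AMI ID, and instance type are placeholders, and valid AWS credentials are assumed; the compute/storage/network split above shows up in AWS as EC2 instances, EBS volumes, and VPC networking.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",           # compute shape: vCPUs + RAM
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("provisioned:", instance_id)

# Pay-per-use: the instance is billed while it runs, so release it when done.
ec2.terminate_instances(InstanceIds=[instance_id])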

IaaS
- Network
- Compute: GPU, HPC
- Storage: Object, Block, File
Shared, billed by hours/month, no contracts, self-service.
Service Model

IaaS/PaaS/SaaS

TYPES OF CLOUD
Types of clouds based on deployment models
Types of Cloud
Private Cloud:
The infrastructure is procured for exclusive use by a single organization.
Management, operation, ownership, location of the private cloud, however, can be
independent by the organization using it.

Community Cloud:
The infrastructure is available to a community of organizations sharing a common
goal (for instance: mission, security requirements, adherence to common regulatory rules,
etc.)

Public Cloud:
The infrastructure is available to the public at large. Management can be either public or private. The location is on the service provider's premises.

Hybrid Cloud:
The infrastructure is a combination of two or more Cloud infrastructures (private, public,
community Cloud), connected so that there is some form of portability of e.g. data or
applications

Private vs Public
(Comparison chart: private and public clouds compared on scalability and control)
Applications

Communication
➔ Email
➔ Skype and WhatsApp

Entertainment
➔ Netflix

Productivity
➔ Microsoft Office 365
➔ Google Docs

Business Process
➔ Salesforce
➔ Hubspot
➔ Marketo

Backup and recovery
➔ Dropbox
➔ Google Drive
➔ Amazon S3

Chatbots - Cloud-based AI solutions
➔ Siri
➔ Alexa
➔ Google Assistant

Application development
➔ Amazon Lumberyard

Test and development
➔ LoadStorm
➔ BlazeMeter

Big data analytics
➔ Hadoop
➔ Cassandra

Social Networking
➔ Facebook
➔ LinkedIn
➔ MySpace
➔ Twitter
CLOUD INFRASTRUCTURE MANAGEMENT
● The software toolkit responsible for this orchestration is called a
virtual infrastructure manager (VIM)

● This type of software resembles a traditional operating system - but instead of dealing with a single computer, it aggregates resources from multiple computers, presenting a uniform view to users and applications (a toy aggregation sketch follows).
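A toy sketch of the aggregation idea: pool the capacity of several physical hosts and hand out VMs from the pool, so callers see one uniform view rather than individual machines. Host names and sizes are invented; real VIMs (e.g., OpenNebula, OpenStack, vSphere) do this against hypervisors rather than a dictionary.

class VirtualInfrastructureManager:
    def __init__(self, hosts):
        self.hosts = dict(hosts)          # host name -> free vCPUs

    def total_free_vcpus(self):
        """Uniform, aggregated view of the whole pool."""
        return sum(self.hosts.values())

    def create_vm(self, vcpus):
        """Place a VM on any host with enough room (first fit)."""
        for name, free in self.hosts.items():
            if free >= vcpus:
                self.hosts[name] = free - vcpus
                return f"VM({vcpus} vCPU) placed on {name}"
        raise RuntimeError("no host has enough free capacity")

vim = VirtualInfrastructureManager({"host-a": 8, "host-b": 4})
print(vim.total_free_vcpus())   # 12, as if it were one large machine
print(vim.create_vm(6))         # lands on host-a
print(vim.create_vm(4))         # lands on host-b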

Features
● Virtualization Support
● Self-Service, On-Demand Resource Provisioning
● Multiple Backend Hypervisors.
● Storage Virtualization.
● Interface to Public Clouds.
● Virtual Networking.
● Dynamic Resource Allocation.
● Virtual Clusters.
● Reservation and Negotiation Mechanism.
● High Availability and Data Recovery.

Opportunities and Challenges
–It enables services to be used without any understanding of their
infrastructure.
–Cloud computing works using economies of scale:
❖It potentially lowers the outlay expense for start up companies, as they would no longer
need to buy their own software or servers.
❖Costs would be based on on-demand pricing.

❖Vendors and service providers offset costs by establishing an ongoing revenue stream.

–Data and services are stored remotely but accessible from “anywhere”.

–Use of cloud computing means dependence on others and that could possibly
limit flexibility and innovation:

–Security could prove to be a big issue:


•It is still unclear how safe out-sourced data is, and when using these services, ownership of data is not always clear.
–There are also issues relating to policy and access:
•If your data is stored abroad whose policy do you adhere to?
•What happens if the remote server goes down?
•How will you then access files?
•There have been cases of users being locked out of accounts and losing
access to data.

Advantages of Cloud Computing
Lower computer costs:
–No need of a high-powered and high-priced computer to run cloud computing's
web-based applications.

– Applications run in the cloud, not on the desktop PC, your desktop PC does not
need the processing power or hard disk space demanded by traditional desktop
software.

–When using web-based applications, PC can be less expensive, with a smaller hard
disk, less memory, more efficient processor.

Improved performance:
–With fewer large programs hogging the computer's memory, you get better performance from the PC.
–Computers in a cloud computing system boot and run faster because they have fewer
programs and processes loaded into memory.

Reduced software costs:


–Instead of purchasing expensive software applications, you can get most of what you need
for free.
–better than paying for similar commercial software.

Instant software updates:
–Another advantage to cloud computing is that you are no longer faced with
choosing between obsolete software and high upgrade costs.

Improved document format compatibility:


–No worry about the documents created on machine being compatible with
other users' applications or OS
–There are potentially no format incompatibilities when everyone is sharing
documents and applications in the cloud.

Unlimited storage capacity:
–Cloud computing offers virtually limitless storage.
–Your computer's current 1 Tbyte hard drive is small compared to the
hundreds of Pbytes available in the cloud.

Increased data reliability:


–Unlike desktop computing, where a hard disk crash can destroy all your valuable data, a computer crashing in the cloud should not affect the storage of your data.

Easier group collaboration:
–Sharing documents leads directly to better collaboration.
–For many users, this is one of the important advantages of cloud computing.

Device independence:
–You are no longer tethered to a single computer or network.
–Changes to computers, applications and documents follow you through the
cloud.
–Move to a portable device, and your applications and documents are still
available.
Disadvantages of Cloud Computing
Requires a constant Internet connection:
–Cloud computing is impossible if you cannot connect to the Internet.

–You use the Internet to connect to both applications and documents; if you do not have an Internet connection, you cannot access anything, even your own documents.

–A dead Internet connection means no work, and in areas where Internet connections are few or inherently unreliable, this could be a deal-breaker.

Stored data might not be secure:
–With cloud computing, all your data is stored on the cloud.
–Can unauthorised users gain access to your confidential data?

Stored data can be lost:


–Theoretically, data stored in the cloud is safe, replicated across multiple machines.
–But on the off chance that data goes missing, you have no physical or local backup.

Data Centers
Data center (DC) is a physical facility that enterprises use to house computing and storage
infrastructure in a variety of networked formats.
Main components :

❖ Compute
❖ Storage
❖ Network
Main function is to deliver utilities needed
by the equipment and personnel:
◦ Power
◦ Cooling
◦ Shelter
◦ Security
Size of typical data centers:
◦ 500 – 5000 sqm buildings
◦ 1 MW to 10-20 MW power (avg 5 MW)
Example data centers

Datacenters around the globe

https://docs.microsoft.com/en-us/learn/modules/explore-azure-infrastructure/2-azure-datacenter-locations
Modern DC for the Cloud architecture

▪ Geography:
− Two or more regions
− Meets data residency requirements
− Fault-tolerant from complete region failures
▪ Region:
− Set of datacenters within a metropolitan area
− Network latency perimeter < 2ms
▪ Availability Zones:
− Unique physical locations within a region
− Each zone made up of one or more DCs
− Independent power, cooling, networking
− Inter-AZ network latency < 2ms
− Fault tolerance from DC failures

Src: Inside Azure Datacenter Architecture with Mark Russinovich
Data Centers

▪ Traditional data centers


◦ Host a large number of relatively small- or medium-sized applications, each running on a dedicated
hardware infrastructure that is decoupled and protected from other systems in the same facility
◦ Usually for multiple organizational units or companies

▪ Modern data centers (a.k.a., Warehouse-scale computers)


◦ Usually belong to a single company to run a small number of large-scale applications
◦ Google, Facebook, Microsoft, Amazon, Alibaba, etc.
◦ Use a relatively homogeneous hardware and system software
◦ Share a common systems management layer
◦ Sizes can vary depending on needs

Material

https://community.fs.com/article/what-is-data-center-architecture.html

https://www.ibm.com/topics/data-centers#:~:text=businesses%20(SMBs).-,Data%20center%20architecture,storage%2C%20networking%E2%80%94are%20virtualized.
Scale-up vs. scale-out

▪ Scale-up: high-cost, powerful CPUs; more cores; more memory

▪ Scale-out: adding more low-cost, commodity servers
Supercomputer vs. data center

▪ Scale
◦ Blue waters = 40K 8-core “servers”
◦ Microsoft Chicago Data centers = 50 containers = 100K 8-core servers

▪ Network architecture
◦ Supercomputers: InfiniBand, low-latency, high bandwidth protocols
◦ Data Centers: (mostly) Ethernet based networks

▪ Storage
◦ Supercomputers: separate data farm
◦ Data Centers: use disk on node + memory cache
Main components of a datacenter

src: The Datacenter as a Computer – Barroso, Clidaras, and Hölzle


Traditional Data Center Architecture
Servers are mounted on 19'' rack cabinets.

Racks are placed in single rows forming corridors between them.

Src: the datacenter as a computer – an introduction to the design of warehouse-scale machines


A Row of Servers in a Google Data Center

Src: the datacenter as a computer – an introduction to the design of warehouse-scale machines
Inside a modern data center

▪ Today's DCs use shipping containers packed with 1000s of servers each.

▪ For repairs, whole containers are replaced.
Costs for operating a data center

Monthly cost = $3’530’920


▪ DCs consume 3% of global electricity supply (416.2 TWh > UK's 300 TWh)

▪ DCs produce 2% of total greenhouse gas emissions

▪ DCs produce as much CO2 as The Netherlands or Argentina

(Cost breakdown chart: power accounts for roughly 31% of the monthly cost)
Power Usage Effectiveness (PUE)

▪ PUE is the ratio of


◦ The total amount of energy used by a DC facility
◦ To the energy delivered to the computing equipment

▪ PUE is the inverse of data center infrastructure efficiency

▪ Total facility power = IT systems (servers, network, storage) + other equipment (cooling, UPS, switch gear, generators, lights, fans, etc.); a worked example follows.
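A worked example of the definition above: PUE is total facility energy divided by the energy delivered to IT equipment. The kWh figures are invented for illustration.

it_energy_kwh = 1_000_000            # servers, network, storage
overhead_kwh = 400_000               # cooling, UPS losses, lighting, etc.
total_facility_kwh = it_energy_kwh + overhead_kwh

pue = total_facility_kwh / it_energy_kwh
dcie = 1 / pue                       # data center infrastructure efficiency

print(f"PUE  = {pue:.2f}")           # 1.40
print(f"DCiE = {dcie:.0%}")          # ~71% of the energy reaches IT equipment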

Achieving PUE
▪ Location of the DC – cooling and power load factor

▪ Raise temperature of aisles


◦ Usually 18-20 C; Google at 27 C
◦ Possibly up to 35 C (trade off failures vs. cooling costs)

▪ Reduce conversion of energy


◦ E.g., Google motherboards work at 12V rather than 3.3/5V

▪ Go to extreme environments
◦ Arctic circle (Facebook)
◦ Floating boats (Google)
◦ Underwater DC (Microsoft)

▪ Reuse dissipated heat


Evolution of data center design
▪ Case study: Microsoft

https://www.nextplatform.com/2016/09/26/rare-tour-microsofts-hyperscale-datacenters/
Evolution of datacenter design
▪ Gen 6: scalable form factor (2017)
− Reduced infrastructure, scale to demand
− 1.17-1.19 PUE

▪ Gen 7: Ballard (2018)


− Design execution efficiency
− Flex capacity enabled
− 1.15-1.18 PUE

▪ Gen 8: Rapid deploy datacenter (2020)


− Modular construction and delivery
− Equipment skidding and preassembly
− Faster speed to market

Src: Inside Azure Datacenter Architecture with Mark Russinovich


Data center architecture

Most modern data centers—even in-house on-premises data centers—have evolved from traditional IT architecture, where every application or workload runs on its own dedicated hardware, to cloud
architecture, in which physical hardware resources—CPUs, storage, networking—are virtualized. Virtualization enables these resources to be abstracted from their physical limits, and pooled into
capacity that can be allocated across multiple applications and workloads in whatever quantities they require.

Virtualization also enables software-defined infrastructure (SDI)—infrastructure that can be provisioned, configured, run, maintained and ‘spun down’ programmatically, without human intervention.

The combination of cloud architecture and SDI offers many advantages to data centers and their users, including the following:

• Optimal utilization of compute, storage, and networking resources. Virtualization enables companies or clouds to serve the most users using the least hardware, and with the least unused or idle
capacity.

• Rapid deployment of applications and services. SDI automation makes provisioning new infrastructure as easy as making a request via a self-service portal.

• Scalability. Virtualized IT infrastructure is far easier to scale than traditional IT infrastructure. Even companies using on-premises data centers can add capacity on demand by bursting workloads to the
cloud when necessary.

• Variety of services and data center solutions. Companies and clouds can offer users a range of ways to consume and deliver IT, all from the same infrastructure. Choices are made based on workload
demands, and include infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). These services can be offered in a private data center, or as cloud solutions in
either a private cloud, public cloud, hybrid cloud, or multicloud environment.

• Cloud-native development. Containerization and serverless computing, along with a robust open-source ecosystem, enable and accelerate DevOps cycles and application modernization as well as
enable develop-once-deploy-anywhere apps.
Data center case study – Video Material References

IBM Cloud
https://youtu.be/HzugDzl2cfg

AWS Data Center


https://www.youtube.com/watch?v=q6WlzHLxNKI
Google Data Center
https://www.youtube.com/watch?v=zDAYZU4A3w0
Service Level Agreement (SLA)

•A commitment between a service provider and a client. Particular aspects of the service,
such as quality, availability, responsibilities are agreed upon between the service provider
and the service user.
•A bond for performance negotiated between the cloud services provider and the client.
It defines

•The metrics used to measure the level of service provided.


•Remedies or penalties resulting from failure to meet the promised service level
expectations.
SLA
CUSTOMER - BASED SLA

Agreement is used for individual customers and comprises all relevant services that a client may need while leveraging only
one contract

SERVICE – BASED SLA

A contract that includes one identical type of service for all of its customers

MULTI – LEVEL SLA

Agreement is customized according to the needs of the end-user company. It allows the user to integrate several conditions into
the same system to create a more convenient service.

•Corporate level: This SLA does not require frequent updates since its issues are typically unchanging. It
includes a comprehensive discussion of all the relevant aspects of the agreement and applies to all
customers in the end-user organization.

•Customer level: This contract discusses all service issues that are associated with a specific group of
customers. However, it does not take into consideration the type of user services.

•Service level: In this agreement, all aspects attributed to a particular service regarding a customer group are
included.
METRICS IN SLA
•Abandonment Rate: Percentage of calls abandoned while waiting to be answered.

•ASA (Average Speed to Answer): Average time (usually in seconds) it takes for a call to be answered by the service desk.

•Resolution time: The time it takes for an issue to be resolved once logged by the service provider.

•Error rate: The percentage of errors in a service, such as coding errors and missed deadlines.

•TSF (Time Service Factor): Percentage of calls answered within a definite timeframe, e.g., 80% in 20 seconds.

•FCR (First-Call Resolution): A metric that measures a contact center's ability to resolve a customer's inquiry or problem on the first call or contact.

•TAT (Turn-Around Time): Time taken to complete a particular task.

•TRT (Total Resolution Time): Total time taken to complete a particular task.

•MTTR (Mean Time To Recover): Time taken to recover after an outage of service.

•Security: The number of undisclosed vulnerabilities, for example. If an incident occurs, service providers should demonstrate that they've taken preventive measures.

•Uptime: A common metric used for data services such as shared hosting, virtual private servers, and dedicated servers. Standard agreements include the percentage of network uptime, power uptime, number of scheduled maintenance windows, etc.
The contract should have a detailed plan for its modification, including change frequency, change procedures, and changelog.

1. SLA Calculation

SLA assessment and calculation determine a level of compliance with the agreement. There are many tools for SLA calculation available
on the internet.

2. SLA uptime

Uptime is the amount of time the service is available. Depending on the type of service, a vendor should provide minimum uptime relevant
to the average customer's demand. Usually, a high uptime is critical for websites, online services, or web-based providers as their business
relies on its accessibility.

3. Incident and SLA violations

This calculation helps determine the extent of an SLA breach and the penalty level foreseen by the contract. The tools usually calculate a
downtime period during which service wasn't available, compare it to SLA terms and identify the extent of the violation.

4. SLA credit

If a service provider fails to meet the customer's expectations outlined in the SLA, a service credit or other type of penalty must be given as a form of compensation. The percentage of credit depends directly on how much the downtime period exceeded the norm indicated in the contract (a worked uptime and credit sketch follows).
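A sketch of the uptime, violation, and credit arithmetic described above. The 99.9% target and the credit tiers are invented for illustration; real contracts define their own thresholds.

def uptime_percent(total_minutes, downtime_minutes):
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def service_credit(achieved, target=99.9):
    """Credit (as % of the monthly bill) owed for an SLA breach (assumed tiers)."""
    if achieved >= target:
        return 0       # SLA met, no credit
    if achieved >= 99.0:
        return 10      # minor breach tier (assumed)
    return 25          # major breach tier (assumed)

minutes_in_month = 30 * 24 * 60          # 43,200
downtime = 130                           # minutes the service was unavailable
achieved = uptime_percent(minutes_in_month, downtime)

print(f"achieved uptime: {achieved:.3f}%")                        # ~99.699%
print(f"credit owed: {service_credit(achieved)}% of the monthly charge")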
