Course3 - Cloud Digital Leader
https://cloud.google.com/training/business
Many traditional enterprises use legacy systems and applications that often struggle to
achieve the scale and speed needed to meet modern customer expectations. Business
leaders and IT decision makers constantly have to choose between maintenance of legacy
systems and investing in innovative new products and services. This course explores the
challenges of an outdated IT infrastructure and how businesses can modernize it using cloud
technology. It begins by exploring the different compute options available in the cloud and
the benefits of each, before turning to application modernization and Application
Programming Interfaces (APIs). The course also considers a range of Google Cloud
solutions that can help businesses to better develop and manage their systems, such as
Compute Engine, App Engine, and Apigee.
This is the third course in the Cloud Digital Leader series. At the end of this course, enroll in
the Understanding Google Cloud Security and Operations course.
When you complete this course, you can earn the badge displayed above! View all the
badges you have earned by visiting your profile page. Boost your cloud career by showing
the world the skills you have developed!
Introduction
Welcome to Infrastructure and Application Modernization with Google Cloud! In this module,
you'll meet the instructor, learn about the course content, and how to get started.
Saman: Hello, and welcome to the Value of Infrastructure and Application Modernization
with Google Cloud.
I'm Saman Javan, lead course developer and certified facilitator here at Google Cloud.
Consumer expectations over the last 20 years have radically changed.
Customers now expect connected digital experiences in real time.
Many businesses, especially large traditional enterprises, built their IT infrastructure on
premises.
Legacy systems and applications make up the organization's IT backbone.
At the same time, these legacy systems and applications struggle to achieve the scale and
speed needed to meet modern customer expectations.
Business leaders and IT decision makers constantly have to choose between
maintenance of legacy systems, and investing in innovative new products and services.
In this course, I'll explore the challenges of an outdated IT infrastructure, and then
describe how businesses can modernize that infrastructure using cloud technology.
In Module One, I'll introduce infrastructure modernization as the core topic. In particular, I'll
examine compute options available in the cloud and the benefits of each.
I'll also present a few Google Cloud solutions and highlight customers who have
successfully used them.
In Module Two, I'll focus on application modernization.
Applications are not new in the cloud.
But cloud technology enables businesses to develop, deploy, and update applications with
speed, security, and agility built in.
I'll also cover App Engine, a Google Cloud solution that lets application developers build
scalable web and mobile applications on a fully managed serverless platform.
In the third module, I'll present application programming interfaces, or APIs, and explain
how they unlock value from legacy systems, enable businesses to create new value, and
monetize new services.
I'll also cover Apigee, a Google Cloud platform for developing and managing APIs.
I'll then close the course by summarizing the key points and offering additional resources for
you to continue your learning.
And remember, you don't need to be an IT specialist to create new business value or to
develop innovative services.
By understanding how infrastructure, applications, and APIs work together, you can initiate
conversations about new projects and be more knowledgeable about strategic planning for
digital transformation.
We'll test your knowledge throughout the course with graded assessments, which you'll
need to pass to receive credit.
Let's jump in.
This module covers what it means to modernize an IT infrastructure and why it matters. It
then turns to the different compute options available, before moving on to private, hybrid and
multi-cloud architectures. It ends with an overview of the Google Cloud global infrastructure.
Introduction:
Saman: Hello, and welcome to the first module: Modernizing IT Infrastructure with Google
Cloud.
New businesses born in the cloud are challenging old business models.
Scale is no longer a competitive advantage; it's the norm.
Many organizations are very aware of this threat coming from digital disruption.
What organizations want to know is how to best respond to this threat.
How can they survive and thrive in this new cloud era?
Central to an organization's ability to thrive in the new era is the way in which they
structure and use their IT resources.
This could mean moving away from investing resources in running and maintaining existing IT
infrastructure, and focusing more on creating new, higher-value products and services.
With Cloud, organizations can develop and build new applications to drive better
engagement with customers and employees faster, securely, and at scale.
And ultimately, leveraging cloud technology to truly transform a business requires new
collaborative models, changing culture and processes, and enabling team productivity and
innovation.
Enterprises are also seeing significant financial benefits from adopting Cloud as their
approach to IT moves from buying fixed capacity to paying only for what they use,
changing the economics of technology investment.
For many businesses, infrastructure modernization is the foundation for digital
transformation.
And with that, here's what I'll cover in this module.
I'll begin by explaining what it means to modernize an IT infrastructure and why it matters.
Then the different compute options available.
Next, I'll cover private, hybrid, and multi-cloud architectures and what we mean by each of
them.
I'll briefly go over Google Cloud's global infrastructure, and close with Google Cloud
compute solutions for setting up or modernizing the IT infrastructure.
Remember, even if you're not in an IT or technical role, understanding this foundation will
help you identify how you can support or drive your organization's cloud adoption goals.
So let's get started.
Infrastructure Modernization
person: For most organizations, owning and operating infrastructure does not differentiate
their business.
In fact, it's often a burden.
It limits an organization's staff in several ways.
For example, they have to undertake laborious tasks related to infrastructure procurement,
provisioning and maintenance.
They are using legacy systems that are old, don't add value to the business other than
keeping the lights on, and don't support business change.
They cannot scale with any ease because they're locked into what they have on premises
and forced to pay to over-provision for peak usage.
One option for reducing this burden is to outsource the company's IT infrastructure as
much as possible, and migrate to the cloud.
But before we talk about migrating to the cloud, let's go back to a time before the cloud.
I'll demonstrate how technology has impacted company business models over the years
and use a simplified IT backbone to talk through the various changes.
First, let's look at employees, the technology users.
These people use or create applications on laptops or computers.
And as part of their day-to-day work, they're storing data or files and connecting to each
other over the internet.
As a company grows, and there's a need for more computers with more processing power,
a company might have a data center with servers.
Organizations might own their servers, data centers, cooling systems, the physical security
features in place and the real estate to house all of that infrastructure.
On top of this, they have to pay for maintenance and ongoing security costs.
Think of this as similar to owning a house.
You're responsible for all of the infrastructure, the bricks and mortar, the fence around your
garden, the locks on your door and all of the ongoing costs such as your utilities as well.
The first step in moving away from what we call an on premises infrastructure is
colocation.
Here, a business sets up a large data center, then other organizations rent part of that
data center.
This means organizations no longer have to pay for the cost associated with hosting the
infrastructure, but they still need to pay to maintain it.
It's like owning an apartment in a serviced apartment complex or a house in a gated
community.
You've paid for some infrastructure, the apartment or the house, and you're still
responsible for maintenance-- for example, if your heater breaks down-- but some things
like the perimeter security are outsourced.
With both on premises and colocation, value creation only starts well after a substantial
amount of capital expenditure or capex is committed.
Given that hardware is often heavily underutilized even in the colocation model, engineers
found a way to package applications and their operating systems into what we call a virtual
machine.
Virtual machines share the same pool of computer processing, storage and networking
resources.
Virtual machines optimize the use of available resources and enable businesses to have
multiple applications running at the same time on a server in a way that is efficient and
manageable.
Most companies use virtual machines to optimize their use of data centers, whether on
premises or colocated.
The problem though, is that there's still a cap to the physical capacity of existing servers,
and companies still have to commit to a substantial amount of capital expenditure upfront.
Many companies are now outsourcing their infrastructure entirely.
They are growing to deliver their products and services to customers regionally and
globally, and need to scale quickly and securely.
Setting up and maintaining data centers and network connections that are optimal for their
needs is expensive.
They don't see the benefit of owning their own data centers if they can outsource to a
public cloud that offers Infrastructure as a service.
In our analogy, this is like renting an apartment in a serviced building.
Now if your heater breaks, it's your landlord who's responsible for getting it fixed.
This means IT costs shift from being capital expenditure heavy to being more operational
expenditure heavy.
Outsourcing your IT needs at the infrastructure level is called infrastructure as a service.
And public cloud providers such as Google Cloud offer several services to help you
modernize your infrastructure.
If your organization chooses to, it can move some or all of its infrastructure away from
physical data centers to virtualized data centers in the cloud.
Google Cloud provides you with compute, storage, and network resources, organized in
ways familiar to you from your experience with physical and virtualized data centers.
The maintenance work is outsourced to the public cloud providers, so it's easier to shift
larger portions of company expertise to building processes and applications that move the
business forward.
Outsourcing IT resources gives the company flexibility, but requires its teams to continue
managing things like web application security.
That is the information security that specifically deals with websites, web applications, and
web services.
In this scenario, you would pay for resources you allocate-- for example, a set number of
virtual machines.
If you want a more managed service, cloud service providers offer something called a
platform as a service.
In this case, you don't have to manage the infrastructure and, for some services, you only
pay for what you use.
As cloud computing has evolved, the momentum has shifted even further towards
managed automated infrastructure and services.
Google Cloud, for instance, is known for its global access to a pool of configurable
resources for every layer of the IT infrastructure in the form of paid services.
All right, now that you understand infrastructure as a service, and platform as a service,
let's look more closely at compute options.
I already mentioned virtual machines as one method for optimizing the use of IT
resources.
In the next video, I'll examine VMs further and explore alternatives.
person: In the last video, I covered some of the key advantages of using public cloud
services to modernize or even set up your IT infrastructure.
First, cloud reduces the need for IT teams to act as a gateway to technical resources such
as network security, storage, compute power, and data.
Think of the cloud as an on demand self-service for anyone in the business.
Next, there is broad network access.
This means that access to data and compute resources is no longer tied to a particular
geography or location.
Now teams can access compute resources and data with little to no latency.
Third, resources are distributed across a global network of data centers.
If one is down due to a natural disaster, for instance, another data center is available to
prevent service disruption.
This is referred to as resource pooling.
Next, companies can scale up or down instantly due to the availability of on demand cloud
resources.
This rapid elasticity means businesses can serve their customers without interruption in a
cost effective way.
And finally, cloud is a measured service, which means companies have a lower upfront or
capital expenditure because they don't need to purchase their own data center
equipment or maintain their IT infrastructure.
If you've decided to modernize your business IT infrastructure, you might be wondering
what options are available to you.
In this video I'll explore the three main options that you can use to modernize your
infrastructure: virtual machines, containerization, and serverless computing.
I'll also touch on Kubernetes, a solution for managing your services and machines.
First, let's make sure we have a shared understanding of key terms.
In the context of the cloud, compute or computing refers to a machine's ability to process
information (to store, retrieve, compare, and analyze it) and to automate tasks often done by
computer programs, otherwise known as software or applications.
Traditionally, the hardware available for computing could only run a limited amount of
software and applications.
As you learned in the last video, virtualization changed this.
Virtualization is a form of resource optimization that allows multiple systems to run on the
same hardware.
These systems are called virtual machines, or VMs.
This means they share the same pool of computer processing, storage and networking
resources.
VMs enable businesses to have multiple applications running at the same time on a server
in a way that is efficient and manageable.
The software layer that enables this is called a hypervisor.
A hypervisor sits on top of physical hardware and multiple VMs are built on top of it.
It's like having multiple computers that only use one piece of hardware.
Virtual machines are the first compute option for infrastructure modernization.
The second is containers.
Containers follow the same principle as virtual machines.
They provide isolated environments to run your software services and optimize resources
from one piece of hardware.
However, they're even more efficient.
Virtual machines recreate a full representation of the hardware.
By contrast, containers only recreate or virtualize the operating systems.
This means that they only contain exactly what's needed for the particular application that
they support.
Containers offer a far more lightweight unit for developers and IT operations teams to work
with and provide a range of benefits.
They start faster, and use a fraction of the memory compared to booting an entire
operating system.
Containers give developers the ability to create predictable environments that are isolated
from other applications.
Let me use an analogy to explain the advantage of containers.
Suppose you want to build an apartment block.
One way to do this is to start with the steel beams, then build the outside walls, then run
the electricity and plumbing, then build the interior walls.
However, if you discover a fault somewhere in the building, it can be very difficult to isolate
the problem because everything is connected.
Adjusting features of each apartment or fixing problems can be challenging and
expensive.
Another way of building an apartment block is to use prefabricated units.
In other words, you build the units off site, and then essentially lay them on top of each
other.
This means that any problem that arises is easier to isolate and fix.
It also means that individual apartments can have unique designs with different features
because they're all compartmentalized, rather than one giant unit.
This is what containers do for your applications.
So if a customer asked for a new feature, or a change in the application, your developers
can easily make an update to that particular part of the application without affecting the
rest.
Containers are able to run virtually anywhere, which makes development and deployment
easy.
They can run on Linux, Windows, and Mac operating systems, on virtual machines, on
bare metal (which means directly on the hardware), on a developer's machine, or in data
centers on premises, and of course, in the public cloud.
Containers improve agility, strengthen security, optimize resources, and simplify
managing applications in the cloud.
Many businesses have a mix of VMs and containers.
However, as their IT infrastructure setup becomes more complex, businesses need a way
to manage their services and machines.
For example, businesses can have millions and millions of containers.
This means that keeping them secure and making sure that they operate efficiently can
require significant oversight and management.
Kubernetes is an open source cluster management system that provides automated
container orchestration.
In other words, Kubernetes simplifies the management of your machines and services for
you.
This improves application reliability, and reduces the time and resources you need to
spend on development and operations, not to mention the relief from the stress attached
to these tasks.
Kubernetes makes everything associated with deploying and managing your application
easier.
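To make automated orchestration a little more concrete, here is a minimal sketch, not from the course itself, that uses the official Kubernetes Python client to scale a hypothetical Deployment named "web" to a desired number of replicas; the cluster credentials, namespace, and Deployment name are all assumptions for illustration.

```python
# Minimal sketch: ask Kubernetes to keep five replicas of a Deployment
# running. The kubeconfig credentials, the "default" namespace, and the
# Deployment named "web" are hypothetical.
from kubernetes import client, config

def scale_web_deployment(replicas: int = 5) -> None:
    config.load_kube_config()  # load credentials for an existing cluster
    apps = client.AppsV1Api()

    # Declare the desired state; the Kubernetes control plane then starts,
    # restarts, and replaces containers automatically to match it.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    scale_web_deployment(5)
```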
We'll explore Kubernetes and Google Kubernetes Engine more in Module Two when we
examine application development.
Finally, the third compute option is serverless computing.
Serverless computing doesn't mean there's no server though.
Serverless computing means that resources such as compute power are automatically
provisioned behind the scenes as needed.
This means that businesses do not pay for compute power unless they're actually running
a query or application.
At its simplest, serverless means that businesses provide the code for whatever function
they want, and the public cloud provider does everything else.
Let me give you an example.
Imagine you're a healthcare technology company.
You help general practice doctors to seamlessly connect with their patients.
One tool you provide is an application for patients to book appointments with their doctor.
You want to add a feature that enables patients to upload an image with their appointment
booking.
In this case, the ability to upload an image is called a function.
You, as the healthcare technology company, write the code for that function directly into
your public cloud platform.
The public cloud provider manages everything else.
For this reason, serverless computing solutions are often called function as a service.
Some functions are a response to specific events like file uploads to your cloud storage or
changes to your database records.
You write the code that defines the response to those events, and your cloud provider
does everything else.
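As a rough illustration of this event-driven model, here is a minimal sketch of a Python Cloud Function that runs when a file is uploaded to a Cloud Storage bucket, along the lines of the image-upload feature described above; the function name and the processing step are hypothetical.

```python
# Minimal sketch of an event-driven serverless function (Cloud Functions,
# Python). It runs only when an object is uploaded to a Cloud Storage
# bucket; the function name and the "processing" step are placeholders.
def on_image_upload(event, context):
    """Background function triggered by a Cloud Storage upload."""
    bucket = event["bucket"]   # bucket that received the file
    file_name = event["name"]  # name of the uploaded object

    # A real application might validate the image, generate a thumbnail,
    # or attach it to an appointment record here.
    print(f"Received {file_name} in bucket {bucket} (event {context.event_id})")
```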
Ultimately, every business has different compute requirements based on where they are in
their cloud adoption journey.
As such, determining the right blend of compute solutions is a necessary part of any
business's cloud strategy.
Now, before we talk about Google Cloud compute solutions, I want to cover a key
dimension of your cloud strategy.
That is your service architecture.
I'll explain more in the next video.
[upbeat music] Saman: Today, most of the world's enterprise computing still happens on-
premises.
It hasn't moved to the cloud yet because the path forward is complex, daunting, and full of
difficult decisions.
Sometimes workloads remain on-premises due to compliance or operational concerns.
So how do you modernize the infrastructure you have without jumping completely to the
cloud?
How do you bridge incompatible architectures while you transition?
How do you maintain flexibility and avoid lock-in?
Although there are many benefits to developing cloud first or cloud native applications and
systems, many enterprises have complex needs that will involve some on-premises
infrastructure working in conjunction with public cloud services provided by companies like
Google Cloud.
Before we go any further though, let's make sure we're using a standard definition for the
following terms: Private cloud, hybrid cloud, and multi-cloud.
Private cloud is where an organization has virtualized servers in its own data centers to
create its own private on-premises environment.
This might be done when an organization has already made significant investments in its
own infrastructure or if, for regulatory reasons, data needs to be kept on-premises.
Hybrid cloud is when an organization is using some combination of on-premises or private
cloud infrastructure and public cloud services.
This is the situation many organizations are currently in.
Some of their data and applications have been migrated to the cloud.
Others remain on-premises and interconnects between the private and public clouds allow
interoperability.
Multi-cloud is where an organization is using multiple public cloud providers as part of its
architecture.
In this case, the organization needs flexibility and secure connectivity between the
different networks involved.
An organization might choose to use either hybrid cloud or multi-cloud if they want to
incorporate specific elements of a public cloud in order to take advantage of the key
strengths of that provider.
For example, many organizations see enormous benefits from Google's BigQuery data
analytics tool, a serverless application that scales to multi-petabyte data sets, but may
keep the core applications generating data that needs to be processed on-premises.
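As a small aside on what serverless analytics looks like in practice, here is a hedged sketch of a query run through BigQuery's Python client library; the project, dataset, table, and column names are invented for illustration.

```python
# Minimal sketch of a serverless BigQuery query using the Python client
# library. The project, dataset, table, and columns are invented, and
# default application credentials are assumed to be configured.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT customer_region, COUNT(*) AS orders
    FROM `my-project.sales.orders`
    GROUP BY customer_region
    ORDER BY orders DESC
"""

# BigQuery provisions and scales the compute behind the query itself;
# there are no clusters or servers to manage.
for row in client.query(query).result():
    print(row.customer_region, row.orders)
```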
When organizations are considering a move to a hybrid cloud or multi-cloud situation, they
are often concerned about how easy it will be to move an application from one cloud to
another.
Google believes that being tied to a particular cloud shouldn't get in the way of you
achieving your goals.
Instead, Google believes in an open cloud where users have the rights to move their data
as they choose.
If organizations have the power to deliver their apps to different clouds while using a
common development and operations approach, this will help them meet their business
priorities and rapidly accelerate innovation.
Open source in the cloud preserves an organization's control over where they deploy their
IT investments.
Let's look at some examples.
Because Google Cloud uses open APIs, Google services are compatible with open-source
services and products.
This means you can take the code from, let's say, Google's Cloud Bigtable, a managed
database, and use that code somewhere else.
Because Google Cloud publishes key elements of its technology using open-source
licenses, customers can use products both on-premises and on multiple clouds.
One example of an open-source service you may have heard of is TensorFlow, an open-
source software library for machine learning developed inside Google.
Another you may have heard of is Kubernetes, a system for automating application
deployment, scaling, and management using a concept known as containerization.
Finally, Google Cloud has created Anthos, an open application modernization platform that
enables you to modernize your existing applications, build new ones, and run them
anywhere.
It allows you to build an application once and run it wherever you want, on-premises, on
Google Cloud, on a different public cloud.
This will help accelerate application development for your organization.
These examples of open-source solutions in the cloud enable businesses to leverage
Google Cloud infrastructure and deploy applications using Google Cloud's solutions on-
premises and/or using another cloud provider.
The reliability and resilience of the cloud infrastructure is critical to business operations
and success.
Now, another key component of a cloud strategy is a secure network.
Google's network carries as much as 40% of the world's internet traffic every day.
In fact, Google's network is the largest of its kind on Earth, and Google has invested
billions of dollars over the years to build it.
Google Cloud customers are able to run their applications and services on the same
infrastructure that Google uses to serve billions of users around the world.
The network is truly global, operating in over 200 countries and territories with 20 regions
and over 130 points of access.
This means that customers benefit from a private, well-provisioned, highly reliable global
network.
Now, you might be considering multiple factors as part of your cloud strategy, such as
cost, security, openness, and of course, the value of available products and services.
Perhaps like us at Google, you're taking the environment into consideration, too.
By moving compute from a self-managed data center or colocation facility to Google
Cloud, the net emissions directly associated with your company's compute and data
storage will be zero.
Why?
Because Google Cloud matches 100% of the energy consumed by our global operations
with renewable energy and maintains a commitment to carbon neutrality.
So when you use Google Cloud to store your data and develop your applications, for
example, your digital footprint is offset with clean energy, which reduces your impact on
the environment.
The takeaway is that every organization needs to think about their cloud strategy and
understand the available options.
Google Cloud provides a range of infrastructure solutions to help businesses modernize and
better serve their customers.
In the next video, I'll cover what those solutions are by category.
Google Cloud Compute solutions
[upbeat music] Saman: So far, you've learned about the benefits of infrastructure
modernization, the various compute options available, including virtual machines,
containerization, and serverless computing.
You've learned the difference between private, hybrid, and multi-cloud strategies and the
benefits of the global infrastructure that Google Cloud provides.
Now let's look at some specific Google Cloud solutions.
In this video, I'll cover VM-based compute options, including Compute Engine, Google
Cloud VMware Engine, and Bare Metal.
Next, I'll look at Google Kubernetes Engine, or GKE, which is a container-based compute
option.
Finally, I'll explore three serverless computing solutions, App Engine, Cloud Functions,
and Cloud Run.
Let's start with Compute Engine, which is a computing and hosting service that lets you
create and run virtual machines on Google's infrastructure.
Compute Engine delivers scalable, high performance virtual machines running in Google's
innovative data centers and worldwide fiber network.
Compute Engine VMs boot quickly, come with persistent disk storage, and deliver
consistent performance.
This solution is ideal if you need complete control over the virtual machine infrastructure.
It's also useful if you need to run a software package that can't easily be containerized or
have existing VM images to move to the cloud.
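To give a sense of how programmable this infrastructure is, here is a hedged sketch that lists the virtual machines in one zone with the Compute Engine client library for Python; the project ID and zone are placeholders, and credentials are assumed to be set up already.

```python
# Minimal sketch: list the Compute Engine VMs in one zone. The project ID
# and zone are placeholders; application default credentials are assumed.
from google.cloud import compute_v1

def list_vms(project_id: str = "my-project", zone: str = "us-central1-a") -> None:
    instances = compute_v1.InstancesClient()
    # Each item describes one VM: its name, machine type, status, and so on.
    for instance in instances.list(project=project_id, zone=zone):
        print(instance.name, instance.status)

if __name__ == "__main__":
    list_vms()
```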
To better understand how Compute Engine works, let's turn to an example of a company
that used this option to overcome challenges and scale their business.
Spotify had reached a tipping point with their current business model where it wouldn't be
able to scale any further.
By leveraging Compute Engine, Spotify was able to effortlessly scale their business to
reach millions of users.
Google Cloud has allowed Spotify to build the audio network of the future and continue
innovating, all while providing users with billions of unique experiences.
Another VM-based solution is Google Cloud VMware Engine.
This is a type of software that you can run on a virtual machine.
Google Cloud VMware Engine is a fully managed service that lets you run the VMware
platform in Google Cloud.
Google manages the infrastructure, networking, and management services, so that you
can use the VMware platform efficiently and securely.
An example of a company that uses Google Cloud VMware Engine is DBG, one of the
world's leading exchange organizations.
They use Google Cloud as the foundation for a scalable, resilient, and compliant
infrastructure for financial markets.
Using Google Cloud VMware Engine, DBG was able to spin up a new private cloud in
under 40 minutes with minimal disruption.
This enabled them to scale their business on demand and meet customer needs while still
using their VM tools and existing processes.
The final VM-based compute solution we'll cover today is Bare Metal.
You can migrate many existing workloads to the cloud easily.
However, some specialized workloads are difficult to migrate to a cloud environment.
These workloads require hardware, complicated licensing, and support agreements.
Bare Metal enables you to migrate specialized workloads to the cloud while maintaining
your existing investments and architecture.
This allows you access to and integration with Google Cloud services with minimal latency.
Next, let's look at the Google Cloud container-based solution, Google Kubernetes Engine,
often shortened to GKE.
Google Kubernetes Engine, or GKE, provides a managed environment for deploying,
managing, and scaling your containerized applications using Google infrastructure.
The GKE environment consists of multiple machines, specifically Compute Engine
instances, grouped together to form a cluster.
GKE allows you to securely speed up app development, streamline operations, and
manage infrastructure.
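Because a GKE cluster is made up of Compute Engine instances, you can see that relationship directly; the sketch below is illustrative rather than something the course walks through, and it assumes your kubeconfig already points at a GKE cluster.

```python
# Minimal sketch: list the nodes of a GKE cluster, which correspond to the
# underlying Compute Engine instances. It assumes cluster credentials have
# already been written to kubeconfig (for example with gcloud).
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    print(node.metadata.name)
```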
An example of a company that used GKE to improve their business is Current.
Current is a financial technology company that offers a debit card and app made for
teenagers.
Current uses GKE to improve time to market for application development by 400% while
eliminating downtime for users.
Finally, I'll cover three serverless computing solutions.
Let's start with App Engine.
Google App Engine is a platform as a service and cloud computing platform for developing
and hosting web applications.
App Engine lets app developers build scalable web and mobile backends in any
programming language on a fully managed serverless platform.
This means app developers can focus on writing code without having to manage the
underlying infrastructure.
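For context, a minimal App Engine standard environment service in Python is essentially a small web application plus a short configuration file; the sketch below shows only the application side, and the route and message are invented for illustration.

```python
# main.py -- minimal sketch of a web app that could run on App Engine's
# Python standard environment (App Engine also expects an app.yaml file
# declaring the runtime). The route and message are illustrative only.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from a fully managed, autoscaling platform!"

if __name__ == "__main__":
    # Local testing only; in production, App Engine serves the app for you.
    app.run(host="127.0.0.1", port=8080, debug=True)
```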
IDEXX Laboratories, Inc. develops and manufactures veterinary care products and
technologies, including diagnostic tools and information technology.
IDEXX used Google App Engine to launch VetConnect Plus, an app that gives
veterinarians anytime, anywhere access to clinical decision support data that keeps pets
healthy.
By leveraging Google App Engine, IDEXX Laboratories was able to save up to $500,000
in annual IT spend.
Now let's look at another serverless computing solution, Cloud Run.
Cloud Run allows developers to build applications in their favorite programming language
with their favorite dependencies and tools and deploy them in seconds.
Cloud Run abstracts away all infrastructure management by automatically scaling up and
down from zero almost instantly depending on user traffic.
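From the developer's point of view, a Cloud Run service can be as plain as the hedged sketch below: an ordinary web app, packaged in a container, that listens on the port Cloud Run passes in through the PORT environment variable; the endpoint and response text are invented.

```python
# Minimal sketch of a service suitable for Cloud Run: an ordinary
# containerized web app that listens on the PORT environment variable
# Cloud Run supplies. The endpoint and response are illustrative.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Cloud Run scales instances of this container with traffic."

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 8080))
    app.run(host="0.0.0.0", port=port)
```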
But how can Cloud Run improve businesses and offer real-world solutions?
Well, Veolia Group provides access to water, waste, and energy resources for millions of
people across 52 countries.
It develops sustainable solutions to preserve and replenish these resources across
communities and industries.
By leveraging Cloud Run for their algorithms, Veolia has benefited from automatic scaling,
multiple route support, and fast deployments, all while saving money.
The third serverless compute solution is Cloud Functions.
Cloud Functions is a serverless execution environment for building and connecting cloud
services.
It offers scalable, pay-as-you-go functions as a service to run your code with zero server
management.
Cloud Functions offers a simple and intuitive developer experience.
You or your developers can simply write your code and let Google Cloud handle the
operational infrastructure.
With Cloud Functions, developers are also more agile as they can write and run small
code snippets that respond to events.
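For comparison with the storage-triggered example earlier, here is a hedged sketch of an HTTP-triggered function written with the Functions Framework for Python; the function name, query parameter, and response are placeholders.

```python
# Minimal sketch of an HTTP-triggered Cloud Function using the Functions
# Framework for Python. The function name, parameter, and response are
# placeholders; Google Cloud handles servers, scaling, and routing.
import functions_framework

@functions_framework.http
def hello_api(request):
    """Responds to an HTTP request with a short greeting."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```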
Lucille Games is a good example of a company that optimized their business by
harnessing Cloud Functions.
Lucille Games used Cloud Functions and other Google Cloud solutions to build apps, run
servers, and create original games that can scale to millions of users on demand.
As I mentioned before, infrastructure modernization serves as the foundation for digital
transformation.
It's important to think carefully about your cloud strategy and what compute options you
can leverage.
How you build your architecture influences how your business harnesses applications,
manages data, and ultimately develops and thrives in this ever-evolving digital age.
Whether you're able to embrace innovation or whether you're constrained by your cloud
environment is determined by the choices you make now.
In the next module, we'll explore another important factor in your cloud adoption journey,
application development.
Leveraging the right applications in your business can transform how you work and unlock
new value.
And application development in the cloud doesn't belong exclusively to the IT team.
Quiz 1:
1.
Which specific cloud computing feature helps businesses serve their customers without
service interruption and in a cost-effective way? Select the correct answer.
2.
What do containers recreate or virtualize? Select the correct answer.
Hypervisor
Operating systems
Virtual machines
Hardware
3.
Aarav is a Chief Technical Officer and is considering using public cloud services,
specifically to modernize their company’s IT infrastructure. Which of the following can
Aarav use to build a business case for using an Infrastructure-as-a-Service (IaaS)
solution? Select the correct answer.
4.
App Engine, Cloud Functions and Cloud Run are all what type of Google Cloud compute
option? Select the correct answer.
Hybrid computing
Serverless computing
VM-based computing
Software computing
5.
A national hotel chain is using a combination of on-premises data centers and public cloud
services for their IT infrastructure. What type of IT infrastructure model is this? Select the
correct answer.
Colocation
Hybrid cloud
Multi-cloud
Virtualization
This module explores how businesses can modernize their existing applications and build
new ones in the cloud. It focuses on five common change patterns for businesses who want
to modernize their applications. Next, the module turns to some key application development
challenges that businesses face, before highlighting two Google Cloud solutions for these
challenges: Google Kubernetes Engine and App Engine.
Introduction:
Saman: And now onto Module 2, "Modernizing Applications with Google Cloud."
In the last module, you learned about infrastructure, which is the IT backbone of any
business.
In this module, we'll focus on applications.
First, what are applications exactly?
Simply put, applications are computer programs or software that help users do something.
And in this digital age, they're everywhere.
Think about how many applications you interact with on a day-to-day basis.
You check your emails or scroll through your social media via an app.
Perhaps you track your fitness with wearable technology that links to an app on your
phone.
You might even create and share content with your colleagues via specific applications.
The list is endless.
Customers now expect intuitive, well-functioning applications that enable them to do things
faster, and a business's capacity to meet that demand influences their ability to thrive in
the cloud era.
Applications, however, aren't only possible because of cloud technology.
Applications have been developed on-premises for years and still are.
But on-premises application development often slows businesses down.
This is because deploying an application on-premises can be time-consuming and require
specialized IT teams, and any new changes can take six months or even more to
implement.
This then creates friction within different parts of the business.
For example, customer-facing teams that want specific features might be delayed by
developers who struggle to make updates fast enough, or developers who really want to be
innovative and try new things might be inhibited by operations teams that are concerned
about the stability of existing applications.
Cloud technology enables businesses to develop and manage applications in new ways,
so they're more agile and responsive to user needs.
In other words, businesses are able to develop applications and deliver updates quickly
and respond to the needs of their customers rapidly.
In this module, I'll explore how businesses can modernize their existing applications and
build new ones in the cloud.
In particular, I'll cover five common change patterns for businesses who want to modernize
their applications.
Next, I'll consider some key application development challenges that businesses face.
Then I'll highlight two Google Cloud solutions, Google Kubernetes Engine and App
Engine.
Let's get started.
person: It's often assumed that modernizing an application with cloud technology can only
be done in one way: move everything to the cloud all at once.
Yikes.
That can be risky, especially for large applications.
The good news is that's just one approach.
Moving an application to the cloud doesn't need to be done all at once.
Google Cloud has identified five common patterns that businesses can adopt when they
want to modernize their applications.
A business can move applications to the cloud first and then change them, or they can
change their applications before they move, or they can invent in greenfield, or invent in
brownfield, or they can just move their applications without any changes.
Let's look at each of these in turn.
If an organization wants to take a relatively conservative approach to modernizing
applications with the cloud, they might take a move first then change approach.
This path typically starts with a lift and shift program for selected applications.
The migration of these applications typically brings minimal changes to the ways of
working within the organization, but once the applications are running in the cloud, they
are then ready to be updated more easily than when they were running on-premises.
For example, a legacy application that is moved to the cloud could have its security
improved by using the enhanced firewall and identity and access management (IAM)
capabilities of Google Cloud.
Over time, further modernization can be explored potentially using APIs to change the way
that the application interacts with data and other applications, or even making the
application serverless so that it can become a cloud-native, event-driven application, the
most efficient form of application architecture.
After the first set of applications have been re-architected and optimized in the cloud,
further applications can be moved.
Think of this like renovating your house to maximize your space.
You don't have to renovate every room all at once and you don't want to completely redo
every room either.
You can start to make changes as you're ready for them based on your needs and budget.
Suppose, for instance, you want to start with the kitchen.
You could replace the kitchen cabinets and countertops and still continue to use the
electric oven as is.
Eventually, after you've put in a gas line, you can then replace your electric oven with a
gas range.
If an organization wants to take a more aggressive approach to modernizing its
applications, they can re-architect applications first to make them more cloud ready before
migrating them.
For our analogy, that might mean completely changing the design of the kitchen and the
placement of the appliances for maximum efficiency and buying brand new appliances for
the new design before doing the renovation work.
For some organizations, their initial interest in the cloud is because they want the ability to
build new, innovative applications quickly.
They may not want to or be ready to move existing applications at this point.
So when we talk about a greenfield strategy, we're talking about building an entirely new
infrastructure and applications in the cloud.
It's like creating an office and buying furniture for it as part of your renovation project when
you don't currently have an office in your existing home.
This approach really only applies when an organization needs to develop new products or
offerings, such as a B2C bank that wants to develop its digital banking channel.
The organization doesn't need to touch older applications just yet.
They could take either the move and change or change and move approach if they decide
to modernize them at a later point.
Inventing in greenfield allows you to build that innovative application that will help drive the
business forward, but it does require agility, access to a diverse development skillset, and
strong support from leadership.
A brownfield strategy, on the other hand, is to invent a new application in the cloud
environment that will replace an existing legacy application that remains on premises.
The legacy application is only retired after the new application is built.
In our analogy, it's like creating a new office in your house while continuing to use a
cluttered desk space in the corner of a living room.
You don't move any furniture or reorganize your documents until you know the new office
space is set up.
Although this redundancy can be comforting because it minimizes risk, especially for
mission-critical applications, there are increased costs associated with running applications
in both places.
Finally, it's worth noting that building cloud native applications isn't appropriate for all
scenarios.
For some use cases, it's efficient to leverage the cloud just to modernize the infrastructure
layer as we discussed in the previous module.
One possible use case is cloud storage for data to allow organizations to decommission
on premises data centers.
Another use case is modernizing the infrastructure only to allow organizations to create a
virtualized environment for disaster recovery.
Over the next few videos, we'll look at how the cloud can support application development
and maintenance.
[music] Person: Many business professionals share similar concerns around application
development processes and timelines.
Creating a new application within an organization can be a challenge.
Have you had the experience of going to your tech team and suggesting a new
application, only for them to tell you it will take 18 months or maybe even tell you it's not
possible with the legacy systems already in place?
Traditionally, when business professionals want a new application, the tech or IT team has
to do a lot of work to identify features, estimate capacity, define a technical architecture,
consider integration with other systems, and allocate resources even before a line of code
is written.
Once the requirements are agreed on, a new application will have to be designed, built,
tested, integrated, and deployed.
But new needs often compete with existing projects for time and resources.
For some teams, this means spending just as much time creating and managing
environments as is spent building business value.
Whether building an app on premises or in the cloud, developers still need to make
decisions about overall network architecture, choice of database, and type of server.
All of these can slow down the application development process and even the launch of
applications.
The challenges for building apps using an on premises infrastructure can outnumber those
of cloud native apps and can often be frustrating for developers and business
professionals.
Developers want to be creative and innovative by building new solutions, not spending
hours maintaining the infrastructure.
When developers get too far removed from the tasks they enjoy, they naturally start to
seek out more interesting job opportunities that allow them to focus on building new apps
and technologies.
In addition to losing a key team member who needs to be replaced quickly and carefully, the
organization loses the intangible knowledge that good developers take with them when
they leave.
Developing cloud native applications avoids the hassle of trying to create something that is
constrained by legacy systems and outdated processes.
Building a new application in the cloud means you can be more agile in your development.
It frees teams up from worrying about environments so they can focus on creating features,
which is where customers will get real value.
Updating already existing applications that have been typically built on premises presents
difficulties, too.
Often, an application has been built with a monolithic architecture.
This means that as it's updated over time, its code base becomes bloated, making it
difficult to change something without breaking something else.
And when an application is updated, the entire application needs to be deployed and
tested, even if the change is only small.
This makes implementing updates a lengthy and potentially risky process.
When building new applications or modernizing existing ones, a microservice architecture
can reduce these problems.
This type of architecture involves the separation of a large application into small, loosely
coupled services.
The code base for each service is modular so it's easy to determine where the code needs
to be changed.
And when a code change is required, the service can be updated and deployed
independently.
In addition, each service can be scaled independently depending on its specific
requirements.
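As a simplified illustration of that idea, and not an example from the course, each capability can become its own small service with its own code base; the service name, route, and data below are invented.

```python
# Simplified sketch of one microservice in a larger application. It owns a
# single capability (account balances), has its own small code base, and
# can be updated, deployed, and scaled independently of other services.
# The route and data are invented for illustration.
from flask import Flask, jsonify

balance_service = Flask("balance-service")

FAKE_BALANCES = {"alice": 1520.75, "bob": 89.10}  # stand-in for a real datastore

@balance_service.route("/balances/<user>")
def get_balance(user: str):
    return jsonify({"user": user, "balance": FAKE_BALANCES.get(user, 0.0)})

if __name__ == "__main__":
    # Other services (payments, notifications, ...) run as separate
    # deployments and call this one over HTTP.
    balance_service.run(port=8081)
```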
Adopting an automated continuous integration and continuous deployment approach, also
known as CI/CD, can help you increase your application release velocity and reliability.
With a robust CI/CD pipeline, you can test and roll out changes incrementally instead of
making big releases with multiple changes.
This approach enables you to lower the risk of regressions, debug issues quickly, and roll
back to the last stable build if necessary.
It also means you can update applications without interrupting services to your users.
Imagine being able to deliver new features to your customers every day instead of a few
times a year.
Here's the important bit.
Some organizations have been able to adopt CI/CD to build applications faster but not
always with the high quality that customers demand.
This is because they don't invest enough in building quality into the process.
When building an application, you need to consider how quickly your systems can recover
from downtime.
If you're not able to recover from production infrastructure failures quickly, it doesn't matter
how quickly you deliver software; you won't be able to deliver better customer
experiences.
Google Cloud Developer Tools help you release software at a high velocity while
balancing security and quality.
There are two tools we'll look at in this module: Google Kubernetes Engine and App
Engine.
You might remember Google Kubernetes Engine from the last module.
Let's explore how it enables businesses to be more agile in app development.
App Engine
[serene music] person: I mentioned App Engine in the last module.
App Engine is a platform for building scalable Web applications and mobile back ends.
It allows you to concentrate on innovating your applications by managing the application
infrastructure for you.
For example, when you're building an application, App Engine manages the hardware and
networking infrastructure required to run your code so developers no longer need to spend
valuable time doing this.
During deployment, App Engine will scale your application automatically in response to the
amount of traffic it receives so you only pay for the resources you use.
Just upload your code, and Google will manage your app's availability.
You can easily run multiple versions of your app to test new features or designs with end
users.
And because there are no servers for you to provision or maintain, the monitoring and
maintenance processes are easier too.
Let's look at an example.
EDP is one of the world's leading utility companies with a presence in countries across
Europe, North and South America, and Asia.
It's an end-to-end operator involved in the generation, distribution, and trading of electricity
and gas.
As a large company responsible for diverse operations, EDP has a complex IT
infrastructure with over 400 applications.
Many of EDP's IT systems were legacy systems not designed to integrate with one
another, leading to inefficient delivery of data.
In particular, EDP was experiencing problems with the performance of its customer
account mobile app, which allows customers to check their usage, account, and payment
details for their electricity and gas accounts.
EDP also needed additional capacity to meet peaks in demand.
To address these issues, EDP rebuilt the app in only two months using App Engine.
The auto-scaling functionality in App Engine means that their new app easily scales to
meet peaks in demand, and customers can now access their data even when EDP's back
end systems are under maintenance.
The new app has delivered significant gains for EDP in terms of both performance and
customer satisfaction.
After EDP migrated its customer service app to App Engine, the average page loading
time decreased by almost 90%, and its App Store review ratings jumped from 1.9 to 4.7
in just a couple of weeks, with downloads increasing as a result.
There are many more examples of customers who have leveraged Google Kubernetes
Engine and App Engine as part of their digital transformation.
Click on the links provided in the reading to learn about three customers in particular who
increased developer velocity and provided amazing customer experiences.
Customer Examples
The following are a few Google Cloud customers who have successfully used Google
Kubernetes Engine (GKE) and App Engine.
Arcules
Arcules is a Canon Company, and Google Cloud partner, that uses IoT technology to
deliver the next generation of cloud-based video surveillance, access control and video
analytics – all in one unified, intuitive platform. Click the link to read about their case
study. (https://cloud.google.com/customers/arcules/ )
Forbes
Forbes Media (Forbes) is a global media, branding, and technology company, with a
focus on news and information about business, investing, technology, entrepreneurship,
leadership, and affluent lifestyles. The Forbes brand reaches more than 120 million
people worldwide through its popular magazine and ForbesLive events, with 40 licensed
local editions in 70 countries. Click here to read their case study.
(https://cloud.google.com/customers/forbes/ )
Zulily
Quiz 2:
1.
A financial services firm wants to migrate an existing application to the cloud but doesn’t
want to risk service downtime. For this reason, they have chosen to opt for redundancy
and build a new application in the cloud while continuing to run their old application on-
premises. Which standard pattern of cloud migration describes this scenario? Select the
correct answer.
Invent in greenfield
Change then move
Invent in brownfield
Move then change
2.
The technology team of a pharmaceutical business decides to adopt an automated
continuous integration and deployment (CI/CD) approach. What is the primary value of
using a CI/CD approach for the overall business? Select the correct answer.
3.
What is Google Kubernetes Engine (GKE)? Select the correct answer.
5.
What is App Engine? Select the correct answer.
This module explains how APIs are a tool for both digital integration and digital
transformation. It begins by defining legacy systems and the specific barriers to the demands
of the digital age. It then defines APIs and explains how they can modernize legacy systems
and create new business value. The module closes with a description of Apigee, an API
management tool, along with customer use cases.
Introduction:
Saman: Welcome to module 3, the Value of APIs. What are they? I'll explain in just a
moment. So far in this course, I've explored an organization's technical foundation in the
cloud. Many businesses have a variety of systems and applications, and for traditional
enterprises, many of those systems and applications were built on-premises.
For traditional companies, legacy systems and applications are complex, expensive to
maintain, and do not provide the speed and scale required to deliver seamless, digital
experiences that consumers now expect. When it comes to digital transformation,
companies typically have the following three primary goals: modernize IT systems such
as compute solutions,
modernize applications so they can remain relevant in today's cloud era, and thirdly,
leverage application programming interfaces or APIs to unlock and create value for
customers. So in this module, I'll explore how APIs are a tool for both digital integration
and digital transformation. I'll begin by defining legacy systems and identifying why they
struggle
to meet the demands of the digital age. Then I'll define APIs and explain how they can modernize
legacy systems. Next, I'll explore examples of how APIs create new business value. And
finally, I'll look at Apigee, a Google Cloud solution for developing and managing APIs. Let's
jump in.
Many enterprise decision makers and advocates are constantly choosing between
maintaining legacy systems and developing new and innovative projects.
Why is this often a trade off for businesses?
A legacy system is outdated computing software and or hardware that is still in use.
The legacy system is mission critical, but often not equipped to deliver new services or
upgrades at the speed and scale that users expect.
Worse, a legacy system often cannot connect to new systems.
Common examples of legacy systems include H.R. or employee management systems,
banking systems, databases, data warehouses, data lakes or systems designed for
government operations.
These are all different systems to store and manage data.
All of these systems, whether they're on premises or in a private, multi-cloud, or hybrid
cloud environment, are valuable to the business because they hold a large amount of
data.
But unlocking the value of that data is challenging.
Why?
I'll tell you. First, legacy systems weren't developed to support the implementation and
adoption of modern technologies such as the cloud, the Internet of Things, or mobile
applications.
Second, they were developed for a time when data was shared in batches or at specific
time intervals.
This means that legacy systems are not designed to serve real time data, as is expected
in today's digital world.
As a result, legacy systems tend to hold organizations back from using digital technologies
to innovate or improve IT efficiency.
Naturally, modernizing IT infrastructure is central to digital transformation so that
businesses no longer have to choose between maintenance and innovation.
This means they need a well-designed integration strategy that leverages application
programming interfaces, or APIs.
In the next video, I'll explain what APIs are and how they can be used to modernize legacy
systems in more detail.
In the previous video, I shared some challenges of legacy systems and mentioned that
APIs are a way to solve those challenges.
But what is an API exactly?
And how can you use it to modernize your infrastructure?
An API is a piece of software that connects different applications and enables information to flow between systems.
Ultimately, APIs enable integration between systems so businesses can unlock value and
create new services.
They do this by exposing data in a way that protects the integrity of the legacy systems
and enables secure and governed access to the underlying data.
This allows organizations with older systems to adapt to modern business needs and,
more importantly, to quickly adopt new technologies and platforms.
APIs enable businesses to unlock value without re-architecting all of those legacy
applications.
But it's important to remember that legacy modernization is not the end goal.
Rather, it's a way to build long-term flexibility so an organization can meet evolving needs
and better serve customers.
Customers expect real time, seamless experiences across platforms.
Businesses now have the opportunity to digitize experiences throughout their value chain
to meet their customers' expectations.
Consider a traditional retail bank.
They have lots of legacy systems that have valuable data.
In the past decade, cloud native startup banks have entered the market and traditional
banks must urgently adapt to a changing market.
As a result, they want to provide customers with a connected digital experience through
mobile banking.
The app needs to show an up-to-date account balance as soon as you open the application.
The data that provides that information is stored in a legacy database.
To connect that database to the end-user application, the bank creates an API that allows
information to flow between the application and the legacy database seamlessly and
securely.
So how does this process work?
The web or mobile apps are built by internal enterprise developers or by external
third-party companies.
APIs are built and managed by the API team.
Within the Enterprise, app developers leverage those APIs to integrate with backend
services and other service endpoints.
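As a purely illustrative sketch, this is roughly what the app developer's side of that integration could look like in Python; the base URL, API key, endpoint, and field names are all invented, not the bank's real API.

import requests

API_BASE = "https://api.examplebank.com/v1"  # invented base URL
API_KEY = "demo-key-123"                     # hypothetical key issued by the bank's API team

def fetch_balance(account_id):
    # The app never touches the legacy database directly; it only calls the API.
    response = requests.get(
        f"{API_BASE}/accounts/{account_id}/balance",
        headers={"x-api-key": API_KEY},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["balance"]

if __name__ == "__main__":
    print(fetch_balance("12345"))

The app developer only needs the API contract; the legacy database behind it stays hidden.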
APIs are therefore central to any business's digital transformation strategy.
They enable faster innovation, so organizations can bring new services to market quickly
and create new business value.
I'll explore this in the next video.
[bright music] person: APIs started as a tool to facilitate access and integration, and they
still serve that function, but now APIs can do so much more.
When organizations start to think about and build an API-first architecture, they can build
new digital ecosystems and create new business value.
A digital ecosystem is a group of interconnected companies and products.
This includes vendors, third party suppliers, customers, and applications, just to name a
few.
A robust, well-connected, and multifaceted digital ecosystem enables businesses to create
and monetize new digital experiences.
The more you know about your customers, the better you're able to offer a truly integrated
end-to-end digital experience.
And the more services you have in your ecosystem to connect to those customers, the
more you'll learn about them.
So APIs are not only a longstanding integration technology, they are the fundamental
building blocks of digital transformation.
Let's look at Walgreens, for example: a large organization based in the U.S. with thousands
of brick-and-mortar stores.
Their business includes a range of products and services from a pharmacy, to photo
printing, to food and drink.
Over the past decade, they embraced a culture of innovation and APIs were a key
accelerator in their digital transformation.
For example, one service they historically provided was photo development from film.
With the advent of digital cameras and then smartphones, consumer needs changed.
People weren't bringing in film to be developed anymore, so Walgreens asked themselves,
"How do we re-engage smartphone users with photo printing?"
Walgreens built the photo printing API.
This provided a photo experience that allows developers who are building smartphone
apps to connect to Walgreens to print out photos.
So instead of only having create, edit, and share buttons for what an end user can do with
their photo, there's also a print button that connects to any Walgreens store.
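To give a feel for that kind of integration, here is a hypothetical sketch in Python; the endpoint, parameters, and response fields are invented for illustration and are not Walgreens' actual photo API.

import requests

PHOTO_API = "https://api.example-pharmacy.com/photo/v1"  # invented URL
PARTNER_KEY = "partner-key-1"                            # hypothetical key issued to the partner app

def order_print(image_url, store_id):
    # Submit a print job that the customer picks up at their chosen store.
    response = requests.post(
        f"{PHOTO_API}/printjobs",
        headers={"x-api-key": PARTNER_KEY},
        json={"image_url": image_url, "store_id": store_id, "size": "4x6"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["order_id"]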
In this model, Walgreens is partnering with the developer community to serve customers in
a new way.
The developers are a critical part of its digital ecosystem.
Walgreens is therefore earning revenue from the developers as well as the revenue from
the customers who come in to print their photos.
Walgreens treated the API as a product, not just a tool for integration.
That API has created an entirely new revenue stream and enabled the photo printing line
of business to return to being profitable.
But Walgreens didn't stop there.
Almost immediately after it launched the photo printing API, developers in its ecosystem
approached Walgreens with ideas for other types of products they wanted to provide their
customers, such as photo books, cards, and canvas prints.
By investing in its digital ecosystem and engaging developers, Walgreens was able to
further monetize its API and create new services to meet the demand.
Another business that has embraced an API strategy to create new value is Monex.
Monex is a global technology-based retail financial service provider.
Their mission is to provide investors with the best financial services and liberal access to
capital markets.
They couldn't upend their backend systems quickly enough to deploy new services or even
modify existing ones.
So they developed an API to save time and simplify the process of developing new
products and services.
With the API, they're no longer constrained by their legacy systems.
Instead, they can develop services and smartphone apps more rapidly.
They've unlocked value from their existing backend services using an API and are able to
provide more seamless, digital experiences for their customers.
But that's not all.
They decided to publish their API for everyone in the financial technology business who is
developing new apps.
This has transformed how their partner financial technology firms display customer portfolio
balances in their apps, improving security and performance for their business partners.
It has also placed Monex at the center of the financial technology ecosystem.
While they initially had a team of developers who worked on the API program, they quickly
realized that an on-premises API gateway wouldn't enable them to scale their program at
the pace or at the performance they hoped for with the required built-in security.
So they turned to Apigee, an API management platform.
But hang on.
What is Apigee, and what can it do?
Check out the next video to learn more.
Apigee
[bright music] person: In an earlier video, I presented some of the common challenges that
legacy systems pose for organizations.
And as companies adopt cloud technology to meet their business needs, there's a
widening gap between modern applications and legacy systems.
In this video, I'll begin by providing an overview of the infrastructure and application
development gaps, then explain how Apigee addresses these gaps, and finally highlight a
customer success story.
Legacy systems like CRMs, ERPs, SOAs, databases, data warehouses, and data lakes
provide businesses with data but don't provide features and capabilities at the rate of change
demanded by today's users.
Modern applications, on the other hand, provide connected experiences and can be rapidly
updated to meet user demands.
Applications that provide these connected experiences to end users must be able to do so
securely and at scale.
Developers therefore need to manage the entire application lifecycle, connect to different
backend systems, including the legacy ones, and be able to track and analyze the
interactions between consumers, data, and service producers.
Many businesses started with a small team of developers who were responsible for
creating APIs specifically for modernizing legacy systems and for creating modern
applications.
But as the company's digital ecosystem becomes more complex, the required time and
effort to manage hundreds of APIs securely and at scale becomes costly.
That's where Apigee comes in.
The Apigee platform includes an API Services layer that provides the runtime API gateway
functionality.
It also includes developer services, which means that a developer can access a portal to
utilize your APIs for their projects.
They can also register their applications.
Measuring and tracking the performance of APIs is a critical component in API
management.
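As a conceptual sketch only, and not Apigee code, the following Python illustrates the kind of work an API gateway layer performs on every call: checking that the caller is a registered app, enforcing a quota, forwarding the request to the backend, and capturing data for analytics. All names and limits are invented.

import time
from collections import defaultdict

import requests  # used here only to forward calls to the backend

REGISTERED_APPS = {"partner-key-1": "partner-photo-app"}  # hypothetical registered developer apps
QUOTA_PER_MINUTE = 60
BACKEND_URL = "https://backend.example.com"               # hypothetical backend service
_call_log = defaultdict(list)                             # api_key -> timestamps of recent calls

def handle_request(api_key, path):
    # 1. Security policy: only registered apps may call the API.
    if api_key not in REGISTERED_APPS:
        return 401, "unknown API key"
    # 2. Quota policy: limit each app to QUOTA_PER_MINUTE calls per minute.
    now = time.time()
    recent = [t for t in _call_log[api_key] if now - t < 60]
    if len(recent) >= QUOTA_PER_MINUTE:
        return 429, "quota exceeded"
    _call_log[api_key] = recent + [now]
    # 3. Forward the call to the backend system that the API fronts.
    backend_response = requests.get(BACKEND_URL + path, timeout=5)
    # 4. Analytics: a real platform would record latency and status codes for dashboards.
    return backend_response.status_code, backend_response.text

A managed platform handles these concerns, plus the developer portal and app registration described above, so teams don't have to rebuild them for every API.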
Let's take a look at a customer that used Apigee as part of their digital transformation
strategy.
Chile's Ministry of Health faced an enormous challenge.
Their healthcare services and medical facilities lacked connectivity and interoperability
between their respective systems.
This meant that healthcare professionals struggled to access comprehensive medical
records and couldn't utilize new technologies to provide better patient care.
Previous integration attempts failed because they were so time consuming and expensive.
So they embraced an API-first architecture to support some key programs, including a
national program to join disconnected healthcare centers and a plan to digitize all clinic
and administrative processes.
The Apigee platform has been the accelerator for the entire program, providing visibility
and controls that make APIs easier to manage.
The Ministry of Health was able to reduce costs by sharing information, eliminating delays,
and reducing the duplication of medical tests.
If you'd like to learn more about Chile's Ministry of Health transformation, check out the link
on the screen.
Quiz 3:
1.
What is the function of APIs? Select the correct answer.
They offer hybrid data storage.
They enable rapid autoscaling of data.
They provide real-time analytics.
They enable integration between systems.
2.
How can businesses use APIs to unlock value from their legacy systems? Select the
correct answer.
3.
Michelle wants to manage her team's APIs and provide security policies for identity
verification, authentication, and access control. What Google Cloud solution should she
choose? Select the correct answer.
BigQuery
Cloud Identity
Google Kubernetes Engine
Apigee
4.
What is a critical outcome of API management? Select the correct answer.
5.
Why do legacy systems struggle to meet modern consumer expectations? Select the
correct answer.
They rapidly surpass physical capacity.
They ineffectively process batch data.
They only serve real-time data.
They scale slowly.
Summary
The Infrastructure and Application Modernization with Google Cloud course provides a
great foundation for understanding the disadvantages of an outdated IT infrastructure
and ways cloud technology can be used to modernize applications in a way that suits an
organization’s business objectives and constraints.