22CS907 Unit3 Final

This document is a course outline for 'Cloud Foundations' at RMK Group of Educational Institutions, detailing objectives, prerequisites, syllabus, and outcomes for the course. It covers various aspects of cloud computing including APIs, security, networking, and big data services, along with practical exercises. The document is confidential and intended solely for educational purposes, prohibiting unauthorized dissemination.

Please read this disclaimer before proceeding:

This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. It contains proprietary information and is
intended only for the respective group / learning community. If you are not the
addressee, you should not disseminate, distribute, or copy it through e-mail.
Please notify the sender immediately by e-mail if you have received this document
by mistake, and delete it from your system. If you are not the intended recipient,
you are notified that disclosing, copying, distributing, or taking any action in
reliance on the contents of this information is strictly prohibited.
22CS907

CLOUD FOUNDATIONS

Department : CSE

Batch/Year : 2022-2026 / II Year

Created by:

Dr.D.RAJALAKSHMI, Associate Professor / CSE


Dr.G.NIRMALA, Associate Professor / CSE

Date : 29.12.2024
1. CONTENTS

1. Contents
2. Course Objectives
3. Pre-Requisites
4. Syllabus
5. Course Outcomes
6. CO-PO/PSO Mapping
7. Lecture Plan
8. Activity Based Learning
9. Lecture Notes
10. Assignments
11. Part A Questions & Answers
12. Part B Questions
13. Online Certifications
14. Real Time Applications
15. Assessment Schedule
16. Text Books & Reference Books
17. Mini Project Suggestions
18. Contents Beyond the Syllabus

2. COURSE OBJECTIVES

 To describe the different ways a user can interact with Cloud.

 To discover the different compute options in Cloud and implement a variety of
structured and unstructured storage models.

 To discuss the different managed application service options in the cloud and
outline how security is administered in the cloud.

 To demonstrate how to build secure networks in the cloud and identify cloud
automation and management tools.

 To determine a variety of managed big data services in the cloud.
3. PRE REQUISITES

• Pre-requisite Chart

22CS907 – CLOUD FOUNDATIONS

22IT202– DATABASE
MANAGEMENT SYSTEMS
4. SYLLABUS
22CS907  CLOUD FOUNDATIONS  L T P C
                            2 0 2 3
UNIT I INTRODUCTION TO CLOUD 6+6
Cloud Computing - Cloud Versus Traditional Architecture - IaaS, PaaS, and SaaS - Cloud
Architecture - The GCP Console - Understanding projects - Billing in GCP - Install and
configure Cloud SDK - Use Cloud Shell - APIs - Cloud Console Mobile App.

List of Exercise/Experiments:
1. Install and configure cloud SDK.
2. Connect to computing resources hosted on Cloud via Cloud Shell.
UNIT II COMPUTE AND STORAGE 6+6
Compute options in the cloud - Exploring IaaS with Compute Engine - Configuring elastic
apps with autoscaling - Exploring PaaS - Event driven programs - Containerizing and
orchestrating apps - Storage options in the cloud - Structured and unstructured storage in
the cloud - Unstructured storage using Cloud Storage - SQL managed services - NoSQL
managed services.
List of Exercise/Experiments:
1. Create virtual machine instances of various machine types using the Cloud Console and
the command line. Connect an NGINX web server to your virtual machine.
2. Create a small App Engine application that displays a short message.
3. Create, deploy, and test a cloud function using the Cloud Shell command line.
4. Deploy a containerized application.
5. Create a storage bucket, upload objects to it, create folders and subfolders in it, and make
objects publicly accessible using the Cloud command line.
UNIT III APIs AND SECURITY IN THE CLOUD 6+6
The purpose of APIs – API Services - Managed message services - Introduction to security
in the cloud - The shared security model - Encryption options - Authentication and
authorization with Cloud IAM - Identify Best Practices for Authorization using Cloud IAM.
List of Exercise/Experiments:
1. Deploy a sample API with any of the API services.
2. Publish messages with managed message service using the Python client library.
3. Create two users. Assign a role to a second user and remove assigned roles associated
with Cloud IAM. Explore how granting and revoking permissions works from Cloud Project
Owner and Viewer roles.
UNIT IV NETWORKING, AUTOMATION AND MANAGEMENT TOOLS 6+6
Introduction to networking in the cloud - Defining a Virtual Private Cloud - Public and private
IP address basics - Cloud network architecture - Routes and firewall rules in the cloud -
Multiple VPC networks - Building hybrid clouds using VPNs - Different options for load
balancing - Introduction to Infrastructure as Code - Terraform - Monitoring and management
tools.

List of Exercise/Experiments:
1. Create several VPC networks and VM instances and test connectivity across networks.
2. Create two NGINX web servers and control external HTTP access to the web servers using
tagged firewall rules.
3. Configure an HTTP Load Balancer with global backends. Stress test the Load Balancer and
denylist the stress test IP.
4. Create two managed instance groups in the same region. Then, configure and test an
Internal Load Balancer with the instance groups as the backends.
5. Monitor a Compute Engine virtual machine (VM) instance with Cloud Monitoring by
creating an uptime check, an alerting policy, a dashboard, and a chart.
UNIT V BIG DATA AND MACHINE LEARNING SERVICES 6+6
Introduction to big data managed services in the cloud - Leverage big data operations - Build
Extract, Transform, and Load pipelines - Enterprise Data Warehouse Services - Introduction
to machine learning in the cloud - Building bespoke machine learning models with AI Platform
- Pre-trained machine learning APIs.

List of Exercise/Experiments:
1. Create a cluster, run a simple Apache Spark job in the cluster, then modify the number
of workers in the cluster.
2. Create a streaming pipeline using one of the cloud services.
3. Set up your Python development environment, get the relevant SDK for Python, and run
an example pipeline using the Cloud Console.
4. Use a cloud-based data preparation tool to manipulate a dataset. Import datasets, correct
mismatched data, transform data, and join data.
5. Utilize a cloud-based data processing and analysis tool for data exploration and use a
machine learning platform to train and deploy a custom TensorFlow Regressor model for
predicting customer lifetime value.
TOTAL: 60 PERIODS
5. COURSE OUTCOMES

At the end of this course, the students will be able to:


CO1: Describe the different ways a user can interact with Cloud.

CO2: Discover the different compute options in Cloud and implement a variety of
structured and unstructured storage models.

CO3: Discuss the different application managed service options in the cloud and
outline how security is administered in the cloud.

CO4: Demonstrate how to build secure networks in the cloud and identify cloud
automation and management tools.

CO5: Discover a variety of managed big data services in the cloud.


6. CO - PO / PSO MAPPING

CO    HKL  PO-1  PO-2  PO-3  PO-4  PO-5  PO-6  PO-7  PO-8  PO-9  PO-10  PO-11  PO-12  PSO1  PSO2  PSO3
Level      K3    K4    K5    K5    A3    A2    A3    A3    A3    A3     A2     K3,K4,K5
CO1   K3   2     1     1     -     -     -     -     -     -     2      2      2      2     2     2
CO2   K3   3     3     3     -     -     -     -     2     2     2      2      2      2     2     2
CO3   K3   3     3     3     -     -     2     -     2     2     2      2      2      2     2     2
CO4   K3   3     3     3     -     -     -     -     2     2     2      2      2      2     2     2
CO5   K3   3     3     3     -     -     2     -     -     2     2      2      2      2     2     2

Correlation Level:
1. Slight (Low)
2. Moderate (Medium)
3. Substantial (High)
If there is no correlation, put "-".
7. LECTURE PLAN

Sl.No  Topic                                                              Periods  Proposed Date  CO   Taxonomy Level  Mode of Delivery
1      The purpose of APIs                                                1        4.02.2025      CO3  K2              Chalk & talk
2      Cloud Endpoints                                                    1        5.02.2025      CO3  K3              PPT/Demo
3      Using Apigee Edge - Managed message services                       1        6.02.2025      CO3  K2              PPT/Demo
4      Cloud Pub/Sub                                                      1        7.02.2025      CO3  K3              PPT/Demo
5      Introduction to security in the cloud - The shared security model  1        8.02.2025      CO3  K2              PPT
6      Encryption options                                                 1        12.02.2025     CO3  K2              PPT/Demo
7      Authentication and authorization with Cloud IAM                    1        14.02.2025     CO3  K3              PPT/Demo
8      Lab: User Authentication: Cloud Identity-Aware Proxy               1        15.02.2025     CO3  K3              Demo
9      Identify Best Practices for Authorization using Cloud IAM          1        18.02.2025     CO3  K2              PPT/Demo
8. ACTIVITY BASED LEARNING

Role play on the topic authentication and authorization with IAM.
9. UNIT III - LECTURE NOTES
APIs AND SECURITY IN THE CLOUD

THE PURPOSE OF APIs:

Introduction to APIs:

 An application programming interface (API) is a way for two or more computer


programs to communicate with each other. It is a type of software interface,
offering a service to other pieces of software.

 A document or standard that describes how to build or use such a connection or


interface is called an API specification.
 APIs are used to simplify the way disparate software resources
communicate.

How an API works:


 An API is a set of defined rules that explain how computers or applications
communicate with one another.
 APIs sit between an application and the web server, acting as an intermediary
layer that processes data transfer between systems.
 A client application initiates an API call to retrieve information—also known as a
request. This request is processed from an application to the web server via the
API’s Uniform Resource Identifier (URI) and includes a request verb, headers, and
sometimes, a request body.
 After receiving a valid request, the API makes a call to the external program or
web server. The server sends a response to the API with the requested
information.

 The API transfers the data to the initial requesting application.
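The request described above bundles a request verb, a URI, headers, and an optional body. This can be sketched with Python's standard library; the endpoint URL and bearer token below are hypothetical, and the request is only constructed, never sent.

```python
import json
import urllib.request

# Hypothetical endpoint; no network call is made in this sketch.
url = "https://api.example.com/v1/orders"

# Optional request body, serialized as JSON.
body = json.dumps({"item": "book", "quantity": 2}).encode("utf-8")

# The Request object combines the URI, request verb, headers, and body.
req = urllib.request.Request(
    url,
    data=body,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",  # placeholder authorization credential
    },
)

print(req.method)                      # the request verb
print(req.full_url)                    # the API's URI
print(req.get_header("Content-type"))  # a request header
```

Passing this object to `urllib.request.urlopen` would perform the actual call and return the server's response.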


APIs offer security by design because their position as middleman facilitates the
abstraction of functionality between two systems—the API endpoint decouples the
consuming application from the infrastructure providing the service. API calls usually
include authorization credentials to reduce the risk of attacks on the server, and an
API gateway can limit access to minimize security threats. Also, during the exchange,
HTTP headers, cookies, or query string parameters provide additional security layers
to the data.

For example, consider an API offered by a payment processing service. Customers can
enter their card details on the frontend of an application for an ecommerce store. The
payment processor doesn’t require access to the user’s bank account; the API creates a
unique token for this transaction and includes it in the API call to the server. This ensures
a higher level of security against potential hacking threats.

REST APIs:

 REpresentational State Transfer, or REST, is currently the most popular
architectural style for services.
 It outlines a key set of constraints and agreements that a service must comply
with. If a service complies with these REST constraints, it's said to be RESTful.
 APIs intended to be spread widely to consumers and deployed to devices with
limited computing resources, like mobile, are well suited to a REST structure.

 REST APIs use HTTP requests to perform GET, PUT, POST, and DELETE
operations.
 For example, a REST API would use a GET request to retrieve a record, a POST
request to create one, a PUT request to update a record, and a DELETE request
to delete one.
 All HTTP methods can be used in API calls. A well-designed REST API is similar to
a website running in a web browser with built-in HTTP functionality.
 The state of a resource at any particular instant, or timestamp, is known as the
resource representation.
 This information can be delivered to a client in virtually any format including
JavaScript Object Notation (JSON), HTML, XLT, Python, PHP, or plain text.
 JSON is popular because it’s readable by both humans and machines—and it is
programming language-agnostic.
 Request headers and parameters are also important in REST API calls because
they include important identifier information such as metadata, authorizations,
uniform resource identifiers (URIs), caching, cookies and more.

 Request headers and response headers, along with conventional HTTP status
codes, are used within well-designed REST APIs.
 One of the main reasons REST APIs work well with the cloud is due to their
stateless nature. State information does not need to be stored or referenced for
the API to run.
 An authorization framework like OAuth 2.0 can help limit the privileges of third-
party applications.
 Using a timestamp in the HTTP header, an API can also reject any request that
arrives after a certain time period.
 Parameter validation and JSON Web Tokens are other ways to ensure that only
authorized clients can access the API.
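The verb-to-operation mapping described above (GET to retrieve, POST to create, PUT to update, DELETE to delete) can be sketched as a tiny in-memory model. The `handle` dispatcher and the "records" resource here are purely illustrative, not part of any framework.

```python
# Minimal in-memory model of REST verb semantics over a "records" resource.
records = {}
next_id = 1

def handle(method, record_id=None, body=None):
    """Dispatch an HTTP-style verb to the matching CRUD operation."""
    global next_id
    if method == "GET":                  # retrieve a record
        return records.get(record_id)
    if method == "POST":                 # create a new record
        rid = next_id
        next_id += 1
        records[rid] = body
        return rid
    if method == "PUT":                  # update (replace) a record
        records[record_id] = body
        return body
    if method == "DELETE":               # delete a record
        return records.pop(record_id, None)
    raise ValueError(f"unsupported method: {method}")

rid = handle("POST", body={"name": "Ada"})
print(handle("GET", rid))       # → {'name': 'Ada'}
handle("PUT", rid, {"name": "Grace"})
handle("DELETE", rid)
print(handle("GET", rid))       # → None
```

A real REST API would route these verbs from incoming HTTP requests; the model only shows how each verb maps to an operation on the resource's state.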

Challenges of deploying and managing APIs:

When deploying and managing APIs on your own, there are several issues to consider.

 Interface Definition
 Authentication and Authorization
 Logging and Monitoring
 Management and Scalability
CLOUD ENDPOINTS:

Endpoints is an API management system that helps you secure, monitor, analyze, and
set quotas on your APIs using the same infrastructure Google uses for its own APIs.

Cloud Endpoints is a distributed API management system that uses the
Extensible Service Proxy, a service proxy that runs in its own Docker container.
It helps to create and maintain the most demanding APIs with low latency and high
performance. After you deploy your API to Endpoints, you can use the Cloud Endpoints
Portal to create a developer portal, a website that users of your API can access to view
documentation and interact with your API. Cloud Endpoints provides an API console,
hosting, logging, monitoring, and other features to help you create, share, maintain, and
secure your APIs. Cloud Endpoints supports applications running in App Engine, Google

Kubernetes Engine, and Compute Engine. Clients include Android, iOS, and Javascript.

The Endpoints options:

To have your API managed by Cloud Endpoints, you have three options, depending on
where your API is hosted and the type of communications protocol your API uses:

 Cloud Endpoints for OpenAPI
 Cloud Endpoints for gRPC
 Cloud Endpoints Frameworks for the App Engine standard environment

Cloud Endpoints for OpenAPI

 Endpoints works with the Extensible Service Proxy (ESP) and the Extensible Service
Proxy V2 (ESPv2) to provide API management.
 Endpoints supports version 2 of the OpenAPI Specification, the industry standard
for defining REST APIs.
 The API can be implemented using any publicly available REST framework such
as Django or Jersey, and is described in a JSON or YAML file referred to as an
OpenAPI document.
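A minimal OpenAPI 2.0 document for a hypothetical echo API might look like the sketch below; the service name, path, and schema are all illustrative.

```yaml
# Hypothetical OpenAPI 2.0 (Swagger) document for a Cloud Endpoints API.
swagger: "2.0"
info:
  title: Echo API
  version: "1.0.0"
# host would be the Endpoints service name (illustrative value).
host: echo-api.endpoints.example-project.cloud.goog
schemes:
  - https
paths:
  /echo:
    post:
      summary: Echo back the message sent in the request body.
      operationId: echo
      consumes:
        - application/json
      produces:
        - application/json
      parameters:
        - in: body
          name: message
          schema:
            type: object
            properties:
              text:
                type: string
      responses:
        "200":
          description: The echoed message.
```

Deploying such a document to Endpoints is what lets ESP enforce authentication, monitoring, and logging for the described paths.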

Extensible Service Proxy:

The Extensible Service Proxy (ESP) is an Nginx-based high-performance, scalable proxy


that runs in front of an OpenAPI or gRPC API backend and provides API management
features such as authentication, monitoring, and logging.

Extensible Service Proxy V2:

The Extensible Service Proxy V2 (ESPv2) is an Envoy-based high-performance, scalable


proxy that runs in front of an OpenAPI or gRPC API backend and provides API
management features such as authentication, monitoring, and logging.

ESPv2 supports version 2 of the OpenAPI Specification and gRPC Specifications.

Cloud Endpoints for gRPC

gRPC is a high performance, open-source universal RPC framework, developed by Google.


In gRPC, a client application can directly call methods on a server application on a
different machine as if it was a local object, making it easier to create distributed
applications and services.

With Endpoints for gRPC, you can use the API management capabilities of Endpoints to
add an API console, monitoring, hosting, tracing, authentication, and more to your gRPC
services. In addition, once you specify special mapping rules, ESP and ESPv2 translate
RESTful JSON over HTTP into gRPC requests. This means that you can deploy a gRPC
server managed by Endpoints and call its API using a gRPC or JSON/HTTP client, giving
you much more flexibility and ease of integration with other systems.
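The "special mapping rules" mentioned above are declared in the gRPC service definition itself. A hypothetical sketch using the `google.api.http` annotation (service and message names are illustrative):

```proto
// Hypothetical gRPC service with an HTTP mapping rule so that ESP/ESPv2
// can transcode RESTful JSON over HTTP into gRPC requests.
syntax = "proto3";

package example.bookstore.v1;

import "google/api/annotations.proto";

service Bookstore {
  // A JSON/HTTP client calling GET /v1/shelves/{shelf} is transcoded
  // into this RPC; a gRPC client can call it directly.
  rpc GetShelf(GetShelfRequest) returns (Shelf) {
    option (google.api.http) = {
      get: "/v1/shelves/{shelf}"
    };
  }
}

message GetShelfRequest {
  int64 shelf = 1;
}

message Shelf {
  int64 id = 1;
  string theme = 2;
}
```

The path template binds the `shelf` URL segment to the matching request field, which is how one deployed server can serve both gRPC and JSON/HTTP clients.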
Cloud Endpoints Frameworks
Cloud Endpoints Frameworks is a web framework for the App Engine standard Python 2.7
and Java 8 runtime environments. Cloud Endpoints Frameworks provides the tools and
libraries that allow you to generate REST APIs and client libraries for your application.

Endpoints Frameworks includes a built-in API gateway that provides API management
features that are comparable to the features that ESP provides for Endpoints for OpenAPI
and Endpoints for gRPC.

Endpoints Frameworks intercepts all requests and performs any necessary checks (such
as authentication) before forwarding the request to the API backend. When the backend
responds, Endpoints Frameworks gathers and reports telemetry. Metrics can be viewed
for API on the Endpoints Services page in the Google Cloud console.

Endpoints Frameworks can be used with or without API management functionality.


APIGEE EDGE:

Apigee Edge is a platform for developing and managing APIs. By fronting services with a
proxy layer, Edge provides an abstraction or facade for your backend service APIs and
provides security, rate limiting, quotas, analytics, and more.

Apigee is an API gateway management framework owned by Google which helps in


exchanging data between different cloud applications and services. Many services and
sites available to the users are delivered through RESTful APIs, API gateways act as a
medium to connect these sites and services with data and feeds, and proper
communication capabilities. In simple words, Apigee is a tool to manage an API gateway
for developing, deploying, and producing user-friendly apps.

High-level architecture of Apigee:


Apigee consists of the following primary components:

 Apigee services: The APIs that you use to create, manage, and deploy your API
proxies.

 Apigee runtime: A set of containerized runtime services in a Kubernetes cluster


that Google maintains. All API traffic passes through and is processed by these
services.

In addition, Apigee uses other components including:

 GCP services: Provides identity management, logging, analytics, metrics, and


project management functions.

 Back-end services: Used by your apps to provide runtime access to data for
your API proxies.

Flavors of Apigee:

Apigee comes in the following flavors:

Apigee: A cloud version hosted by Apigee in which Apigee maintains the environment,
allowing you to concentrate on building your services and defining the APIs to those
services.
Apigee hybrid: A hybrid version consisting of a runtime plane installed on-premises or in
a cloud provider of your choice, and a management plane running in Apigee's cloud. In
this model, API traffic and data are confined within your own enterprise-approved
boundaries.
Make services available through Apigee:

Apigee enables you to provide secure access to your services with a well-defined API that
is consistent across all of your services, regardless of service implementation. A consistent
API:

 Makes it easy for app developers to consume your services.


 Enables you to change the backend service implementation without affecting the
public API.
 Enables you to take advantage of the analytics, developer portal, and other
features built into Apigee.

The following image shows an architecture with Apigee handling the requests from client
apps to your backend services:
Rather than having app developers consume your services directly, they access an API
proxy created on Apigee. The API proxy functions as a mapping of a publicly available
HTTP endpoint to your backend service. By creating an API proxy you let Apigee handle
the security and authorization tasks required to protect your services, as well as to analyze
and monitor those services.

Because app developers make HTTP requests to an API proxy, rather than directly to
your services, developers do not need to know anything about the implementation of
your services. All the developer needs to know is:

 The URL of the API proxy endpoint.


 Any query parameters, headers, or body parameters passed in a request.
 Any required authentication and authorization credentials.
 The format of the response, including the response data format, such as XML or
JSON.
The API proxy isolates the app developer from your backend service. Therefore you are
free to change the service implementation as long as the public API remains consistent.
For example, you can change a database implementation, move your services to a new
host, or make any other changes to the service implementation. By maintaining a
consistent frontend API, existing client apps will continue to work regardless of changes
on the backend.

API Gateway:
API Gateway enables you to provide secure access to your backend services through a
well-defined REST API that is consistent across all of your services, regardless of the
service implementation. Clients consume your REST APIs to implement standalone apps
for a mobile device or tablet, through apps running in a browser, or through any other
type of app that can make a request to an HTTP endpoint.
MANAGED MESSAGE SERVICES:

Messaging services provide the interconnectivity between components and
applications that are written in different languages and hosted in the same cloud,
multiple clouds, or on-premises.

PUB/SUB:

Pub/Sub allows services to communicate asynchronously, with latencies on the order of


100 milliseconds.

Pub/Sub is used for streaming analytics and data integration pipelines to ingest and
distribute data. It's equally effective as a messaging-oriented middleware for service
integration or as a queue to parallelize tasks.

Pub/Sub enables you to create systems of event producers and consumers, called
publishers and subscribers. Publishers communicate with subscribers asynchronously by
broadcasting events, rather than by synchronous remote procedure calls (RPCs).

Publishers send events to the Pub/Sub service, without regard to how or when these
events are to be processed. Pub/Sub then delivers events to all the services that react to
them. In systems communicating through RPCs, publishers must wait for subscribers to
receive the data. However, the asynchronous integration in Pub/Sub increases the
flexibility and robustness of the overall system.

Types of Pub/Sub services:

Pub/Sub consists of two services:

Pub/Sub service: This messaging service is the default choice for most users and
applications. It offers the highest reliability and largest set of integrations, along with
automatic capacity management. Pub/Sub guarantees synchronous replication of all data
to at least two zones and best-effort replication to a third additional zone.

Pub/Sub Lite service: A separate but similar messaging service built for lower cost. It
offers lower reliability compared to Pub/Sub. It offers either zonal or regional topic
storage. Zonal Lite topics are stored in only one zone. Regional Lite topics replicate data
to a second zone asynchronously. Also, Pub/Sub Lite requires you to pre-provision and
manage storage and throughput capacity. Consider Pub/Sub Lite only for applications
where achieving a low cost justifies some additional operational work and lower reliability.

The Basics of a Publish/Subscribe Service:

 Topic. A named resource to which messages are sent by publishers.


 Subscription. A named resource representing the stream of messages from a
single, specific topic, to be delivered to the subscribing application.
 Message. The combination of data and (optional) attributes that a publisher
sends to a topic and is eventually delivered to subscribers.
 Message attribute. A key-value pair that a publisher can define for a message.
For example, key iana.org/language_tag and value en could be added to

messages to mark them as readable by an English-speaking subscriber.


 Publisher. An application that creates and sends messages to a single or multiple
topics.
 Subscriber. An application with a subscription to a single or multiple topics to
receive messages from it.
 Acknowledgment (or "ack"). A signal sent by a subscriber to Pub/Sub after it
has received a message successfully. Acknowledged messages are removed from
the subscription message queue.
 Push and pull. The two message delivery methods. A subscriber receives
messages either by Pub/Sub pushing them to the subscriber chosen endpoint, or
by the subscriber pulling them from the service.
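The terms above fit together in a small in-memory model (an illustrative sketch, not the Pub/Sub client library): publishers send messages to a topic, each subscription holds its own copy, and an ack removes a message from that subscription's queue.

```python
from collections import deque

class Topic:
    """A named resource that fans messages out to its subscriptions."""
    def __init__(self, name):
        self.name = name
        self.subscriptions = []

    def publish(self, data, attributes=None):
        # Each subscription receives its own copy of the message.
        message = {"data": data, "attributes": attributes or {}}
        for sub in self.subscriptions:
            sub.queue.append(message)

class Subscription:
    """A stream of messages from a single topic, with pull + ack semantics."""
    def __init__(self, name, topic):
        self.name = name
        self.queue = deque()
        topic.subscriptions.append(self)

    def pull(self):
        # Deliver the oldest unacknowledged message (redelivered until acked).
        return self.queue[0] if self.queue else None

    def ack(self):
        # Acknowledged messages are removed from the subscription queue.
        if self.queue:
            self.queue.popleft()

topic = Topic("orders")
sub_a = Subscription("billing", topic)
sub_b = Subscription("shipping", topic)

topic.publish("order-42", {"iana.org/language_tag": "en"})
print(sub_a.pull()["data"])   # → order-42
sub_a.ack()
print(sub_a.pull())           # → None
print(sub_b.pull()["data"])   # → order-42 (its own copy is still queued)
```

Note how acking on one subscription leaves the other subscription's copy untouched, which is exactly the topic/subscription decoupling the definitions describe.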

The following diagram shows the basic flow of messages through Pub/Sub:

In this scenario, there are two publishers publishing messages on a single topic. There
are two subscriptions to the topic. The first subscription has two subscribers, meaning
messages will be load-balanced across them, with each subscriber receiving a subset
of the messages. The second subscription has one subscriber that will receive all of
the messages. The bold letters represent messages. Message A comes from Publisher 1
and is sent to Subscriber 2 via Subscription 1, and to Subscriber 3 via Subscription 2.
Message B comes from Publisher 2 and is sent to Subscriber 1 via Subscription 1 and
to Subscriber 3 via Subscription 2.

Integrations:

Pub/Sub has many integrations with other Google Cloud products to create a fully
featured messaging system:

Stream processing and data integration. Supported by Dataflow, including Dataflow


templates and SQL, which allow processing and data integration into BigQuery and data
lakes on Cloud Storage. Dataflow templates for moving data from Pub/Sub to Cloud
Storage, BigQuery, and other products are available in the Pub/Sub and Dataflow UIs in
the Google Cloud console. Integration with Apache Spark, particularly when managed
with Dataproc is also available. Visual composition of integration and processing pipelines
running on Spark + Dataproc can be accomplished with Data Fusion.

Monitoring, Alerting and Logging. Supported by Monitoring and Logging products.

Authentication and IAM. Pub/Sub relies on a standard OAuth authentication used by


other Google Cloud products and supports granular IAM, enabling access control for
individual resources.

APIs. Pub/Sub uses standard gRPC and REST service API technologies along with client
libraries for several languages.

Triggers, notifications, and webhooks. Pub/Sub offers push-based delivery of


messages as HTTP POST requests to webhooks. You can implement workflow automation
using Cloud Functions or other serverless products.

Orchestration. Pub/Sub can be integrated into multistep serverless Workflows


declaratively. Big data and analytic orchestration is often done with Cloud Composer, which
supports Pub/Sub triggers. You can also integrate Pub/Sub with Application Integration
(Preview) which is an Integration-Platform-as-a-Service (iPaaS) solution. Application
Integration provides a Pub/Sub trigger to trigger or start integrations.

Integration Connectors. (Preview) These connectors let you connect to various data
sources. With connectors, both Google Cloud services and third-party business
applications are exposed to your integrations through a transparent, standard interface.
For Pub/Sub, you can create a Pub/Sub connection for use in your integrations.
Publisher-subscriber relationships can be one-to-many (fan-out), many-to-one (fan-in),
and many-to-many, as shown in the following diagram:

The following diagram illustrates how a message passes from a publisher to a subscriber.
For push delivery, the acknowledgment is implicit in the response to the push request,
while for pull delivery it requires a separate RPC.
Pub/Sub Basic Architecture:

The system is designed to be horizontally scalable, where an increase in the number of


topics, subscriptions, or messages can be handled by increasing the number of instances
of running servers.

Pub/Sub servers run in all Google Cloud regions around the world. This allows the service
to offer fast, global data access, while giving users control over where messages are
stored. Cloud Pub/Sub offers global data access in that publisher and subscriber clients
are not aware of the location of the servers to which they connect or how those services
route the data.

Pub/Sub’s load balancing mechanisms direct publisher traffic to the nearest Google Cloud
data center where data storage is allowed.
Any individual message is stored in a single region. However, a topic may have messages
stored in many regions. When a subscriber client requests messages published to this
topic, it connects to the nearest server which aggregates data from all messages
published to the topic for delivery to the client.

Pub/Sub is divided into two primary parts: the data plane, which handles moving
messages between publishers and subscribers, and the control plane, which handles
the assignment of publishers and subscribers to servers on the data plane. The servers
in the data plane are called forwarders, and the servers in the control plane are called
routers. When publishers and subscribers are connected to their assigned forwarders,
they do not need any information from the routers (as long as those forwarders remain
accessible). Therefore, it is possible to upgrade the control plane of Pub/Sub without
affecting any clients that are already connected and sending or receiving messages.

Control Plane:

The Pub/Sub control plane distributes clients to forwarders in a way that provides
scalability, availability, and low latency for all clients. Any forwarder is capable of serving
clients for any topic or subscription. When a client connects to Pub/Sub, the router
decides the data centers the client should connect to based on shortest network distance,
a measure of the latency on the connection between two points.

The router provides the client with an ordered list of forwarders it can consider connecting
to. This ordered list may change based on forwarder availability and the shape of the
load from the client.

A client takes this list of forwarders and connects to one or more of them. The client
prefers connecting to the forwarders most recommended by the router, but also takes
into consideration any failures that have occurred
Data Plane:

The data plane receives messages from publishers and sends them to clients.

In general, a message goes through these steps:

1. A publisher sends a message.


2. The message is written to storage.
3. Pub/Sub sends an acknowledgement to the publisher that it has received the
message and guarantees its delivery to all attached subscriptions.
4. At the same time as writing the message to storage, Pub/Sub delivers it to
subscribers.
5. Subscribers send an acknowledgement to Pub/Sub that they have processed the
message.
6. Once at least one subscriber for each subscription has acknowledged the message,
Pub/Sub deletes the message from storage.
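The six steps above can be sketched as a toy, single-process model. All class and method names here are invented for illustration; real Pub/Sub is a distributed service accessed through client libraries, and acknowledgements flow over the network rather than as return values.

```python
# A toy, in-memory model of the Pub/Sub delivery steps described above.

class ToyTopic:
    def __init__(self, name):
        self.name = name
        self.storage = {}        # message_id -> message body (step 2)
        self.pending_acks = {}   # message_id -> subscriptions yet to ack
        self.subscriptions = {}  # subscription name -> delivery callback
        self._next_id = 0

    def subscribe(self, sub_name, callback):
        self.subscriptions[sub_name] = callback

    def publish(self, data):
        # Steps 1-2: receive the message and write it to storage.
        msg_id = self._next_id
        self._next_id += 1
        self.storage[msg_id] = data
        self.pending_acks[msg_id] = set(self.subscriptions)
        # Step 3: the returned id stands in for the publisher acknowledgement.
        # Step 4: deliver the message to every attached subscription.
        for sub_name, callback in self.subscriptions.items():
            callback(msg_id, data)
        return msg_id

    def ack(self, sub_name, msg_id):
        # Step 5: a subscriber acknowledges that it processed the message.
        self.pending_acks[msg_id].discard(sub_name)
        # Step 6: once every subscription has acked, delete it from storage.
        if not self.pending_acks[msg_id]:
            del self.storage[msg_id]
            del self.pending_acks[msg_id]

received = []
topic = ToyTopic("orders")
topic.subscribe("audit", lambda mid, d: received.append(("audit", d)))
topic.subscribe("billing", lambda mid, d: received.append(("billing", d)))

mid = topic.publish("order-123")
topic.ack("audit", mid)
assert mid in topic.storage       # still stored: billing has not acked yet
topic.ack("billing", mid)
assert mid not in topic.storage   # all subscriptions acked: message deleted
```

The point of the sketch is the retention rule in step 6: a message is removed from storage only after every subscription, not every subscriber, has acknowledged it.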

Different messages for a single topic and subscription can flow through many publishers,
subscribers, publishing forwarders, and subscribing forwarders. Publishers can publish to
multiple forwarders simultaneously and subscribers may connect to multiple subscribing
forwarders to receive messages. Therefore, the flow of messages through connections
among publishers, subscribers, and forwarders can be complex. The following diagram
shows how messages could flow for a single topic and subscription, where different colors
indicate the different paths messages may take from publishers to subscribers:
INTRODUCTION TO SECURITY IN THE CLOUD:

Cloud security refers to a broad set of policies, technologies, applications, and controls
used to protect virtualized IP addresses, data, applications, services, and the associated
infrastructure of cloud computing.

The five layers of protection Google provides to keep customers' data safe:

1. Hardware infrastructure
2. Service deployment
3. Storage services
4. Internet communication
5. Operational security
At the hardware infrastructure layer:

Hardware design and provenance: Both the server boards and the networking
equipment in Google data centers are custom designed by Google. Google also designs
custom chips, including a hardware security chip that's currently being deployed on both
servers and peripherals.

Secure boot stack: Google server machines use various technologies to ensure that
they are booting the correct software stack, such as cryptographic signatures over the
BIOS, bootloader, kernel, and base operating system image.

Premises security: Google designs and builds its own data centers, which incorporate
multiple layers of physical security protections. Access to these data centers is limited to
only a small fraction of Google employees. Google also hosts some servers in third-party
data centers, where it ensures that there are Google-controlled physical security
measures on top of the security layers provided by the data center operator.

At the service deployment layer:


Encryption of inter-service communication: Google’s infrastructure provides
cryptographic privacy and integrity for remote procedure call (“RPC”) data on the
network. Google’s services communicate with each other using RPC calls. The
infrastructure automatically encrypts all infrastructure RPC traffic which goes between
data centers. Google has started to deploy hardware cryptographic accelerators that will
allow it to extend this default encryption to all infrastructure RPC traffic inside Google
data centers.

User identity: Google’s central identity service, which usually manifests to end users as
the Google login page, goes beyond asking for a simple username and password. The
service also intelligently challenges users for additional information based on risk factors
such as whether they have logged in from the same device or a similar location in the
past. Users also have the option of employing secondary factors when signing in,
including devices based on the Universal 2nd Factor (U2F) open standard.

At the storage services layer:

Encryption at rest: Most applications at Google access physical storage (in other words,
“file storage”) indirectly by using storage services, and encryption (using centrally
managed keys) is applied at the layer of these storage services. Google also enables
hardware encryption support in hard drives and SSDs.

At the internet communication layer:


Google Front End (GFE): Google services that want to make themselves available on
the internet register themselves with an infrastructure service called the Google Front
End, which ensures that all TLS connections are terminated using a public-private key
pair and an X.509 certificate from a Certificate Authority (CA), and follows best practices
such as supporting perfect forward secrecy. The GFE also applies protections against
Denial of Service attacks.

Denial of Service (DoS) protection: The sheer scale of its infrastructure enables
Google to simply absorb many DoS attacks. Google also has multi-tier, multi-layer DoS
protections that further reduce the risk of any DoS impact on a service running behind a
GFE.
Finally, at Google’s operational security layer:

Intrusion detection: Rules and machine intelligence give Google’s operational security
teams warnings of possible incidents. Google conducts Red Team exercises to measure
and improve the effectiveness of its detection and response mechanisms.

Reducing insider risk: Google aggressively limits and actively monitors the activities of
employees who have been granted administrative access to the infrastructure.

Employee U2F use: To guard against phishing attacks against Google employees,
employee accounts require use of U2F-compatible security keys.

Software development practices: Google employs central source control and requires
two-party review of new code. Google also provides its developers with libraries that
prevent them from introducing certain classes of security bugs. Google also runs a
Vulnerability Rewards Program that pays anyone who discovers and reports bugs in
Google's infrastructure or applications.

THE SHARED SECURITY MODEL

Cloud computing and storage provide users with capabilities to store and process
their data in third-party data centers. Organizations use the cloud in a variety of different
service models (with acronyms such as SaaS, PaaS, and IaaS) and deployment models
(private, public, hybrid, and community).

Security concerns associated with cloud computing are typically categorized in two ways:
as security issues faced by cloud providers (organizations providing software-, platform-,
or infrastructure-as-a-service via the cloud) and security issues faced by their customers
(companies or organizations who host applications or store data on the cloud).

Security responsibilities are shared between the customer and Google Cloud. This split
is often detailed in a cloud provider's "shared security responsibility model" or "shared
responsibility model." The provider must ensure that its infrastructure is secure and that
its clients' data and applications are protected, while the customer must take measures
to fortify their applications and use strong passwords and authentication measures.

When a customer deploys an application to their on-premises infrastructure, they are
responsible for the security of the entire stack: from the physical security of the hardware
and the premises in which it is housed, through the encryption of the data on disk and
the integrity of the network, all the way up to securing the content stored in those
applications.

But when they move an application to Google Cloud, Google handles many of the lower
layers of security, like the physical security, disk encryption, and network integrity.

The upper layers of the security stack, including the securing of data, remain the
customer’s responsibility. Google provides tools like the resource hierarchy and IAM to
help them define and implement policies, but ultimately this part is their responsibility.

Data access is usually the customer’s responsibility. They control who or what has access
to their data. Google Cloud provides tools that help them control this access, such as
Identity and Access Management, but these tools must be properly configured to protect
their data.

GOOGLE CLOUD ENCRYPTION OPTIONS

Several encryption options are available on Google Cloud. These range from simple
options with limited control to options that offer greater control and flexibility at the
cost of more complexity.

The simplest option is Google Cloud default encryption, followed by customer-managed
encryption keys (CMEK), and the option that provides the most control: customer-supplied
encryption keys (CSEK).
A fourth option is to encrypt your data locally before you store it in the cloud. This is
often called client-side encryption.

Google Cloud encrypts data in transit and at rest by default. Data in transit is encrypted
using Transport Layer Security (TLS), and data at rest is encrypted with 256-bit AES
keys. The encryption happens automatically.

Customer-managed encryption keys (CMEK):

 With customer-managed encryption keys, you manage the encryption keys that
protect your data on Google Cloud.
 Cloud Key Management Service, or Cloud KMS, automates and simplifies the
generation and management of encryption keys. The keys are managed by the
customer and never leave the cloud.
 Cloud KMS supports encryption, decryption, signing, and verification of data. It
supports both symmetric and asymmetric cryptographic keys and a variety of
popular algorithms.
 Cloud KMS lets you rotate keys manually and also automate rotation of keys
on a time-based interval.
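As an illustration of time-based rotation, here is a minimal sketch of a key manager that creates a new 256-bit key version once a rotation period elapses. All class and method names are invented; in Cloud KMS, rotation is a property configured on the key itself, not something application code implements.

```python
# Toy model of time-based key rotation (illustrative only, not the KMS API).
import secrets
import time

class ToyKeyRing:
    def __init__(self, rotation_period_s):
        self.rotation_period_s = rotation_period_s
        self.versions = []           # list of (created_at, key_bytes)
        self.rotate(now=0.0)         # create the first key version

    def rotate(self, now=None):
        # Create a new primary key version (manual or scheduled rotation).
        t = time.time() if now is None else now
        self.versions.append((t, secrets.token_bytes(32)))  # 256-bit key

    def primary(self, now=None):
        # Rotate automatically if the newest version is older than the period.
        t = time.time() if now is None else now
        created, _ = self.versions[-1]
        if t - created >= self.rotation_period_s:
            self.rotate(now=t)
        return self.versions[-1][1]

ring = ToyKeyRing(rotation_period_s=90 * 24 * 3600)   # 90-day rotation
k1 = ring.primary(now=0.0)
k2 = ring.primary(now=1 * 24 * 3600)    # within the period: same key
k3 = ring.primary(now=91 * 24 * 3600)   # period elapsed: new key version
assert k1 == k2 and k2 != k3
```

Note that old key versions are kept, as they would be in KMS: data encrypted under an earlier version must still be decryptable after rotation.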

Customer-supplied encryption keys (CSEK):

 Customer-supplied encryption keys give users more control over their keys, but
with greater management complexity.
 With CSEK, users use their own AES 256-bit encryption keys. They are responsible
for generating these keys.
 Users are responsible for storing the keys and providing them as part of Google
Cloud API calls.
 Google Cloud will use the provided key to encrypt the data before saving it. Google
guarantees that the key only exists in-memory and is discarded after use.

Persistent disks, such as those that back virtual machines, can be encrypted with
customer-supplied encryption keys. With CSEK for persistent disks, the data is encrypted
before it leaves the virtual machine. Even without CSEK or CMEK, persistent disks are still
encrypted. When a persistent disk is deleted, the keys are discarded, and the data is
rendered irrecoverable by traditional means.

Other encryption options:

To have more control over persistent disk encryption, users can create their own
persistent disks and redundantly encrypt them.

And finally, client-side encryption is always an option. With client-side encryption, users
encrypt data before they send it to Google Cloud. Neither the unencrypted data nor the
decryption keys leave their local device.
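A minimal illustration of that idea: the data is encrypted locally (here with a one-time pad for simplicity; production code would use an authenticated cipher such as AES-GCM via a vetted library) and only the ciphertext would be uploaded. The function names are invented for this sketch.

```python
# Client-side encryption sketch: encrypt before upload, keep the key local.
import secrets

def encrypt_locally(plaintext: bytes):
    # One-time pad: a random key as long as the data, used exactly once.
    key = secrets.token_bytes(len(plaintext))            # stays on the client
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt_locally(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

data = b"customer record"
key, blob = encrypt_locally(data)
# Only `blob` would be uploaded (for example, as a Cloud Storage object);
# `key` and the plaintext never leave the local device.
assert blob != data
assert decrypt_locally(key, blob) == data
```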

AUTHENTICATION AND AUTHORIZATION WITH CLOUD IAM

Authentication is the process of determining the identity of the principal attempting to
access a resource.

Authorization is the process of determining whether the principal or application
attempting to access a resource has been authorized for that level of access.

Google provides many APIs and services, which require authentication to access.

Identity and Access Management (IAM):

 IAM lets you grant granular access to specific Google Cloud resources and helps
prevent access to other resources. IAM lets you adopt the security principle of
least privilege, which states that nobody should have more permissions than they
actually need.
 With IAM, you manage access control by defining who (identity) has what access
(role) for which resource. For example, Compute Engine virtual machine instances,
Google Kubernetes Engine (GKE) clusters, and Cloud Storage buckets are all
Google Cloud resources.
 The organizations, folders, and projects that you use to organize your resources
are also resources.
 In IAM, permission to access a resource isn't granted directly to the end user.
Instead, permissions are grouped into roles, and roles are granted to authenticated
principals.
 An allow policy, also known as an IAM policy, defines and enforces what roles are
granted to which principals. Each allow policy is attached to a resource. When an
authenticated principal attempts to access a resource, IAM checks the resource's

allow policy to determine whether the action is permitted.

Access management has three main parts:

 Principal. A principal can be a Google Account (for end users), a service account
(for applications and compute workloads), a Google group, or a Google Workspace
account or Cloud Identity domain that can access a resource. Each principal has
its own identifier, which is typically an email address.

 Role. A role is a collection of permissions. Permissions determine what operations
are allowed on a resource. When you grant a role to a principal, you grant all the
permissions that the role contains.

 Policy. The allow policy is a collection of role bindings that bind one or more
principals to individual roles. When you want to define who (principal) has what
type of access (role) on a resource, you create an allow policy and attach it to the
resource.
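The principal/role/policy model above can be sketched in a few lines of Python. The classes are illustrative, not the IAM API; only the role and permission naming follows the real service.resource.verb convention.

```python
# Toy model of an IAM allow policy: roles are collections of permissions,
# bindings attach roles to principals, and an access check consults the
# policy attached to the resource.

ROLES = {
    "roles/pubsub.publisher": {"pubsub.topics.publish"},
    "roles/pubsub.subscriber": {"pubsub.subscriptions.consume",
                                "pubsub.topics.attachSubscription"},
}

class AllowPolicy:
    def __init__(self):
        self.bindings = []           # list of (role, set of principals)

    def add_binding(self, role, principals):
        self.bindings.append((role, set(principals)))

    def allows(self, principal, permission):
        # Permitted if any bound role both includes the principal and
        # contains the requested permission.
        return any(principal in members and permission in ROLES.get(role, set())
                   for role, members in self.bindings)

# Attach a policy to a (conceptual) topic resource.
policy = AllowPolicy()
policy.add_binding("roles/pubsub.publisher",
                   {"app@example.iam.gserviceaccount.com"})

assert policy.allows("app@example.iam.gserviceaccount.com",
                     "pubsub.topics.publish")
assert not policy.allows("app@example.iam.gserviceaccount.com",
                         "pubsub.subscriptions.consume")
```

The second assertion shows least privilege in action: the service account was bound only to the publisher role, so consuming from a subscription is denied.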
Concepts related to identity:

In IAM, you grant access to principals. Principals can be of the following types:

 Google Account
 Service account
 Google group
 Google Workspace account
 Cloud Identity domain
 All authenticated users
 All users

 Google Account - represents a developer, an administrator, or any other person
who interacts with Google Cloud. Any email address that's associated with a Google
Account can be an identity, including gmail.com addresses or addresses on other
domains.

 Service account - an account that is designed to be used only by a service or
application, not by a regular user.
 Google group - A Google group is a named collection of Google Accounts and
service accounts. Every Google group has a unique email address that's associated
with the group. Google Groups are a convenient way to apply access controls to
a collection of users. You can grant and change access controls for a whole group
at once instead of granting or changing access controls one at a time for individual
users or service accounts.

 Google Workspace account - A Google Workspace account represents a virtual
group of all of the Google Accounts that it contains. Google Workspace accounts
are associated with your organization's internet domain name, such
as example.com. When you create a Google Account for a new user, such
as username@example.com, that Google Account is added to the virtual group for
your Google Workspace account.
 Cloud Identity domain - A Cloud Identity domain is like a Google Workspace
account, because it represents a virtual group of all Google Accounts in an
organization. However, Cloud Identity domain users don't have access to Google
Workspace applications and features.
 All authenticated users - The value allAuthenticatedUsers is a special identifier
that represents all service accounts and all users on the internet who have
authenticated with a Google Account.
 All users - The value allUsers is a special identifier that represents anyone who is
on the internet, including authenticated and unauthenticated users.

Concepts related to access management:

When an authenticated principal attempts to access a resource, IAM checks the resource's
allow policy to determine whether the action is allowed.

Resource
 If a user needs access to a specific Google Cloud resource, you can grant the user
a role for that resource.
 IAM permissions can be granted at the project level.
 The permissions are then inherited by all resources within that project.
Permissions
Permissions determine what operations are allowed on a resource. In the IAM world,
permissions are represented in the form of service.resource.verb, for example,
pubsub.subscriptions.consume.

Permissions are not granted to users directly. Instead, the roles that contain the
appropriate permissions are identified, and then the roles are granted to the user.
Roles
A role is a collection of permissions. You cannot grant a permission to the user directly.
Instead, you grant them a role. When you grant a role to a user, you grant them all the
permissions that the role contains.

There are several kinds of roles in IAM:


 Basic roles: Basic roles are highly permissive roles that existed prior to the
introduction of IAM. Basic roles can be used to grant principals broad access to
Google Cloud resources. These roles are Owner, Editor, and Viewer.

Name: roles/viewer (Viewer)
Permissions: Permissions for read-only actions that do not affect state, such as
viewing (but not modifying) existing resources or data.

Name: roles/editor (Editor)
Permissions: All viewer permissions, plus permissions for actions that modify state,
such as changing existing resources.
Note: The Editor role contains permissions to create and delete resources for most
Google Cloud services. However, it does not contain permissions to perform all
actions for all services.

Name: roles/owner (Owner)
Permissions: All Editor permissions, plus permissions for the following actions:
 Manage roles and permissions for a project and all resources within the project.
 Set up billing for a project.

 Predefined roles: Predefined roles give granular access to specific Google Cloud
resources. These roles are created and maintained by Google. For example, the
predefined role Pub/Sub Publisher (roles/pubsub.publisher) provides access to
only publish messages to a Pub/Sub topic.

Example Compute Engine roles:

Compute Admin (roles/compute.admin) - Full control of all Compute Engine
resources.
Permissions:
compute.*
resourcemanager.projects.get
resourcemanager.projects.list
serviceusage.quotas.get
serviceusage.services.get
serviceusage.services.list

Compute Image User (roles/compute.imageUser)
Permissions:
compute.images.get
compute.images.getFromFamily
compute.images.list
compute.images.useReadOnly
resourcemanager.projects.get
resourcemanager.projects.list
serviceusage.quotas.get
serviceusage.services.get
serviceusage.services.list

Compute Instance Admin (beta) (roles/compute.instanceAdmin)
Permissions:
compute.acceleratorTypes.*
compute.addresses.createInternal
compute.addresses.deleteInternal
compute.addresses.get
compute.addresses.list
compute.disks.create
compute.disks.createSnapshot
compute.disks.delete
compute.disks.get
compute.disks.list
compute.disks.resize
compute.instanceGroupManagers.*
compute.instanceGroups.*
compute.instanceTemplates.*
compute.instances.*
compute.regions.*
compute.zones.*

Compute Load Balancer Admin (roles/compute.loadBalancerAdmin)
Permissions:
compute.addresses.*
compute.backendBuckets.*
compute.backendServices.*
compute.forwardingRules.*
compute.globalAddresses.*
compute.globalForwardingRules.*
compute.globalNetworkEndpointGroups.*
compute.healthChecks.*
compute.httpHealthChecks.*
compute.httpsHealthChecks.*

 Custom roles: Roles that you create to tailor permissions to the needs of your
organization when predefined roles don't meet your needs. IAM also lets you
create custom IAM roles. Custom roles help you enforce the principle of least
privilege, because they help to ensure that the principals in your organization have
only the permissions that they need.

Service Accounts:

A service account is a special type of Google account intended to represent a non-human
user that needs to authenticate and be authorized to access data in Google APIs.
A service account is used by an application or compute workload, such as a Compute
Engine virtual machine (VM) instance, rather than a person. Applications use service
accounts to make authorized API calls, authorized as either the service account itself, or
as Google Workspace or Cloud Identity users through domain-wide delegation.

A service account is identified by its email address, which is unique to the account.

Service accounts are used in scenarios such as:

 Running workloads on virtual machines (VMs).
 Running workloads on on-premises workstations or data centers that call Google
APIs.
 Running workloads that are not tied to the lifecycle of a human user.

Types of service accounts:

User-managed service accounts

 You can create user-managed service accounts in your project using the IAM API,
the Google Cloud console, or the Google Cloud CLI. You are responsible for
managing and securing these accounts.

 By default, you can create up to 100 user-managed service accounts in a project.
 When you create a user-managed service account in your project, you choose a
name for the service account. This name appears in the email address that
identifies the service account, which uses the following format:

service-account-name@project-id.iam.gserviceaccount.com
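Given the format above, the email address can be derived with a simple template. The account name and project id below are placeholders, not real accounts.

```python
# Build a service account email from its name and project id,
# following the format shown above.
def service_account_email(name: str, project_id: str) -> str:
    return f"{name}@{project_id}.iam.gserviceaccount.com"

print(service_account_email("build-bot", "my-project"))
# build-bot@my-project.iam.gserviceaccount.com
```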

Default service accounts

When you enable or use some Google Cloud services, they create user-managed service
accounts that enable the service to deploy jobs that access other Google Cloud resources.
These accounts are known as default service accounts.

Google-managed service accounts

 Some Google Cloud services need access to your resources so that they can act
on your behalf. For example, when you use Cloud Run to run a container, the
service needs access to any Pub/Sub topics that can trigger the container.

 To meet this need, Google creates and manages service accounts for many Google
Cloud services. These service accounts are known as Google-managed service
accounts. You might see Google-managed service accounts in your project's allow
policy, in audit logs, or on the IAM page in the Google Cloud console.

 Google-managed service accounts are not listed in the Service accounts page in
the Google Cloud console.

Managing service accounts:


 Service accounts can be thought of as both a resource and as an identity.
 When thinking of the service account as an identity, you can grant a role to a
service account, allowing it to access a resource (such as a project).
 When thinking of a service account as a resource, you can grant roles to other
users to access or manage that service account.

BEST PRACTICES FOR AUTHORIZATION USING CLOUD IAM:

 Use projects to group resources that share the same trust boundary.
 Check the policy granted on each resource and ensure to recognize the
inheritance.

 Because of inheritance, use the principle of least privilege when you grant roles.
 Finally, audit policies by using Cloud Audit Logs and audit the memberships of
groups that are used in policies.
10. ASSIGNMENT

1. How to create pub/sub topics and pub/sub subscription in GCP. (CO3, K3)
2. Publishing a message to a topic. (CO3, K2)
3. Use a pull subscriber to output individual topic messages. (CO3, K2)

4. Create two users. Log in as the first user, assign a role to the second user,
and then remove the assigned roles using Cloud IAM. (CO3, K2)

5. More specifically, sign in with two different sets of credentials to
experience how granting and revoking permissions works for the
Google Cloud Project Owner and Viewer roles. (CO3, K3)
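A command sketch for the Pub/Sub tasks (1-3) above, assuming the gcloud CLI is installed and authenticated against an active project; the topic and subscription names are examples, and results depend on the environment.

```shell
# Task 1: create a topic and a pull subscription attached to it.
gcloud pubsub topics create myTopic
gcloud pubsub subscriptions create mySub --topic=myTopic

# Task 2: publish a message to the topic.
gcloud pubsub topics publish myTopic --message="Hello, Pub/Sub"

# Task 3: pull and acknowledge one message from the subscription.
gcloud pubsub subscriptions pull mySub --auto-ack --limit=1
```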
11. PART A QUESTIONS AND ANSWERS
1. Define API.
 An application programming interface (API) is a way for two or more computer
programs to communicate with each other. It is a type of software interface,
offering a service to other pieces of software.
 A document or standard that describes how to build or use such a connection
or interface is called an API specification.
 APIs are used to simplify the way different, disparate software resources
communicate.

2. How does an API work?


 A client application initiates an API call to retrieve information—also known as
a request. This request is processed from an application to the web server via
the API’s Uniform Resource Identifier (URI) and includes a request verb,
headers, and sometimes, a request body.
 After receiving a valid request, the API makes a call to the external program or
web server. The server sends a response to the API with the requested
information.

 The API transfers the data to the initial requesting application.


3. What is REST API?
 REpresentational State Transfer, or REST, is currently the most popular
architectural style for services.
 It outlines a key set of constraints and agreements that a service must comply
with. If a service complies with these REST constraints, it’s said to be RESTful.
 APIs intended to be spread widely to consumers and deployed to devices with
limited computing resources, like mobile, are well suited to a REST structure.
 REST APIs use HTTP requests to perform GET, PUT, POST, and DELETE
operations.
4. List the challenges in deploying and managing APIs.
When deploying and managing APIs on your own, there are several issues to consider.

 Interface Definition
 Authentication and Authorization
 Logging and Monitoring
 Management and Scalability
5. What is Cloud Endpoint?
Cloud Endpoints is an API management system that helps you secure, monitor,
analyze, and set quotas on your APIs using the same infrastructure Google uses
for its own APIs.
Cloud Endpoints is a distributed API management system that uses a distributed
Extensible Service Proxy, which is a service proxy that runs in its own Docker

container. It helps to create and maintain the most demanding APIs with low
latency and high performance.
6. What are the cloud endpoints options to manage API?
To have your API managed by Cloud Endpoints, you have three options, depending
on where your API is hosted and the type of communications protocol your API uses:

 Cloud Endpoints for OpenAPI


 Cloud Endpoints for gRPC
 Cloud Endpoints Frameworks for the App Engine standard environment
7. What is ESP?
The Extensible Service Proxy (ESP) is an Nginx-based high-performance, scalable
proxy that runs in front of an OpenAPI or gRPC API backend and provides API

management features such as authentication, monitoring, and logging.


8. What is Cloud Endpoint framework?
Cloud Endpoints Frameworks is a web framework for the App Engine
standard Python 2.7 and Java 8 runtime environments. Cloud Endpoints
Frameworks provides the tools and libraries that allow you to generate REST APIs
and client libraries for your application.

9. Define Apigee Edge.


Apigee Edge is a platform for developing and managing APIs. By fronting services
with a proxy layer, Edge provides an abstraction or facade for your backend service
APIs and provides security, rate limiting, quotas, analytics, and more. Apigee is an
API gateway management framework owned by Google which helps in exchanging
data between different cloud applications and services.

10. What are the components of Apigee?

Apigee consists of the following components:

 Apigee services: The APIs that you use to create, manage, and deploy your API
proxies.

 Apigee runtime: A set of containerized runtime services in a Kubernetes cluster


that Google maintains. All API traffic passes through and is processed by these
services.

 GCP services: Provides identity management, logging, analytics, metrics, and


project management functions.

 Back-end services: Used by your apps to provide runtime access to data for
your API proxies.

11. How do you make the services available through Apigee?


Apigee enables you to provide secure access to your services with a well-defined
API that is consistent across all of your services, regardless of service
implementation. A consistent API:

 Makes it easy for app developers to consume your services.


 Enables you to change the backend service implementation without affecting the
public API.
 Enables you to take advantage of the analytics, developer portal, and other
features built into Apigee.

12. What is API Gateway?


API Gateway enables you to provide secure access to your backend services
through a well-defined REST API that is consistent across all of your services,
regardless of the service implementation. Clients consume your REST APIS to
implement standalone apps for a mobile device or tablet, through apps running in
a browser, or through any other type of app that can make a request to an HTTP

endpoint.

13. What is the use of pub/sub?


Pub/Sub allows services to communicate asynchronously, with latencies on the
order of 100 milliseconds. Pub/Sub is used for streaming analytics and data
integration pipelines to ingest and distribute data. Pub/Sub enables you to create
systems of event producers and consumers, called publishers and subscribers.

Publishers communicate with subscribers asynchronously by broadcasting events.


14. What are the types of pub/sub services?
Pub/Sub consists of two services:
Pub/Sub service: This messaging service is the default choice for most users
and applications. It offers the highest reliability and largest set of integrations,
along with automatic capacity management
Pub/Sub Lite service: A separate but similar messaging service built for lower
cost. It offers lower reliability compared to Pub/Sub. It offers either zonal or
regional topic storage.

15. Define topic and subscription.


Topic is a named resource to which messages are sent by publishers.
Subscription is a named resource representing the stream of messages from a
single, specific topic, to be delivered to the subscribing application.

16. Define publisher and subscriber.


Publisher is an application that creates and sends messages to a single or
multiple topics.
Subscriber is an application with a subscription to a single or multiple topics to
receive messages from it.

17. List the five layers of protection provided by Google.


The five layers of protection Google provides to keep customers' data safe:

1. Hardware infrastructure
2. Service deployment
3. Storage services
4. Internet communication
5. Operational security
18. What is shared security model?
Security concerns associated with cloud computing are typically categorized in two
ways: as security issues faced by cloud providers and security issues faced by their
customers. Security responsibilities are shared between the customer and Google
Cloud. The provider must ensure that their infrastructure is secure and that their
clients’ data and applications are protected, while the user must take measures to
fortify their application and use strong passwords and authentication measures.

19. What are the encryption options available in google cloud?


Several encryption options are available on Google Cloud. These range from simple
but with limited control, to greater control flexibility but with more complexity. The
simplest option is Google Cloud default encryption, followed by customer-managed
encryption keys (CMEK), and the option that provides the most control: customer-
supplied encryption keys (CSEK).
20. What is the role of KMS?
 Cloud Key Management Service, or Cloud KMS, automates and simplifies the
generation and management of encryption keys. The keys are managed by the
customer and never leave the cloud.
 Cloud KMS supports encryption, decryption, signing, and verification of data. It
supports both symmetric and asymmetric cryptographic keys and various popular
algorithms.

 Cloud KMS lets you both rotate keys manually and automate the rotation of keys
on a time-based interval.

21. Define IAM.


IAM grants granular access to specific Google Cloud resources and helps prevent
access to other resources. IAM lets you adopt the security principle of least privilege,
which states that nobody should have more permissions than they actually need.
With IAM, you manage access control by defining who (identity) has what
access (role) for which resource.
22. How a role is related to permission?
A role is a collection of permissions. Permissions determine what operations are
allowed on a resource. When you grant a role to a principal, you grant all the
permissions that the role contains.

23. Define policy in IAM.


The allow policy is a collection of role bindings that bind one or more principals to
individual roles. When you want to define who (principal) has what type of access
(role) on a resource, you create an allow policy and attach it to the resource.

24. What are the types of roles in IAM?


 Basic roles: Basic roles are highly permissive roles that existed prior to the
introduction of IAM. Basic roles can be used to grant principals broad access to
Google Cloud resources. These roles are Owner, Editor, and Viewer.
 Predefined roles: Predefined roles give granular access to specific Google Cloud
resources. These roles are created and maintained by Google.

 Custom roles: Roles that you create to tailor permissions to the needs of your
organization when the predefined roles don't meet your needs.

25. Define service accounts.
A service account is a special type of Google account intended to represent a non-
human user that needs to authenticate and be authorized to access data in Google
APIs. A service account is used by an application or compute workload, such as a
Compute Engine virtual machine (VM) instance, rather than a person. Applications
use service accounts to make authorized API calls, authorized as either the service
account itself, or as Google Workspace or Cloud Identity users through domain-
wide delegation. A service account is identified by its email address, which is
unique to the account.
26. When can service accounts be used?
Service accounts are used in scenarios such as:
 Running workloads on virtual machines (VMs).
 Running workloads on on-premises workstations or in data centers that call Google
APIs.
 Running workloads that are not tied to the lifecycle of a human user.
27. What are the types of service accounts?
 Default service accounts
 User-managed service accounts
 Google-managed service accounts
28. List the best practices for authorization using Cloud IAM.
 Use projects to group resources that share the same trust boundary.
 Check the policy granted on each resource, and make sure you recognize the
inheritance.
 Because of inheritance, use the principle of least privilege when you grant roles.
 Finally, audit policies by using Cloud Audit Logs, and audit the memberships of
groups that are used in policies.
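The inheritance point above can be illustrated with a toy resource hierarchy: a resource's effective bindings are the union of the allow policies on the resource itself and on all of its ancestors. Every identifier below is hypothetical.

```python
# Toy model of IAM policy inheritance down the resource hierarchy
# (organization -> project -> resource). All IDs are invented.
hierarchy = {
    "projects/demo": "organizations/acme",   # child -> parent
    "buckets/logs": "projects/demo",
}
policies = {
    "organizations/acme": {("user:admin@example.com", "roles/viewer")},
    "projects/demo": {("user:dev@example.com", "roles/editor")},
    "buckets/logs": set(),                   # nothing granted directly
}

def effective_bindings(resource):
    """Union of (member, role) pairs granted on the resource and its ancestors."""
    bindings = set()
    while resource:
        bindings |= policies.get(resource, set())
        resource = hierarchy.get(resource)   # walk up; None ends the loop
    return bindings

logs_bindings = effective_bindings("buckets/logs")
# The bucket inherits both the project-level and the organization-level grants,
# which is why broad grants high in the hierarchy violate least privilege.
```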

12. PART B QUESTIONS

1. Explain the purpose of APIs and list the challenges in deploying and managing
APIs.
2. Explain how Cloud Endpoints is used in API management.
3. Explain in detail about Apigee Edge.
4. What is a managed messaging service? Briefly explain Pub/Sub.
5. How is security implemented in Google Cloud?
6. Explain in detail about IAM.
13. ONLINE CERTIFICATIONS

1. Cloud Digital Leader

Cloud Digital Leader | Google Cloud

2. Associate Cloud Engineer:

Associate Cloud Engineer Certification | Google Cloud

3. Google Cloud Computing Foundations Course

https://onlinecourses.nptel.ac.in/noc20_cs55/preview

4. Google Cloud Computing Foundations

https://learndigital.withgoogle.com/digitalgarage/course/gcloud-computing-foundations
14. REAL TIME APPLICATIONS

Modernizing Twitter's ad engagement analytics platform

As part of the daily business operations on its advertising platform, Twitter serves billions
of ad engagement events, each of which potentially affects hundreds of downstream
aggregate metrics. To enable its advertisers to measure user engagement and track ad
campaign efficiency, Twitter offers a variety of analytics tools, APIs, and dashboards that
can aggregate millions of metrics per second in near-real time.

The Twitter Revenue Data Platform engineering team, led by Steve Niemitz, migrated its
on-premises architecture to Google Cloud to boost the reliability and accuracy of Twitter's
ad analytics platform.

Over the past decade, Twitter has developed powerful data transformation pipelines to
handle the load of its ever-growing user base worldwide. The first deployments for those
pipelines were initially all running in Twitter's own data centers.

To accommodate the projected growth in user engagement over the next few years
and streamline the development of new features, the Twitter Revenue Data Platform
engineering team decided to rethink the architecture and deploy a more flexible and
scalable system in Google Cloud.

Six months after fully transitioning its ad analytics data platform to Google Cloud, Twitter
has already seen huge benefits. Twitter's developers have gained in agility as they can
more easily configure existing data pipelines and build new features much faster. The
real-time data pipeline has also greatly improved its reliability and accuracy, thanks to
Beam's exactly-once semantics and the increased processing speed and ingestion
capacity enabled by Pub/Sub, Dataflow, and Bigtable.

Twitter’s data transformation pipelines for ads | Google Cloud Blog


15. ASSESSMENT SCHEDULE

Tentative schedule for the assessments during the 2024-2025 even semester

S.No  Name of the Assessment  Start Date   End Date     Portion
1     IAT 1                   28.01.2025   03.02.2025   UNIT 1 & 2
2     IAT 2                   10.03.2025   15.03.2025   UNIT 3 & 4
3     REVISION                -            -            UNIT 5, 1 & 2
4     MODEL                   03.04.2025   17.04.2025   ALL 5 UNITS


16. PRESCRIBED TEXT BOOKS AND REFERENCES

REFERENCES:

1. https://cloud.google.com/docs
2. https://www.cloudskillsboost.google/course_templates/153
3. https://nptel.ac.in/courses/106105223
17. MINI PROJECT
1. Bucket Storage Using the API: (CO3, K3)
Objective:
*Create Cloud Storage REST/JSON API calls in Cloud Shell to create buckets and upload content.

Task:
*Using the API library, create a JSON file in the Cloud Console.
*Authenticate and authorize the Cloud Storage JSON/REST API.
*Create a bucket and upload a file using the Cloud Storage JSON/REST API.
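A minimal standard-library sketch of the bucket-creation call this project asks for: it builds, but does not send, the Cloud Storage JSON API request. The endpoint follows the documented JSON API form; the project ID, bucket name, and token below are placeholders (in Cloud Shell the token would normally come from OAuth 2.0, e.g. `gcloud auth print-access-token`).

```python
import json
from urllib import request

def build_create_bucket_request(project_id, bucket_name, access_token):
    """Build (without sending) the Cloud Storage JSON API call
    that creates a bucket in the given project."""
    url = f"https://storage.googleapis.com/storage/v1/b?project={project_id}"
    body = json.dumps({"name": bucket_name}).encode()
    return request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {access_token}",  # OAuth 2.0 bearer token
            "Content-Type": "application/json",
        },
    )

req = build_create_bucket_request("my-project", "my-unique-bucket", "<token>")
# urllib.request.urlopen(req) would send it; omitted here since the
# placeholders are not real credentials.
```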

2. Bot Building: (CO3, K2)

Objective:
*To build a bot by using the Dialogflow API in the CX console.

Task:
*Enable the Dialogflow API, create an agent, and define intents to capture user intentions and map
them to appropriate responses.
*Design flows and pages to structure the conversation flow and user interactions.
*Implement entities and parameters to extract and utilize relevant information from user inputs, and
test the agent.

3. Pub/Sub Services: (CO3, K2)

Objective:
*To build a resilient, asynchronous system with Cloud Run and Pub/Sub based on Pet Theory's
system.

Task:
*Create a Pub/Sub topic.
*Build, deploy, and test the Lab Report Service.
*Build the Email or SMS Service in Cloud Run.
*Deploy, configure, and test the Pub/Sub test report together with the Email or SMS Service.
*Test the resiliency of the system.
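Before wiring up Cloud Run, the decoupling Pub/Sub provides can be previewed with an in-process stand-in: one topic fans each message out to independent subscriptions, so the publisher (the Lab Report Service) never needs to know about the Email or SMS consumers. This is a local illustration of the pattern only, not the Pub/Sub client library.

```python
import queue

class ToyTopic:
    """In-process stand-in for a Pub/Sub topic with pull subscriptions."""

    def __init__(self):
        self.subscriptions = []

    def subscribe(self):
        # Each subscription gets its own queue, and therefore its own
        # copy of every message published after it subscribes.
        q = queue.Queue()
        self.subscriptions.append(q)
        return q

    def publish(self, message):
        for q in self.subscriptions:
            q.put(message)

topic = ToyTopic()
email_sub = topic.subscribe()   # would back the Email Service
sms_sub = topic.subscribe()     # would back the SMS Service
topic.publish({"lab_report_id": 42})

email_msg = email_sub.get_nowait()   # each subscriber receives its own copy
```

The resiliency the lab tests comes from this same shape: if one consumer is down, its messages wait in its subscription rather than being lost or blocking the publisher.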
4. Authentication and Authorization: (CO3, K3)

Objective:
*Protect your API proxy by requiring an OAuth token.
*Invoke an OAuth proxy to retrieve a token.
*Attach an OAuth token to an API request.

Task:
*Add an OAuth policy to your API proxy to verify tokens, and examine the OAuth proxy.
*Generate a new OAuth token.
*Test the retail API and remove the Authorization header.
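Attaching a token to an API request, the last objective above, amounts to setting a bearer `Authorization` header. A minimal standard-library sketch, with a placeholder endpoint and token:

```python
from urllib import request

def authorized_request(url, access_token):
    """Build a request carrying an OAuth 2.0 bearer token; the proxy's
    OAuth policy would verify this token before forwarding the call."""
    return request.Request(url, headers={"Authorization": f"Bearer {access_token}"})

# Hypothetical retail endpoint and token, for illustration only.
req = authorized_request("https://example.com/retail/v1/products", "<access-token>")
```

Removing that header (as the last task does) is what should make the proxied call fail, confirming the OAuth policy is enforced.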

5. Customer-Supplied Encryption Keys with Cloud Storage: (CO3, K4)

Objective:
*Configure customer-supplied encryption keys (CSEK) for Cloud Storage.
Task:
*Configure and use CSEK for Cloud Storage.
*Delete local files from Cloud Storage and verify encryption.
*Rotate your encryption keys without downloading and re-uploading data.
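With CSEK, each request carries the key material itself in headers. The header names below follow the documented Cloud Storage CSEK convention (the AES-256 key plus a SHA-256 hash of the raw key bytes, both base64-encoded); a standard-library sketch:

```python
import base64
import hashlib
import os

def csek_headers(raw_key: bytes):
    """Build the CSEK request headers Cloud Storage expects: the key
    itself plus a hash Cloud Storage uses to confirm key integrity."""
    assert len(raw_key) == 32, "CSEK keys must be 32 bytes (AES-256)"
    return {
        "x-goog-encryption-algorithm": "AES256",
        "x-goog-encryption-key": base64.b64encode(raw_key).decode(),
        "x-goog-encryption-key-sha256":
            base64.b64encode(hashlib.sha256(raw_key).digest()).decode(),
    }

headers = csek_headers(os.urandom(32))   # a freshly generated random key
```

In the lab itself, gsutil reads the key from its configuration and sets these headers for you; rotation then amounts to rewriting objects under the new key server-side, which is why no download and re-upload is needed.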
18. CONTENT BEYOND THE SYLLABUS

Cloud security is a collection of procedures and technology designed to address external and
internal threats to business security. Organizations need cloud security as they move toward their
digital transformation strategy and incorporate cloud-based tools and services as part of their
infrastructure.
The terms digital transformation and cloud migration have been used regularly in enterprise settings
over recent years. While both phrases can mean different things to different organizations, each is
driven by a common denominator: the need for change.

As enterprises embrace these concepts and move toward optimizing their operational approach, new
challenges arise when balancing productivity levels and security. While more modern technologies
help organizations advance capabilities outside the confines of on-premises infrastructure,
transitioning primarily to cloud-based environments can have several implications if not done
securely.

Striking the right balance requires an understanding of how modern-day enterprises can benefit from
the use of interconnected cloud technologies while deploying the best cloud security practices.

What is cloud computing?


The "cloud" or, more specifically, "cloud computing" refers to the process of accessing resources,
software and databases over the internet and outside the confines of local hardware restrictions. This
technology gives organizations flexibility when scaling their operations by offloading a portion, or
majority, of their infrastructure management to third-party hosting providers.

The most common and widely adopted cloud computing services are:

IaaS (Infrastructure-as-a-Service): Offers a hybrid approach, which allows organizations to manage
some of their data and applications on-premises. At the same time, it relies on cloud providers to
manage servers, hardware, networking, virtualization and storage needs.

PaaS (Platform-as-a-Service): Gives organizations the ability to streamline their application
development and delivery. It does so by providing a custom application framework that automatically
manages operating systems, software updates, storage and supporting infrastructure in the cloud.

SaaS (Software-as-a-Service): Provides cloud-based software hosted online and typically available on
a subscription basis. Third-party providers manage all potential technical issues, such as data,
middleware, servers and storage. This setup helps minimize IT resource expenditures and streamline
maintenance and support functions.
What types of cloud security solutions are available?
Identity and access management (IAM)
Identity and access management (IAM) tools and services allow enterprises to deploy
policy-driven enforcement protocols for all users attempting to access both on-premises
and cloud-based services. The core functionality of IAM is to create digital identities for all
users so they can be actively monitored and restricted when necessary during all data
interactions.
Data loss prevention (DLP)
Data loss prevention (DLP) services offer a set of tools and services designed to ensure the
security of regulated cloud data. DLP solutions use a combination of remediation alerts,
data encryption and other preventive measures to protect all stored data, whether at rest
or in motion.

Security information and event management (SIEM)

Security information and event management (SIEM) provides a comprehensive security


orchestration solution that automates threat monitoring, detection and response in cloud-
based environments. SIEM technology uses artificial intelligence (AI)-driven technologies to
correlate log data across multiple platforms and digital assets. This gives IT teams the
ability to successfully apply their network security protocols, enabling them to quickly react
to any potential threats.

Business continuity and disaster recovery

Regardless of the preventative measures organizations have in place for their on-premises
and cloud-based infrastructures, data breaches and disruptive outages can still occur.
Enterprises must be able to react to newly discovered vulnerabilities or significant
system outages as soon as possible. Disaster recovery solutions are a staple in cloud
security and provide organizations with the tools, services and protocols necessary to
expedite the recovery of lost data and resume normal business operations.

How should you approach cloud security?


The way to approach cloud security is different for every organization and can depend on
several variables. However, the National Institute of Standards and Technology (NIST) has
made a list of best practices that can be followed to establish a secure and sustainable
cloud computing framework.
The NIST has created necessary steps for every organization to self-assess their security
preparedness and apply adequate preventative and recovery security measures to their
systems. These principles are built on the NIST's five pillars of a cybersecurity framework:
Identify, protect, detect, respond and recover.
Another emerging technology in cloud security that supports the execution of NIST's
cybersecurity framework is cloud security posture management (CSPM). CSPM solutions are
designed to address a common flaw in many cloud environments - misconfigurations.

Cloud infrastructures that remain misconfigured by enterprises or even cloud providers can lead
to several vulnerabilities that significantly increase an organization's attack surface. CSPM
addresses these issues by helping to organize and deploy the core components of cloud
security. These include identity and access management (IAM), regulatory compliance
management, traffic monitoring, threat response, risk mitigation and digital asset management.

Thank you

Disclaimer:

This document is confidential and intended solely for the educational purpose of RMK Group of
Educational Institutions. If you have received this document through email in error, please notify the
system manager. This document contains proprietary information and is intended only to the
respective group / learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender immediately by e-mail if you
have received this document by mistake and delete this document from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing or taking any action in
reliance on the contents of this information is strictly prohibited.
