22CS907 Unit3 Final
Please read this disclaimer before proceeding:
This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document through
email in error, please notify the system manager. This document contains
proprietary information and is intended only for the respective group / learning
community. If you are not the addressee, you should not disseminate, distribute,
or copy this e-mail. Please notify the sender immediately by e-mail if you have
received this document by mistake and delete it from your system. If you are not
the intended recipient, you are notified that disclosing, copying, distributing,
or taking any action in reliance on the contents of this information is strictly
prohibited.
22CS907
CLOUD FOUNDATIONS
Department : CSE
Created by:
Date : 29.12.2024
1. CONTENTS
1. Contents
2. Course Objectives
3. Pre-Requisites
4. Syllabus
5. Course Outcomes
6. Lecture Plan
7. Lecture Notes
8. Assignments
9. Part B Questions
10. Online Certifications
11. Assessment Schedule
3. PRE-REQUISITE CHART
22IT202 – DATABASE MANAGEMENT SYSTEMS
4. SYLLABUS
22CS907 CLOUD FOUNDATIONS    L T P C
                             2 0 2 3
UNIT I INTRODUCTION TO CLOUD 6+6
Cloud Computing - Cloud Versus Traditional Architecture - IaaS, PaaS, and SaaS - Cloud
Architecture - The GCP Console - Understanding projects - Billing in GCP - Install and
configure Cloud SDK - Use Cloud Shell - APIs - Cloud Console Mobile App.
List of Exercise/Experiments:
1. Install and configure cloud SDK.
2. Connect to computing resources hosted on Cloud via Cloud Shell.
UNIT II COMPUTE AND STORAGE 6+6
Compute options in the cloud - Exploring IaaS with Compute Engine - Configuring elastic
apps with autoscaling - Exploring PaaS - Event driven programs - Containerizing and
orchestrating apps - Storage options in the cloud - Structured and unstructured storage in
the cloud - Unstructured storage using Cloud Storage - SQL managed services - NoSQL
managed services.
List of Exercise/Experiments:
1. Create virtual machine instances of various machine types using the Cloud Console and
the command line. Connect an NGINX web server to your virtual machine.
2. Create a small App Engine application that displays a short message.
3. Create, deploy, and test a cloud function using the Cloud Shell command line.
4. Deploy a containerized application.
5. Create a storage bucket, upload objects to it, create folders and subfolders in it, and make
objects publicly accessible using the Cloud command line.
UNIT III APIs AND SECURITY IN THE CLOUD 6+6
The purpose of APIs – API Services - Managed message services - Introduction to security
in the cloud - The shared security model - Encryption options - Authentication and
authorization with Cloud IAM - Identify Best Practices for Authorization using Cloud IAM.
List of Exercise/Experiments:
1. Deploy a sample API with any of the API services.
2. Publish messages with managed message service using the Python client library.
3. Create two users. Assign a role to a second user and remove assigned roles associated
with Cloud IAM. Explore how granting and revoking permissions works from Cloud Project
Owner and Viewer roles.
UNIT IV NETWORKING, AUTOMATION AND MANAGEMENT TOOLS 6+6
Introduction to networking in the cloud - Defining a Virtual Private Cloud - Public and private
IP address basics - Cloud network architecture - Routes and firewall rules in the cloud -
Multiple VPC networks - Building hybrid clouds using VPNs - Different options for load
balancing - Introduction to Infrastructure as Code - Terraform - Monitoring and management
tools.
List of Exercise/Experiments:
1. Create several VPC networks and VM instances and test connectivity across networks.
2. Create two nginx web servers and control external HTTP access to the web servers using
tagged firewall rules.
3. Configure an HTTP Load Balancer with global backends. Stress test the Load Balancer and
denylist the stress test IP.
4. Create two managed instance groups in the same region. Then, configure and test an
Internal Load Balancer with the instance groups as the backends.
5. Monitor a Compute Engine virtual machine (VM) instance with Cloud Monitoring by
creating uptime check, alerting policy, dashboard and chart.
UNIT V BIG DATA AND MACHINE LEARNING SERVICES 6+6
Introduction to big data managed services in the cloud - Leverage big data operations - Build
Extract, Transform, and Load pipelines - Enterprise Data Warehouse Services - Introduction
to machine learning in the cloud - Building bespoke machine learning models with AI Platform
- Pre-trained machine learning APIs.
List of Exercise/Experiments:
1. Create a cluster, run a simple Apache Spark job in the cluster, then modify the number
of workers in the cluster.
2. Create a streaming pipeline using one of the cloud services.
3. Set up your Python development environment, get the relevant SDK for Python, and run
an example pipeline using the Cloud Console.
4. Use cloud-based data preparation tool to manipulate a dataset. Import datasets, correct
mismatched data, transform data, and join data.
5. Utilize a cloud-based data processing and analysis tool for data exploration and use a
machine learning platform to train and deploy a custom TensorFlow Regressor model for
predicting customer lifetime value.
TOTAL: 60 PERIODS
5. COURSE OUTCOME

CO    Level  PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2  PSO3
CO1   K3     2    1    1    -    -    -    -    -    -    2     2     2     2     2     2
CO2   K3     3    3    3    -    -    -    -    2    2    2     2     2     2     2     2
CO3   K3     3    3    3    -    -    2    -    2    2    2     2     2     2     2     2
CO4   K3     3    3    3    -    -    -    -    2    2    2     2     2     2     2     2
CO5   K3     3    3    3    -    -    2    -    -    2    2     2     2     2     2     2
Correlation Level:
1. Slight (Low)
2. Moderate (Medium)
3. Substantial (High)
If there is no correlation, put "-".
7. LECTURE PLAN

S.No  Topic                                                               Periods  Date        CO   Level  Mode
1     The purpose of APIs                                                 1        4.02.2025   CO3  K2     Chalk & talk
2     Cloud Endpoints                                                     1        5.02.2025   CO3  K3     PPT/Demo
3     Using Apigee Edge - Managed message services                        1        6.02.2025   CO3  K2     PPT/Demo
4     Cloud Pub/Sub                                                       1        7.02.2025   CO3  K3     PPT/Demo
5     Introduction to security in the cloud - The shared security model   1        8.02.2025   CO3  K2     PPT
6     Encryption options                                                  1        12.02.2025  CO3  K2     PPT/Demo
7     Authentication and authorization with Cloud IAM                     1        14.02.2025  CO3  K3     PPT/Demo
Introduction to APIs:
An application programming interface (API) is a set of definitions and protocols that
lets one application request services from another without exposing its internal
implementation.
For example, consider an API offered by a payment processing service. Customers can
enter their card details on the frontend of an application for an ecommerce store. The
payment processor doesn’t require access to the user’s bank account; the API creates a
unique token for this transaction and includes it in the API call to the server. This ensures
a higher level of security against potential hacking threats.
REST APIs:
REST APIs use HTTP requests to perform GET, PUT, POST, and DELETE
operations.
For example, a REST API would use a GET request to retrieve a record, a POST
request to create one, a PUT request to update a record, and a DELETE request
to delete one.
All HTTP methods can be used in API calls. A well-designed REST API is similar to
a website running in a web browser with built-in HTTP functionality.
The state of a resource at any particular instant, or timestamp, is known as the
resource representation.
This information can be delivered to a client in virtually any format, including
JavaScript Object Notation (JSON), HTML, XSLT, Python, PHP, or plain text.
JSON is popular because it’s readable by both humans and machines—and it is
programming language-agnostic.
Request headers and parameters are also important in REST API calls because
they include important identifier information such as metadata, authorizations,
uniform resource identifiers (URIs), caching, cookies and more.
Request headers and response headers, along with conventional HTTP status
codes, are used within well-designed REST APIs.
One of the main reasons REST APIs work well with the cloud is due to their
stateless nature. State information does not need to be stored or referenced for
the API to run.
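The verb-to-operation mapping described above can be sketched without any web framework at all. The following toy code (hypothetical names, a plain in-memory dictionary standing in for a backing store) shows how GET, POST, PUT, and DELETE correspond to read, create, replace, and remove, together with conventional status codes:

```python
# Illustrative sketch, not a real web framework: an in-memory "records"
# resource that maps the four REST verbs onto create/read/update/delete.
records = {}

def handle(method, record_id=None, body=None):
    """Dispatch an HTTP-style request and return (status_code, payload)."""
    if method == "POST":                      # create a new record
        new_id = max(records, default=0) + 1
        records[new_id] = body
        return 201, {"id": new_id, **body}
    if method == "GET":                       # retrieve an existing record
        if record_id in records:
            return 200, records[record_id]
        return 404, {"error": "not found"}
    if method == "PUT":                       # replace an existing record
        if record_id in records:
            records[record_id] = body
            return 200, body
        return 404, {"error": "not found"}
    if method == "DELETE":                    # remove a record
        if records.pop(record_id, None) is not None:
            return 204, None
        return 404, {"error": "not found"}
    return 405, {"error": "method not allowed"}

status, payload = handle("POST", body={"name": "alice"})
print(status, payload)        # 201 {'id': 1, 'name': 'alice'}
print(handle("GET", 1))       # (200, {'name': 'alice'})
print(handle("DELETE", 1))    # (204, None)
```

Note that the handler keeps no per-client session state between calls; each request carries everything needed to process it, which is the stateless property that makes REST APIs easy to scale in the cloud.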
An authorization framework like OAuth 2.0 can help limit the privileges of third-
party applications.
Using a timestamp in the HTTP header, an API can also reject any request that
arrives after a certain time period.
Parameter validation and JSON Web Tokens are other ways to ensure that only
authorized clients can access the API.
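The timestamp check mentioned above can be illustrated with a few lines of plain Python. This is a conceptual sketch (the 300-second window is an arbitrary example, not a standard value); real APIs would combine it with signatures or tokens so the timestamp itself cannot be forged:

```python
import time

MAX_AGE_SECONDS = 300  # illustrative freshness window; real APIs choose their own

def is_request_fresh(header_timestamp, now=None, max_age=MAX_AGE_SECONDS):
    """Reject requests whose timestamp header is too old, or from the future."""
    now = time.time() if now is None else now
    age = now - header_timestamp
    return 0 <= age <= max_age

now = 1_000_000.0
print(is_request_fresh(now - 10, now=now))    # True: 10 s old, within the window
print(is_request_fresh(now - 600, now=now))   # False: older than the window
print(is_request_fresh(now + 60, now=now))    # False: timestamp from the future
```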
When deploying and managing APIs on your own, there are several issues to consider.
Interface Definition
Authentication and Authorization
Logging and Monitoring
Management and Scalability
CLOUD ENDPOINTS:
Endpoints is an API management system that helps you secure, monitor, analyze, and
set quotas on your APIs using the same infrastructure Google uses for its own APIs.
Endpoints can be used with applications running on App Engine, Kubernetes Engine,
and Compute Engine. Clients include Android, iOS, and JavaScript.
To have your API managed by Cloud Endpoints, you have three options, depending on
where your API is hosted and the type of communications protocol your API uses:
Endpoints works with the Extensible Service Proxy (ESP) and the Extensible Service
Proxy V2 (ESPv2) to provide API management.
Endpoints supports version 2 of the OpenAPI Specification, the industry standard
for defining REST APIs. The API can be implemented using any publicly available
REST framework, such as Django or Jersey, and is described in a JSON or YAML file
referred to as an OpenAPI document.
With Endpoints for gRPC, you can use the API management capabilities of Endpoints to
add an API console, monitoring, hosting, tracing, authentication, and more to your gRPC
services. In addition, once you specify special mapping rules, ESP and ESPv2 translate
RESTful JSON over HTTP into gRPC requests. This means that you can deploy a gRPC
server managed by Endpoints and call its API using a gRPC or JSON/HTTP client, giving
you much more flexibility and ease of integration with other systems.
Cloud Endpoints Frameworks
Cloud Endpoints Frameworks is a web framework for the App Engine standard Python 2.7
and Java 8 runtime environments. Cloud Endpoints Frameworks provides the tools and
libraries that allow you to generate REST APIs and client libraries for your application.
Endpoints Frameworks includes a built-in API gateway that provides API management
features that are comparable to the features that ESP provides for Endpoints for OpenAPI
and Endpoints for gRPC.
Endpoints Frameworks intercepts all requests and performs any necessary checks (such
as authentication) before forwarding the request to the API backend. When the backend
responds, Endpoints Frameworks gathers and reports telemetry. Metrics can be viewed
for API on the Endpoints Services page in the Google Cloud console.
Apigee Edge is a platform for developing and managing APIs. By fronting services with a
proxy layer, Edge provides an abstraction or facade for your backend service APIs and
provides security, rate limiting, quotas, analytics, and more.
Apigee services: The APIs that you use to create, manage, and deploy your API
proxies.
Back-end services: Used by your apps to provide runtime access to data for
your API proxies.
Flavors of Apigee:
Apigee: A cloud version hosted by Apigee in which Apigee maintains the environment,
allowing you to concentrate on building your services and defining the APIs to those
services.
Apigee hybrid: A hybrid version consisting of a runtime plane installed on-premises or in
a cloud provider of your choice, and a management plane running in Apigee's cloud. In
this model, API traffic and data are confined within your own enterprise-approved
boundaries.
Make services available through Apigee:
Apigee enables you to provide secure access to your services with a well-defined API that
is consistent across all of your services, regardless of service implementation. A consistent
API makes it easy for app developers to consume your services and lets you change the
backend implementation without affecting the public API.
In this architecture, Apigee sits between client apps and your backend services and
handles the requests flowing between them.
Rather than having app developers consume your services directly, they access an API
proxy created on Apigee. The API proxy functions as a mapping of a publicly available
HTTP endpoint to your backend service. By creating an API proxy you let Apigee handle
the security and authorization tasks required to protect your services, as well as to analyze
and monitor those services.
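The facade role of an API proxy can be sketched in a few lines. The code below is a toy illustration (the key store and backend function are hypothetical): the proxy enforces security before the backend is ever reached, so clients only see the proxy's public endpoint:

```python
# Toy sketch of an API proxy facade: clients call the proxy; the proxy enforces
# security (here, a hypothetical API-key check) before calling the backend,
# so the backend implementation stays hidden behind the facade.
VALID_API_KEYS = {"demo-key-123"}          # hypothetical key store

def backend_service(order_id):             # stand-in for the real backend service
    return {"order": order_id, "status": "shipped"}

def api_proxy(request):
    if request.get("api_key") not in VALID_API_KEYS:
        return {"status": 401, "body": "invalid API key"}
    # Security handled; map the public endpoint onto the backend call.
    body = backend_service(request["order_id"])
    return {"status": 200, "body": body}

print(api_proxy({"api_key": "demo-key-123", "order_id": 7}))
print(api_proxy({"api_key": "wrong", "order_id": 7}))
```

Because the security check lives in the proxy, the backend can change its implementation, or move, without client apps noticing.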
Because app developers make HTTP requests to an API proxy, rather than directly to
your services, developers do not need to know anything about the implementation of
your services. All the developer needs to know is the URL of the API proxy endpoint,
the request parameters, any required credentials, and the format of the response.
API Gateway:
API Gateway enables you to provide secure access to your backend services through a
well-defined REST API that is consistent across all of your services, regardless of the
service implementation. Clients consume your REST APIs to implement standalone apps
for a mobile device or tablet, through apps running in a browser, or through any other
type of app that can make a request to an HTTP endpoint.
MANAGED MESSAGE SERVICES:
PUB/SUB:
Pub/Sub is used for streaming analytics and data integration pipelines to ingest and
distribute data. It's equally effective as a messaging-oriented middleware for service
integration or as a queue to parallelize tasks.
Pub/Sub enables you to create systems of event producers and consumers, called
publishers and subscribers. Publishers communicate with subscribers asynchronously by
broadcasting events, rather than by synchronous remote procedure calls (RPCs).
Publishers send events to the Pub/Sub service, without regard to how or when these
events are to be processed. Pub/Sub then delivers events to all the services that react to
them. In systems communicating through RPCs, publishers must wait for subscribers to
receive the data. However, the asynchronous integration in Pub/Sub increases the
flexibility and robustness of the overall system.
Pub/Sub service: This messaging service is the default choice for most users and
applications. It offers the highest reliability and largest set of integrations, along with
automatic capacity management. Pub/Sub guarantees synchronous replication of all data
to at least two zones and best-effort replication to a third additional zone.
Pub/Sub Lite service: A separate but similar messaging service built for lower cost. It
offers lower reliability compared to Pub/Sub. It offers either zonal or regional topic
storage. Zonal Lite topics are stored in only one zone. Regional Lite topics replicate data
to a second zone asynchronously. Also, Pub/Sub Lite requires you to pre-provision and
manage storage and throughput capacity. Consider Pub/Sub Lite only for applications
where achieving a low cost justifies some additional operational work and lower reliability.
In this scenario, two publishers publish messages on a single topic, and there are
two subscriptions to the topic. The first subscription has two subscribers, meaning
messages will be load-balanced across them, with each subscriber receiving a subset
of the messages. The second subscription has one subscriber that will receive all of
the messages.
Integrations:
Pub/Sub has many integrations with other Google Cloud products to create a fully
featured messaging system:
APIs. Pub/Sub uses standard gRPC and REST service API technologies along with client
libraries for several languages.
Integration Connectors. (Preview) These connectors let you connect to various data
sources. With connectors, both Google Cloud services and third-party business
applications are exposed to your integrations through a transparent, standard interface.
For Pub/Sub, you can create a Pub/Sub connection for use in your integrations.
Publisher-subscriber relationships can be one-to-many (fan-out), many-to-one (fan-in),
and many-to-many, as shown in the following diagram:
The following diagram illustrates how a message passes from a publisher to a subscriber.
For push delivery, the acknowledgment is implicit in the response to the push request,
while for pull delivery it requires a separate RPC.
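The scenario described earlier (two subscriptions on one topic, one of them with two load-balanced subscribers) can be sketched as an in-process model. This toy code makes no Google Cloud calls and uses invented names; it only illustrates the delivery semantics, with round-robin standing in for Pub/Sub's actual load balancing:

```python
import itertools

# In-process sketch of Pub/Sub semantics: every subscription receives a copy of
# each message; within a subscription that has several subscribers, messages
# are load-balanced across them (round-robin here, for illustration).
class Topic:
    def __init__(self):
        self.subscriptions = []

    def publish(self, message):
        for sub in self.subscriptions:
            sub.deliver(message)

class Subscription:
    def __init__(self, topic, subscribers):
        self.inboxes = {name: [] for name in subscribers}
        self._next = itertools.cycle(subscribers)   # round-robin load balancing
        topic.subscriptions.append(self)

    def deliver(self, message):
        self.inboxes[next(self._next)].append(message)

topic = Topic()
balanced = Subscription(topic, ["subscriber-X", "subscriber-Y"])  # shares messages
solo = Subscription(topic, ["subscriber-Z"])                      # receives all

for msg in ["A", "B", "C", "D"]:
    topic.publish(msg)

print(balanced.inboxes)  # {'subscriber-X': ['A', 'C'], 'subscriber-Y': ['B', 'D']}
print(solo.inboxes)      # {'subscriber-Z': ['A', 'B', 'C', 'D']}
```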
Pub/Sub Basic Architecture:
Pub/Sub servers run in all Google Cloud regions around the world. This allows the service
to offer fast, global data access, while giving users control over where messages are
stored. Cloud Pub/Sub offers global data access in that publisher and subscriber clients
are not aware of the location of the servers to which they connect or how those services
route the data.
Pub/Sub’s load balancing mechanisms direct publisher traffic to the nearest Google Cloud
data center where data storage is allowed.
Any individual message is stored in a single region. However, a topic may have messages
stored in many regions. When a subscriber client requests messages published to this
topic, it connects to the nearest server which aggregates data from all messages
published to the topic for delivery to the client.
Pub/Sub is divided into two primary parts: the data plane, which handles moving
messages between publishers and subscribers, and the control plane, which handles
the assignment of publishers and subscribers to servers on the data plane. The servers
in the data plane are called forwarders, and the servers in the control plane are called
routers. When publishers and subscribers are connected to their assigned forwarders,
they do not need any information from the routers (as long as those forwarders remain
accessible). Therefore, it is possible to upgrade the control plane of Pub/Sub without
affecting any clients that are already connected and sending or receiving messages.
Control Plane:
The Pub/Sub control plane distributes clients to forwarders in a way that provides
scalability, availability, and low latency for all clients. Any forwarder is capable of serving
clients for any topic or subscription. When a client connects to Pub/Sub, the router
decides the data centers the client should connect to based on shortest network distance,
a measure of the latency on the connection between two points.
The router provides the client with an ordered list of forwarders it can consider connecting
to. This ordered list may change based on forwarder availability and the shape of the
load from the client.
A client takes this list of forwarders and connects to one or more of them. The client
prefers connecting to the forwarders most recommended by the router, but also takes
into consideration any failures that have occurred.
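This router-and-failover behavior can be sketched as follows. The forwarder names and the fixed distance ordering are invented for illustration; a real router computes the ordering from network distance and load:

```python
# Sketch of the control-plane idea: the router hands the client an ordered list
# of forwarders (nearest first), and the client connects to the first one that
# is reachable, falling back down the list on failure.
FORWARDERS_BY_DISTANCE = ["fwd-us-east1-a", "fwd-us-east4-b", "fwd-europe-west1-c"]

def route():
    """Router's ordered recommendation for this client (hypothetical ordering)."""
    return list(FORWARDERS_BY_DISTANCE)

def connect(forwarders, available):
    for fwd in forwarders:                 # prefer the router's top choice
        if fwd in available:
            return fwd
    raise ConnectionError("no forwarder reachable")

recommended = route()
print(connect(recommended, available={"fwd-us-east4-b", "fwd-europe-west1-c"}))
# fwd-us-east4-b: the nearest forwarder was down, so the client fell back
```

Because the client only needs the routers at connection time, the control plane can be upgraded without disturbing clients already attached to their forwarders, as described above.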
Data Plane:
The data plane receives messages from publishers and sends them to clients.
Different messages for a single topic and subscription can flow through many publishers,
subscribers, publishing forwarders, and subscribing forwarders. Publishers can publish to
multiple forwarders simultaneously and subscribers may connect to multiple subscribing
forwarders to receive messages. Therefore, the flow of messages through connections
among publishers, subscribers, and forwarders can be complex. The following diagram
shows how messages could flow for a single topic and subscription, where different colors
indicate the different paths messages may take from publishers to subscribers:
INTRODUCTION TO SECURITY IN THE CLOUD:
Cloud security refers to a broad set of policies, technologies, applications, and controls
utilized to protect virtualized IP, data, applications, services, and the associated
infrastructure of cloud computing.
The five layers of protection Google provides to keep customers' data safe:
1. Hardware infrastructure
2. Service deployment
3. Storage services
4. Internet communication
5. Operational security
At the hardware infrastructure layer:
Hardware design and provenance: Both the server boards and the networking
equipment in Google data centers are custom designed by Google. Google also designs
custom chips, including a hardware security chip that's currently being deployed on both
servers and peripherals.
Secure boot stack: Google server machines use various technologies to ensure that
they are booting the correct software stack, such as cryptographic signatures over the
BIOS, bootloader, kernel, and base operating system image.
Premises security: Google designs and builds its own data centers, which incorporate
multiple layers of physical security protections. Access to these data centers is limited to
only a small fraction of Google employees. Google also hosts some servers in third-party
data centers, where it ensures that there are Google-controlled physical security
measures on top of the security layers provided by the data center operator.
User identity: Google’s central identity service, which usually manifests to end users as
the Google login page, goes beyond asking for a simple username and password. The
service also intelligently challenges users for additional information based on risk factors
such as whether they have logged in from the same device or a similar location in the
past. Users also have the option of employing secondary factors when signing in,
including devices based on the Universal 2nd Factor (U2F) open standard.
Encryption at rest: Most applications at Google access physical storage (in other words,
“file storage”) indirectly by using storage services, and encryption (using centrally
managed keys) is applied at the layer of these storage services. Google also enables
hardware encryption support in hard drives and SSDs.
Denial of Service (DoS) protection: The sheer scale of its infrastructure enables
Google to simply absorb many DoS attacks. Google also has multi-tier, multi-layer DoS
protections that further reduce the risk of any DoS impact on a service running behind a
GFE.
Finally, at Google’s operational security layer:
Intrusion detection: Rules and machine intelligence give Google’s operational security
teams warnings of possible incidents. Google conducts Red Team exercises to measure
and improve the effectiveness of its detection and response mechanisms.
Reducing insider risk: Google aggressively limits and actively monitors the activities of
employees who have been granted administrative access to the infrastructure.
Employee U2F use: To guard against phishing attacks against Google employees,
employee accounts require use of U2F-compatible security keys.
Software development practices: Google employs central source control and requires
two-party review of new code. Google also provides its developers with libraries that
prevent them from introducing certain classes of security bugs, and runs a
Vulnerability Rewards Program that pays anyone who discovers and reports bugs in its
infrastructure or applications.
Cloud computing and storage provide users with capabilities to store and process
their data in third-party data centers. Organizations use the cloud in a variety of different
service models (with acronyms such as SaaS, PaaS, and IaaS) and deployment models
(private, public, hybrid, and community).
Security concerns associated with cloud computing are typically categorized in two ways:
as security issues faced by cloud providers (organizations providing software-, platform-
, or infrastructure-as-a-service via the cloud) and security issues faced by their customers
(companies or organizations who host applications or store data on the cloud).
Security responsibilities are shared between the customer and Google Cloud.
The responsibility is shared, however, and is often detailed in a cloud provider's "shared
security responsibility model" or "shared responsibility model." The provider must ensure
that their infrastructure is secure and that their clients’ data and applications are
protected, while the user must take measures to fortify their application and use strong
passwords and authentication measures.
When customers move an application to Google Cloud, Google handles many of the lower
layers of security, like physical security, disk encryption, and network integrity.
The upper layers of the security stack, including the securing of data, remain the
customer's responsibility. Google provides tools like the resource hierarchy and IAM to
help them define and implement policies, but ultimately this part is their responsibility.
Data access is usually the customer's responsibility. They control who or what has access
to their data. Google Cloud provides tools that help them control this access, such as
Identity and Access Management, but these tools must be properly configured to protect
that data.
Several encryption options are available on Google Cloud, ranging from simple options
with limited control to options that offer greater control and flexibility at the cost
of more complexity.
Google Cloud will encrypt data in transit and at rest by default. Data in transit is encrypted
by using Transport Layer Security (TLS). Data encrypted at rest is done with AES 256-bit
keys. The encryption happens automatically.
With customer-managed encryption keys (CMEK), you manage the encryption keys that
protect data on Google Cloud. Cloud Key Management Service, or Cloud KMS, automates
and simplifies the generation and management of these keys; the keys are hosted by
Cloud KMS and managed by the customer.
Customer-supplied encryption keys (CSEK) give users even more control over their keys,
but with greater management complexity. With CSEK, users use their own AES 256-bit
encryption keys and are responsible for generating them, storing them, and providing
them as part of Google Cloud API calls. Google Cloud will use the provided key to
encrypt the data before saving it. Google guarantees that the key only exists in
memory and is discarded after use.
Persistent disks, such as those that back virtual machines, can be encrypted with
customer-supplied encryption keys. With CSEK for persistent disks, the data is encrypted
before it leaves the virtual machine. Even without CSEK or CMEK, persistent disks are still
encrypted. When a persistent disk is deleted, the keys are discarded, and the data is
rendered irrecoverable by traditional means.
To have more control over persistent disk encryption, users can create their own
persistent disks and redundantly encrypt them.
And finally, client-side encryption is always an option. With client-side encryption, users
encrypt data before they send it to Google Cloud. Neither the unencrypted data nor the
decryption keys leave their local device.
Google provides many APIs and services, which require authentication to access.
IAM lets you grant granular access to specific Google Cloud resources and helps
prevent access to other resources. IAM lets you adopt the security principle of
least privilege, which states that nobody should have more permissions than they
actually need.
With IAM, you manage access control by defining who (identity) has what access
(role) for which resource. For example, Compute Engine virtual machine instances,
Google Kubernetes Engine (GKE) clusters, and Cloud Storage buckets are all
Google Cloud resources.
The organizations, folders, and projects that you use to organize your resources
are also resources.
In IAM, permission to access a resource isn't granted directly to the end user.
Instead, permissions are grouped into roles, and roles are granted to authenticated
principals.
An allow policy, also known as an IAM policy, defines and enforces what roles are
granted to which principals. Each allow policy is attached to a resource. When an
authenticated principal attempts to access a resource, IAM checks the resource's
allow policy to determine whether the action is allowed.
Principal. A principal can be a Google Account (for end users), a service account
(for applications and compute workloads), a Google group, or a Google Workspace
account or Cloud Identity domain that can access a resource. Each principal has
its own identifier, which is typically an email address.
Policy. The allow policy is a collection of role bindings that bind one or more
principals to individual roles. When you want to define who (principal) has what
type of access (role) on a resource, you create an allow policy and attach it to the
resource.
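The principal / role / allow-policy model above can be sketched in a few lines of plain Python. This is a conceptual illustration only; the role and member names are real IAM-style identifiers, but the permission lookup is a toy stand-in for what the IAM service does:

```python
# Minimal sketch of the IAM model: roles bundle permissions, an allow policy
# binds principals to roles on a resource, and an access check asks whether
# any role bound to the principal contains the needed permission.
ROLES = {
    "roles/pubsub.publisher": {"pubsub.topics.publish"},
    "roles/viewer": {"storage.objects.get", "pubsub.topics.get"},
}

allow_policy = {  # attached to one resource, e.g. a Pub/Sub topic
    "bindings": [
        {"role": "roles/pubsub.publisher", "members": ["user:alice@example.com"]},
        {"role": "roles/viewer", "members": ["user:bob@example.com"]},
    ]
}

def has_permission(policy, principal, permission):
    return any(
        principal in binding["members"] and permission in ROLES[binding["role"]]
        for binding in policy["bindings"]
    )

print(has_permission(allow_policy, "user:alice@example.com", "pubsub.topics.publish"))  # True
print(has_permission(allow_policy, "user:bob@example.com", "pubsub.topics.publish"))    # False
```

Note how the check never asks "does Bob have this permission directly?" but only "does any of Bob's roles contain it?", which is exactly why granting the smallest sufficient role enforces least privilege.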
Concepts related to identity:
In IAM, you grant access to principals. Principals can be of the following types:
Google Account
Service account
Google group
Google Workspace account
Cloud Identity domain
All authenticated users
All users
When an authenticated principal attempts to access a resource, IAM checks the resource's
allow policy to determine whether the action is allowed.
Resource
If a user needs access to a specific Google Cloud resource, you can grant the user
a role for that resource.
Permissions are not granted to users directly. Instead, the roles that contain the
appropriate permissions are identified, and then the roles are granted to the user.
Roles
A role is a collection of permissions. You cannot grant a permission to the user directly.
Instead, you grant them a role. When you grant a role to a user, you grant them all the
permissions that the role contains.
Basic roles:
roles/viewer (Viewer): Permissions for read-only actions that do not affect state,
such as viewing (but not modifying) existing resources or data.
roles/editor (Editor): All viewer permissions, plus permissions for actions that
modify state, such as changing existing resources. Note: the Editor role contains
permissions to create and delete resources for most Google Cloud services, but it
does not contain permissions to perform all actions for all services.
roles/owner (Owner): All Editor permissions, plus permissions to manage roles and
permissions for a project and all resources within it, and to set up billing for
a project.
Predefined roles: Predefined roles give granular access to specific Google Cloud
resources. These roles are created and maintained by Google. For example, the
predefined role Pub/Sub Publisher (roles/pubsub.publisher) provides access to
only publish messages to a Pub/Sub topic.
Custom roles: Roles that you create to tailor permissions to the needs of your
organization when predefined roles don't meet your needs. IAM also lets you
create custom IAM roles. Custom roles help you enforce the principle of least
privilege, because they help to ensure that the principals in your organization have
only the permissions that they need.
Service Accounts:
A service account is a special kind of account used by an application or compute
workload rather than a person; it is typically used to run workloads that are not
tied to the lifecycle of a human user. A service account is identified by its email
address, which is unique to the account:
service-account-name@project-id.iam.gserviceaccount.com
You can create user-managed service accounts in your project using the IAM API,
the Google Cloud console, or the Google Cloud CLI. You are responsible for
managing and securing these accounts.
When you enable or use some Google Cloud services, they create user-managed service
accounts that enable the service to deploy jobs that access other Google Cloud
resources. These accounts are known as default service accounts.
Some Google Cloud services need access to your resources so that they can act
on your behalf. For example, when you use Cloud Run to run a container, the
service needs access to any Pub/Sub topics that can trigger the container.
To meet this need, Google creates and manages service accounts for many Google
Cloud services. These service accounts are known as Google-managed service
accounts. You might see Google-managed service accounts in your project's allow
policy, in audit logs, or on the IAM page in the Google Cloud console.
Google-managed service accounts are not listed in the Service accounts page in
the Google Cloud console.
Best practices for authorization using Cloud IAM:
Use projects to group resources that share the same trust boundary.
Check the policy granted on each resource and make sure you understand policy
inheritance.
Because of inheritance, use the principle of least privilege when you grant roles.
Finally, audit policies by using Cloud Audit Logs, and audit the memberships of
groups that are used in policies.
10. ASSIGNMENT
1. How to create pub/sub topics and pub/sub subscription in GCP. (CO3, K3)
2. Publishing a message to a topic. (CO3, K2)
3. Use a pull subscriber to output individual topic messages. (CO3, K2)
4. Create two users. Log in as the first user, assign a role to the second
user, and then remove the assigned roles using Cloud IAM.
(CO3, K2)
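Assignment tasks 1–3 revolve around the publish/subscribe pattern. The sketch below simulates a topic with a pull subscription in plain Python so the semantics can be tried without a GCP project (the class and resource names are illustrative, not the google-cloud-pubsub client API):

```python
from collections import deque

class Topic:
    """Minimal stand-in for a Pub/Sub topic with pull subscriptions."""
    def __init__(self, name):
        self.name = name
        self.subscriptions = {}

    def create_subscription(self, sub_name):
        self.subscriptions[sub_name] = deque()

    def publish(self, message):
        # Every subscription receives its own copy of each message.
        for queue in self.subscriptions.values():
            queue.append(message)

    def pull(self, sub_name):
        # A pull subscriber takes the oldest undelivered message.
        queue = self.subscriptions[sub_name]
        return queue.popleft() if queue else None

topic = Topic("my-topic")
topic.create_subscription("my-sub")
topic.publish("hello")
print(topic.pull("my-sub"))  # hello
print(topic.pull("my-sub"))  # None (nothing left to deliver)
```

Note one design point the simulation makes visible: messages published before a subscription exists are not delivered to it, which is also how Pub/Sub behaves.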
Interface Definition
Authentication and Authorization
Logging and Monitoring
Management and Scalability
5. What is Cloud Endpoint?
Endpoints is an API management system that helps you secure, monitor, analyze,
and set quotas on your APIs using the same infrastructure Google uses for its own
APIs.
Cloud Endpoints is a distributed API management system. It uses the Extensible
Service Proxy (ESP), a service proxy that runs in its own Docker container, and
helps you create and maintain even the most demanding APIs with low latency
and high performance.
6. What are the cloud endpoints options to manage API?
To have your API managed by Cloud Endpoints, you have three options, depending
on where your API is hosted and the type of communications protocol your API uses:
1. Cloud Endpoints for OpenAPI
2. Cloud Endpoints for gRPC
3. Cloud Endpoints Frameworks for the App Engine standard environment
1. Hardware infrastructure
2. Service deployment
3. Storage services
4. Internet communication
5. Operational security
18. What is the shared security model?
Security concerns associated with cloud computing are typically categorized in two
ways: as security issues faced by cloud providers and security issues faced by their
customers. Security responsibilities are shared between the customer and Google
Cloud. The provider must ensure that their infrastructure is secure and that their
clients’ data and applications are protected, while the user must take measures to
fortify their application and use strong passwords and authentication measures.
Cloud KMS lets you both rotate keys manually and automate the rotation of keys
on a time-based interval.
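Time-based rotation can be pictured as a simple elapsed-time check (illustrative only: Cloud KMS creates new key versions automatically once a rotation period is set, and the 90-day period here is an assumed example):

```python
from datetime import datetime, timedelta

ROTATION_PERIOD = timedelta(days=90)  # assumed example period

def needs_rotation(last_rotated, now):
    """True once the rotation period has elapsed since the last
    key version was created."""
    return now - last_rotated >= ROTATION_PERIOD

last = datetime(2024, 1, 1)
print(needs_rotation(last, datetime(2024, 2, 1)))  # False
print(needs_rotation(last, datetime(2024, 6, 1)))  # True
```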
This follows the principle of least privilege, which states that nobody should
have more permissions than they actually need. With IAM, we can manage access
control by defining who (identity) has what access (role) for which resource.
22. How a role is related to permission?
A role is a collection of permissions. Permissions determine what operations are
allowed on a resource. When you grant a role to a principal, you grant all the
permissions that the role contains.
Custom roles: Roles that you create to tailor permissions to the needs of your
organization when the predefined roles don't meet those needs.
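The role-to-permission relationship above can be modeled in a few lines of Python (a sketch: the two predefined role names and their permissions follow GCP's naming, while the custom role is hypothetical):

```python
# A role is a named collection of permissions; granting the role to a
# principal grants every permission inside it.
PREDEFINED_ROLES = {
    "roles/storage.objectViewer": {"storage.objects.get", "storage.objects.list"},
    "roles/pubsub.publisher": {"pubsub.topics.publish"},
}

# A custom role tailors permissions when no predefined role fits, e.g.
# read single objects but not list a bucket (role name is hypothetical).
CUSTOM_ROLES = {"roles/custom.objectReader": {"storage.objects.get"}}

ALL_ROLES = {**PREDEFINED_ROLES, **CUSTOM_ROLES}

def has_permission(granted_roles, permission):
    """A principal holds a permission if any granted role contains it."""
    return any(permission in ALL_ROLES.get(role, set()) for role in granted_roles)

print(has_permission({"roles/storage.objectViewer"}, "storage.objects.list"))  # True
print(has_permission({"roles/custom.objectReader"}, "storage.objects.list"))   # False
```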
1. Explain the purpose of API and list the challenges in deploying and managing the
APIs.
https://onlinecourses.nptel.ac.in/noc20_cs55/preview
https://learndigital.withgoogle.com/digitalgarage/course/gcloud-computing-foundations
14. REAL TIME APPLICATIONS
As part of the daily business operations on its advertising platform, Twitter serves billions
of ad engagement events, each of which potentially affects hundreds of downstream
aggregate metrics. To enable its advertisers to measure user engagement and track ad
campaign efficiency, Twitter offers a variety of analytics tools, APIs, and dashboards that
can aggregate millions of metrics per second in near-real time.
The Twitter Revenue Data Platform engineering team, led by Steve Niemitz, migrated its
on-premises architecture to Google Cloud to boost the reliability and accuracy of Twitter's
ad analytics platform.
Over the past decade, Twitter has developed powerful data transformation pipelines to
handle the load of its ever-growing user base worldwide. The first deployments for those
pipelines were initially all running in Twitter's own data centers.
To accommodate the projected growth in user engagement over the next few years
and streamline the development of new features, the Twitter Revenue Data Platform
engineering team decided to rethink the architecture and deploy a more flexible and
scalable system in Google Cloud.
Six months after fully transitioning its ad analytics data platform to Google Cloud, Twitter
has already seen huge benefits. Twitter's developers have gained in agility as they can
more easily configure existing data pipelines and build new features much faster. The
real-time data pipeline has also greatly improved its reliability and accuracy, thanks to
Beam's exactly-once semantics and the increased processing speed and ingestion
capacity enabled by Pub/Sub, Dataflow, and Bigtable.
15. ASSESSMENT SCHEDULE
S.NO    Name of the Assessment    Start Date    End Date    Portion
REFERENCES:
1. https://cloud.google.com/docs
2. https://www.cloudskillsboost.google/course_templates/153
3. https://nptel.ac.in/courses/106105223
17. MINI PROJECT
1. Bucket Storage Using API: (CO3, K3)
Objective:
*Create Cloud Storage REST/JSON API calls in Cloud Shell to create buckets and upload content.
Task:
*Using the API library, create a JSON file in the Cloud Console.
*Authenticate and authorize the Cloud Storage JSON/REST API.
*Create a bucket and upload a file using the Cloud Storage JSON/REST API.
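The bucket-creation call in this project can be assembled with the Python standard library alone. The sketch below only builds the request; the endpoint path is the Cloud Storage JSON API's buckets.insert URL, while PROJECT_ID, the token, and the bucket name are placeholders you would supply (e.g. the token from `gcloud auth print-access-token`):

```python
import json

PROJECT_ID = "my-project"          # placeholder: your GCP project ID
ACCESS_TOKEN = "ya29.placeholder"  # placeholder: a real OAuth 2.0 access token

# buckets.insert: POST to the JSON API with the project as a query parameter.
url = f"https://storage.googleapis.com/storage/v1/b?project={PROJECT_ID}"
headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}
body = json.dumps({"name": "my-unique-bucket-name"})  # bucket names are global

# In Cloud Shell you would now send it, e.g.:
#   curl -X POST -H "Authorization: Bearer $TOKEN" \
#        -H "Content-Type: application/json" --data "$BODY" "$URL"
print(url)
```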
Task:
*Enable the Dialogflow API, create an agent, and define intents to capture user intentions and
map them to appropriate responses.
*Design flows and pages to structure the conversation flow and user interactions.
*Implement entities and parameters to extract and utilize relevant information from user inputs,
and test the agent.
Task:
*Create a Pub/Sub topic
*Build, deploy, and test the Lab Report Service.
*Build the Email or SMS Service in Cloud Run.
*Deploy, configure, and test the Pub/Sub test report together with the Email or SMS Service.
*Test the resiliency of the system.
Task:
*Add an OAuth policy to your API proxy to verify tokens, and examine the OAuth proxy.
*Generate a new OAuth token.
*Test the retail API, then remove the Authorization header and test again.
Objective:
*Configure customer-supplied encryption keys (CSEK) for Cloud Storage
Task:
*Configure and use CSEK for Cloud Storage.
*Delete local files, copy them back from Cloud Storage, and verify encryption.
*Rotate your encryption keys without downloading and re-uploading data.
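The CSEK mechanism in this project boils down to three extra request headers on each Cloud Storage call. The sketch below builds them from a freshly generated key (the header names are the documented CSEK ones; the key itself is random, for illustration only):

```python
import base64
import hashlib
import os

raw_key = os.urandom(32)  # a 256-bit AES key, supplied by the customer

# Cloud Storage never stores this key; you send it with every request
# that touches a CSEK-protected object.
csek_headers = {
    "x-goog-encryption-algorithm": "AES256",
    "x-goog-encryption-key": base64.b64encode(raw_key).decode(),
    "x-goog-encryption-key-sha256": base64.b64encode(
        hashlib.sha256(raw_key).digest()
    ).decode(),
}

for name in csek_headers:
    print(name)
```

Rotating a CSEK therefore means rewriting each object with the new key headers; the lab shows this can be done server-side without downloading and re-uploading the data.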
18. CONTENT BEYOND THE SYLLABUS
Cloud security is a collection of procedures and technology designed to address external and
internal threats to business security. Organizations need cloud security as they move toward their
digital transformation strategy and incorporate cloud-based tools and services as part of their
infrastructure.
The terms digital transformation and cloud migration have been used regularly in enterprise settings
over recent years. While both phrases can mean different things to different organizations, each is
driven by a common denominator: the need for change.
As enterprises embrace these concepts and move toward optimizing their operational approach, new
challenges arise when balancing productivity levels and security. While more modern technologies
help organizations advance capabilities outside the confines of on-premises infrastructure,
transitioning primarily to cloud-based environments can have several implications if not done
securely.
Striking the right balance requires an understanding of how modern-day enterprises can benefit from
the use of interconnected cloud technologies while deploying the best cloud security practices.
The most common and widely adopted cloud computing services are:
IaaS (Infrastructure-as-a-Service): Offers a hybrid approach, which allows organizations to manage
some of their data and applications on-premises. At the same time, it relies on cloud providers to
manage servers, hardware, networking, virtualization and storage needs.
PaaS (Platform-as-a-Service): Gives organizations the ability to streamline their application
development and delivery. It does so by providing a custom application framework that automatically
manages operating systems, software updates, storage and supporting infrastructure in the cloud.
SaaS (Software-as-a-Service): Provides cloud-based software hosted online and typically available on
a subscription basis. Third-party providers manage all potential technical issues, such as data,
middleware, servers and storage. This setup helps minimize IT resource expenditures and streamline
maintenance and support functions.
What types of cloud security solutions are available?
Identity and access management (IAM)
Identity and access management (IAM) tools and services allow enterprises to deploy
policy-driven enforcement protocols for all users attempting to access both on-premises
and cloud-based services. The core functionality of IAM is to create digital identities for all
users so they can be actively monitored and restricted when necessary during all data
interactions.
Data loss prevention (DLP)
Data loss prevention (DLP) services offer a set of tools and services designed to ensure the
security of regulated cloud data. DLP solutions use a combination of remediation alerts,
data encryption and other preventive measures to protect all stored data, whether at rest
or in motion.
Disaster recovery
Regardless of the preventative measures organizations have in place for their on-premises
and cloud-based infrastructures, data breaches and disruptive outages can still occur.
Enterprises must be able to react to newly discovered vulnerabilities or significant
system outages as quickly as possible. Disaster recovery solutions are a staple in cloud
security and provide organizations with the tools, services and protocols necessary to
expedite the recovery of lost data and resume normal business operations.
Cloud security posture management (CSPM)
Cloud infrastructures that remain misconfigured by enterprises or even cloud providers can lead
to several vulnerabilities that significantly increase an organization's attack surface. CSPM
addresses these issues by helping to organize and deploy the core components of cloud
security. These include identity and access management (IAM), regulatory compliance
management, traffic monitoring, threat response, risk mitigation and digital asset management.