
Sample Exam Questions

Scenario #1
Question

What service would you use to keep data in sync across regions?

A. Cloud SQL
B. Cloud Bigtable
C. Datastore
D. Cloud Storage

[Diagram: Cloud Load Balancing in front of web servers on Compute Engine (multiple instances) in Region 1 and Region 2, with "?" marking the service that keeps data in sync.]
Scenario #1
Answer

D. Cloud Storage
Scenario #1
Rationale

D - Cloud Storage Standard Storage buckets created in a multi-region location stay in sync between regions automatically.

A, B, C - The other services listed are in a single region.
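As a minimal sketch of why D works (assuming the google-cloud-storage Python client; the bucket name is hypothetical), creating a Standard Storage bucket in a multi-region location is all that is required; cross-region replication is automatic:

    # Sketch: a multi-region Standard Storage bucket (name is hypothetical).
    # Requires `pip install google-cloud-storage` and default credentials.
    from google.cloud import storage

    client = storage.Client()
    bucket = storage.Bucket(client, name="example-synced-assets")
    bucket.storage_class = "STANDARD"
    # "US" is a multi-region location; Cloud Storage keeps the data
    # geo-redundant across regions with no further configuration.
    client.create_bucket(bucket, location="US")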
Scenario #2
Question

An existing application uses websockets. To help migrate the application to the cloud, you should:

A. Redesign the application to use HTTP streaming.
B. Redesign the application to use distributed sessions instead of websockets.
C. Do nothing to the application. HTTP(S) load balancing natively supports websocket proxying.
D. Review websocket encryption requirements with the security team.
Scenario #2
Answer

C. Do nothing to the application. HTTP(S) load balancing natively supports websocket proxying.
Scenario #2
Rationale

C - "HTTP(S) Load Balancing has native support for the WebSocket protocol. Backends that use WebSocket to communicate with clients can use the HTTP(S) load balancer as a front end, for scale and availability. The load balancer does not need any additional configuration to proxy WebSocket connections."

D - Irrelevant to the application migration.

A and B - There is nothing inherent about websockets that requires a redesign to run on Google Cloud.

https://cloud.google.com/load-balancing/docs/https/#websocket_proxy_support
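To illustrate the point, a client-side smoke test like this sketch (assuming the third-party websockets package; the hostname, path, and echo behavior are hypothetical) should work unchanged once the backends sit behind the HTTP(S) load balancer:

    # Sketch: verify a websocket round trip through the load balancer.
    # Requires `pip install websockets`; URL and echo backend are assumptions.
    import asyncio
    import websockets

    async def main():
        async with websockets.connect("wss://lb.example.com/ws") as ws:
            await ws.send("ping")       # backend is assumed to echo messages
            print(await ws.recv())      # expect "ping" back through the proxy

    asyncio.run(main())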
Scenario #3
Question

Which solution is required to trigger a Cloud Function, so it can ingest and process an image in the image tagging pipeline?

A. Datastore
B. Dataflow
C. Pub/Sub
D. Cloud Bigtable

[Diagram: uploaded images land in Cloud Storage; "?" triggers Cloud Functions, which calls the Cloud Vision API for image tagging; processed images are written back to Cloud Storage, and metadata flows through Pub/Sub and Dataflow into Cloud SQL.]
Scenario #3
Answer

C. Pub/Sub
Scenario #3
Rationale

C - Cloud Storage upload events can publish to Pub/Sub, which triggers a Cloud Function to ingest and process the image.
B - Dataflow would have nothing to do here but receive an image and call a Cloud Function.
A - Datastore is not for storing images.
D - Cloud Bigtable is not for storing images.
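A minimal sketch of the wiring (assuming the google-cloud-storage client; bucket, topic, and function names are hypothetical): attach a Pub/Sub notification to the upload bucket, then deploy a Pub/Sub-triggered Cloud Function that ingests the image.

    # Sketch 1: route upload events from the bucket to a Pub/Sub topic.
    # Assumes the bucket and topic already exist and the Cloud Storage
    # service agent is allowed to publish to the topic.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-uploaded-images")
    notification = bucket.notification(
        topic_name="image-uploads", payload_format="JSON_API_V1"
    )
    notification.create()

    # Sketch 2: Pub/Sub-triggered Cloud Function that ingests the event.
    import base64
    import json

    def process_image(event, context):
        # Pub/Sub delivers the Cloud Storage event as base64-encoded JSON.
        payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        print(f"New image: gs://{payload['bucket']}/{payload['name']}")
        # ...download the object and call the Vision API for tagging...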
Scenario #4
Question

How would you store data to be accessed once a month and not needed after five years?

A. Standard Storage class, lifecycle policy to delete after 5 years.
B. Standard Storage class, lifecycle policy change to Coldline after 5 years.
C. Nearline class, lifecycle policy change to Coldline after 5 years.
D. Nearline class, lifecycle policy to delete after 5 years.
Scenario #4
Answer

D. Nearline class, lifecycle policy to delete after 5 years.
Scenario #4
Rationale

D - The access pattern (once a month) fits Nearline. "Not needed" means delete, not archive.

A, B, C - Wrong access pattern, or Coldline (continued storage) instead of delete.
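A sketch of the matching configuration (assuming the google-cloud-storage client, an existing bucket with a hypothetical name, and 5 years approximated as 1825 days):

    # Sketch: Nearline default class plus a delete-after-~5-years rule.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-monthly-reports")  # assumed to exist
    bucket.storage_class = "NEARLINE"           # matches monthly access
    bucket.add_lifecycle_delete_rule(age=1825)  # ~5 years, then delete
    bucket.patch()                              # push the updated config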


Scenario #5
Question

TerramEarth has a new IoT pipeline. Which services will make this design work?

A. Cloud IoT Core, Datastore
B. Pub/Sub, Cloud Storage
C. Cloud IoT Core, Pub/Sub
D. App Engine, Cloud IoT Core

[Diagram: devices exchange config and data with service 1; service 1 passes data to service 2; Cloud Functions pushes config back through service 1; service 2 feeds Dataflow, which loads BigQuery.]
Scenario #5
Answer

C. Cloud IoT Core, Pub/Sub
Scenario #5
Rationale

C - "Device data captured by Cloud IoT Core gets published to Pub/Sub."

A - Cloud IoT Core does not publish to other services, and it doesn't store data.
B - Pub/Sub does not do device management.
D - In theory, an App Engine application could duplicate the functions of Cloud IoT Core, but since Cloud IoT Core only publishes to Pub/Sub, in position 2 it would not communicate with either Cloud Functions or Dataflow.

https://cloud.google.com/iot-core/
https://cloud.google.com/solutions/iot/ <- Pub/Sub's role in IoT
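In position 2, downstream consumers just subscribe to the topic. A minimal pull-subscriber sketch (assuming the google-cloud-pubsub client; project and subscription names are hypothetical):

    # Sketch: consume device telemetry published to Pub/Sub.
    from concurrent.futures import TimeoutError
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path("example-project", "device-data-sub")

    def callback(message):
        print(f"Telemetry: {message.data!r}")  # raw device payload bytes
        message.ack()

    future = subscriber.subscribe(sub_path, callback=callback)
    try:
        future.result(timeout=30)  # block briefly while messages stream in
    except TimeoutError:
        future.cancel()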
Scenario #6
Question

Which service would you use for a multi-petabyte database for analysts who only know SQL, and that must be available 24 x 7?

A. Cloud Storage
B. Cloud SQL
C. BigQuery
D. Datastore
Scenario #6
Answer

C. BigQuery
Scenario #6
Rationale

C - BigQuery SLA is 99.9%, meeting the uptime requirement, and it has an SQL interface.
A - Cloud Storage has no SQL interface.
B - Cloud SQL has the SLA and SQL, but not the capacity.
D - Datastore has no SQL interface.
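Analysts would interact with it through plain SQL. A sketch with the google-cloud-bigquery client (project, dataset, and table names are hypothetical):

    # Sketch: a standard SQL query over a (hypothetical) very large table.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT region, COUNT(*) AS orders
        FROM `example-project.sales.orders`
        GROUP BY region
        ORDER BY orders DESC
        LIMIT 10
    """
    for row in client.query(query).result():  # runs the job, waits for rows
        print(row.region, row.orders)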
Scenario #7
Question

Which service completes the CI/CD pipeline?

A. Pub/Sub
B. Cloud Build
C. Cloud Storage
D. Dataproc

[Diagram: Cloud Source Repositories feed "?", which pushes images to Container Registry; Jenkins then deploys to Compute Engine and Google Kubernetes Engine.]
Scenario #7
Answer

B. Cloud Build
Scenario #7
Rationale

B - Cloud Build builds Docker images from source repositories.
A, C, D - None of the other services build Docker images.
Scenario #8
Question

How would you simply and reliably clone a Linux VM to another project in another region?

A. Use Linux dd and netcat to stream the root disk to the new VM.
B. Snapshot the root disk and select it for the new VM.
C. Create an image from the root disk with Linux dd, create a disk from the image, and use it in the new VM.
D. Snapshot the root disk, create an image, and use the image for the new VM root disk.
Scenario #8
Answer

D. Snapshot the root disk, create an image, and use the image for the new VM root disk.
Scenario #8
Rationale

D - Works across projects and regions, and it is a simple and reliable method.
A - Incurs network costs and impacts performance of the original VM.
B - Snapshots are bound to the project, so they can't be selected directly from another project.
C - dd won't work correctly on a mounted disk.
Scenario #9
Question

Helicopter Racing security has locked out SSH access to production VMs. How
can operations manage the VMs?

A. Configure a VPN to allow SSH access to VMs.


B. Develop a Cloud API application for all operations actions.
C. Grant operations team access to use Cloud Shell.
D. Develop an application that grants temporary SSH access.
Scenario #9
Answer

C. Grant operations team access to use Cloud Shell.
Scenario #9
Rationale

C - The operations team doesn't actually need SSH access to manage VMs. All it needs is Cloud Shell with the Cloud SDK and gcloud tools. Cloud Shell provides all the tools for managing Compute Engine instances. In this case, the assumption that SSH access is needed is incorrect.
A - A VPN is a way to connect from remote to the internal IP of an instance. If SSH is blocked everywhere, this work-around won't help.
B - Developing an application that would use the Cloud API would be redundant with the gcloud command-line tool.
D - An application that provides temporary SSH access basically just violates the security practices.
Scenario #10
Question

What security strategy would you use for PII data on Cloud Storage?

A. Signed URL with expiration.
B. Read-only access to users, and default ACL on bucket.
C. No IAM roles to users, and granular ACLs on bucket.
D. Public access, random names, and share URLs in confidence.
Scenario #10
Answer

C. No IAM roles to users, and granular ACLs on bucket.
Scenario #10
Rationale

C - Most restrictive access.
A - Signed URL can be leaked.
B - Overly permissive.
D - "Security through obscurity" is no security at all.
Scenario #11
Question

Which platform features of Google Cloud support TerramEarth's business requirements?

A. Google has many years of experience with containers.
B. Google Cloud provides automatic discounts with increased usage.
C. AI Platform and BigQuery are designed for petabyte scale.
D. Google Cloud bills per minute, saving costs compared to hourly billing.
Scenario #11
Answer

C. AI Platform and BigQuery are designed for petabyte scale.
Scenario #11
Rationale

C - TerramEarth already has 200TB+ of data and is in a growth phase, so they must be concerned that the solution will be supportable as they "undergo the next wave of transformations in our industry". Also, TerramEarth seeks a competitive advantage through "incremental innovations", which can come from data insights using BigQuery and AI Platform.
B and D - TerramEarth is not price sensitive. It is more concerned with facing competitive threats.
A - Google's years of experience might be a persuasive reason for TerramEarth to choose Google Cloud, but time with any specific technology is not a stated business requirement.
Scenario #12
Question

How can MountKirk Games meet its scaling requirements while providing insights to investors?

A. Import MySQL game statistics to BigQuery for provisioning analysis and indicator reporting.
B. Use Cloud Monitoring custom metrics for autoscaling and reporting.
C. Autoscale based on CPU load and use Google Data Studio to share metrics.
D. Autoscale based on network latency as a measure of user experience.
Scenario #12
Answer

B. Use Cloud Monitoring custom metrics for autoscaling and reporting.
Scenario #12
Rationale

B - Cloud Monitoring custom metrics can be crafted to expose specific game activities, which can be useful for autoscaling and provide a more detailed source of indicators for the targeted marketing investors require. Cloud Operations is a fully managed service.

Technical Requirements:
Game Backend - "Dynamically scale up or down based on game activity."
Game Analytics - "Dynamically scale up or down based on game activity."
Game Analytics - "Use only fully managed services."

From the case study:
"...they had problems scaling their application servers."
"Mountkirk's current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting."
"Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users."

A - The current game statistics are not real-time but loaded into MySQL by ETL, so they cannot be used for autoscaling. Using BigQuery for analysis may provide better insights, but since game activity is disconnected from resource provisioning (there is no feedback loop), the marketing insights might not be valid.
C - Google Data Studio might be a way to share metrics with investors so they can explore the data themselves. That is nice, but it does not satisfy business or technical requirements or solve any practical problems described in the case. Autoscaling on CPU correlates poorly with user experience.
D - Network latency is a better measure of user experience for autoscaling than CPU load, but not as good as game activity. And it does not provide detailed metrics that can be used to understand game usage patterns for marketing.
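As a sketch of what a game-activity custom metric looks like (assuming the google-cloud-monitoring client; the metric type and value are hypothetical), the backend would periodically write a time series that autoscalers and investor dashboards can then consume:

    # Sketch: publish a custom "active sessions" gauge (names hypothetical).
    import time
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    project_name = "projects/example-project"

    series = monitoring_v3.TimeSeries()
    series.metric.type = "custom.googleapis.com/game/active_sessions"
    series.resource.type = "global"

    now = time.time()
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
    )
    point = monitoring_v3.Point(
        {"interval": interval, "value": {"int64_value": 1234}}
    )
    series.points = [point]

    client.create_time_series(name=project_name, time_series=[series])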
Scenario #13
Question

How would you test a risky update to an App Engine application that requires live traffic?

A. Deploy as default temporarily, then roll it back.
B. Create a separate isolated test project and onboard users.
C. Create a second App Engine project, then redirect a subset of users.
D. Deploy a new version, use traffic splitting to test a percentage.
Scenario #13
Answer

D. Deploy a new version, use traffic splitting to test a percentage.
Scenario #13
Rationale

D - Deploying a new version, but not as default, is easily reversed. Traffic splitting enables testing with some live traffic, meeting the requirement.
A - Deploying as default moves all traffic to it.
B - Possible, but requires data synchronization and separate traffic splitting, so this is a complicated approach.
C - App Engine services are intended for hosting different service logic. Using different services would require manually configuring the consumers of services to be aware of the deployment process and managing from the consumer side who is accessing which service. A complicated approach.
Scenario #14
Question

How do you automatically and simultaneously deploy new code to each cluster?

A. Use an automation tool, such as Jenkins.
B. Change the clusters to activate federated mode.
C. Use Parallel SSH with Cloud Shell and kubectl.
D. Use Cloud Build to publish the new images.

[Diagram: Cloud Load Balancing in front of two web clusters on Google Kubernetes Engine, one in Region 1 and one in Region 2.]
Scenario #14
Answer

A. Use an automation tool, such as Jenkins.
Scenario #14
Rationale

A - Jenkins handles automation and simultaneous deployment.
B - Federated mode handles simultaneous deployment, but not automation.
C - Could work, but is over-complicated and will not scale well.
D - Cloud Build publishes to Container Registry, not to clusters.
Scenario #15
Question

A microservice has intermittent problems that cause bursts of log messages. How can you trap it for live debugging?

A. Log into a machine running the microservice and wait for the log messages.
B. Look for the error in the Error Reporting dashboard.
C. Configure the microservice to send traces to Cloud Trace.
D. Set a log metric in Cloud Logging, and alert on it past a threshold.
Scenario #15
Answer

D. Set a log metric in Cloud Logging, and alert on it past a threshold.
Scenario #15
Rationale

D - A Cloud Logging metric can identify a burst of log lines. You can set an alert on it, then connect to the machine while the problem is happening.
A - The chances of catching it on one machine are low.
B - Error Reporting won't necessarily catch the log lines unless they are stack traces in the proper format. Additionally, just because there is a pattern doesn't mean you will know exactly when and where to log in to debug.
C - Trace may tell you where time is being spent, but it won't let you home in on the exact host the problem is occurring on, because you generally only send samples of traces. There is also no alerting on traces to notify you exactly when the problem is happening.
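As a sketch of step one (assuming the google-cloud-logging client; the metric name and filter are hypothetical), create a log-based counter metric, then define an alerting policy on it in Cloud Monitoring:

    # Sketch: a log-based metric counting the microservice's error lines.
    # The filter and metric name are assumptions for illustration.
    from google.cloud import logging

    client = logging.Client()
    metric = client.metric(
        "microservice-log-burst",
        filter_='resource.type="gce_instance" AND severity>=ERROR',
        description="Counts error log lines from the microservice",
    )
    metric.create()
    # In Cloud Monitoring, alert when this metric's rate passes a threshold,
    # then connect to the affected machine while the burst is underway.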
Scenario #16
Question

A company wants penetration security testing that primarily matches an end user perspective. What action would you take?

A. Notify Google that you are going to run a penetration test.
B. Deploy scanners in the cloud and test from there.
C. Use on-prem scanners over VPN.
D. Use on-prem scanners over the public Internet.
Scenario #16
Answer

D. Use on-prem scanners over the public Internet.
Scenario #16
Rationale

D - On-prem scanners will approach from outside, and the public internet is where the users are.
A - Google doesn't require notification for this.
B - Scanners in the cloud wouldn't meet the "end user perspective".
C - VPN wouldn't match the "end user perspective".
Scenario #17
Question

A sales company runs weekly resiliency tests of the current build in a separate
environment by replaying the last holiday sales load. What can improve resiliency?

A. Apply twice the load to the test.


B. Run the resiliency tests daily instead of weekly.
C. Use preemptible instances.
D. Develop a script that mimics a zone outage and add it to the test.
Scenario #17
Answer

D. Develop a script that mimics a zone outage and add it to the test.
Scenario #17
Rationale

D - The goal is resiliency: to see that the application continues to run and "bounces back" after the outage is over. Simulating a zone outage is one way to ensure that the application can really handle the loss of a zone.
A - Applying twice the load doesn't necessarily prove resiliency. That would test scale, which might be useful for future growth planning.
B - It is not clear why running the same tests more frequently would help with resilience. It might surface issues a few days earlier, but at 7x the cost, is it worthwhile?
C - Preemptible instances would reduce the cost of the test, but they don't prove that the application is resilient.
Scenario #18
Question

Release failures keep causing rollbacks in a web application. Fixes to the QA process reduced rollbacks by 80%. What additional steps can you take?

A. Replace the platform's relational database systems with a NoSQL database.
B. Fragment the monolithic platform into microservices.
C. Remove the QA environment. Start executing canary releases.
D. Remove the platform's dependency on relational database systems.
Scenario #18
Answer

B. Fragment the monolithic platform into microservices.
Scenario #18
Rationale

B - Smaller functional units mean smaller releases with less "surface area" for problems to occur. More incremental rollouts, fewer rollbacks.
C - Canary doesn't replace QA; it should be added. Plus, QA is proven to work.
A - A NoSQL database offers no quality advantage over relational databases.
D - There is nothing inherent in a relational database that makes it impact the quality of releases.

https://www.testingexcellence.com/difference-between-greenblue-deployments-ab-testing-and-canary-releases/
Additional Questions
Scenario #19
Question

How will the application parts developed by separate project teams communicate
over RFC1918 addresses?

A. Single project, same VPC


B. Shared VPC, each project a service of the Shared VPC project
C. Parts communicate using HTTPS
D. Communicate over global load balancers, one per project
Scenario #19
Answer

B. Shared VPC, each project a service of the Shared VPC project
Scenario #19
Rationale

B - Each team has their own project but communicates securely over a single RFC1918 address space.
A - No separation.
C - Doesn't specify separate projects, therefore doesn't meet business requirements.
D - External IPs do not meet the RFC1918 addressing requirement.
Scenario #20
Question

How can you minimize the cost of storing security video files that are processed
repeatedly for 30 days?

A. Standard Storage, then move to Coldline Storage or Archive Storage after 30 days.
B. Nearline Storage, then move to Coldline Storage after 30 days.
C. Standard Storage, then move to Nearline Storage after 30 days.
D. Keep the files in Standard Storage.
Scenario #20
Answer

A. Standard Storage, then move to Coldline Storage or Archive Storage after 30 days.
Scenario #20
Rationale

A - Standard Storage for lowest access costs over the 30 days, then Coldline Storage or
Archive Storage because it is unlikely to be read after the 30 days.

B - Using Nearline Storage over the 30 days won't be cost effective because the data is
accessed too frequently. There is also a 30 day minimum storage duration.

C - Moving from Standard Storage to Nearline Storage after the 30 days isn’t as cost
effective as Coldline Storage or Archive Storage if the data is not going to be accessed that
frequently.

D - Keeping the data in Standard Storage is the least cost effective option if it is not going to
be accessed frequently after 30 days.
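A sketch of the two-stage policy (assuming the google-cloud-storage client and an existing bucket with a hypothetical name): Standard class for the 30 processing days, then a lifecycle rule that drops objects to Coldline:

    # Sketch: objects start as Standard, then move to Coldline at 30 days.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-security-video")  # assumed to exist
    bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)
    bucket.patch()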
Scenario #21
Question

A company's security team has decided to standardize on AES256 for storage device encryption. Which strategy should be used with Compute Engine instances?

A. Select SSDs rather than HDDs to ensure AES256 encryption.
B. Use the Linux dm-crypt tool for whole-disk encryption.
C. Use Customer Supplied Encryption Keys (CSEK).
D. Use OpenSSL for AES256 file encryption.
Scenario #21
Answer

A. Select SSDs rather than HDDs to ensure AES256 encryption.
Scenario #21
Rationale

A - Selection of disk type determines the default method for whole-disk encryption. HDDs use AES128 and SSDs use AES256.
B - This would be redundant with Compute Engine disk encryption.
C - Who manages the keys has nothing to do with whether it is AES128 or AES256.
D - File encryption is a different layer. The standard is for device encryption.

https://cloud.google.com/compute/docs/disks/customer-supplied-encryption
https://cloud.google.com/security/encryption-at-rest/default-encryption/

"In addition to the storage system level encryption described above, in most cases data is also encrypted at the storage device level, with at least AES128 for hard disks (HDD) and AES256 for new solid state drives (SSD), using a separate device-level key (which is different than the key used to encrypt the data at the storage level). As older devices are replaced, solely AES256 will be used for device-level encryption."
Scenario #22
Question

Which Cloud IAM roles would you assign for security auditors requiring visibility
across all projects?

A. Org viewer, project owner


B. Org viewer, project viewer
C. Org admin, project browser
D. Project owner, network admin
Scenario #22
Answer

B. Org viewer, project viewer
Scenario #22
Rationale

B - Gives read-only access across the company.
A, C, D - The other options allow them to make changes.
Scenario #23
Question

A car reservation system has long-running transactions. Which one of the following deployment methods should be avoided?

A. Execute canary releases.
B. Perform A/B testing prior to release.
C. Introduce a blue-green deployment model.
D. Introduce a pipeline deployment model.
Scenario #23
Answer

C. Introduce a blue-green deployment model.
Scenario #23
Rationale

C - Switching the load balancer from pointing at the green "good" environment to the blue "new" environment is a fast way to roll back if there is a problem during release. However, long-running transactions will be disrupted by that switch.
A - Testing the application with a few users before releasing to everyone will detect problems early and confine their impact.
B - A/B testing, with "A" having the feature and "B" without it, will detect problems before release.
D - Pipeline deployment: introducing orderly procedures into the QA process can improve the effectiveness of QA.

https://www.testingexcellence.com/difference-between-greenblue-deployments-ab-testing-and-canary-releases/
Scenario #24
Question

Implement back-out/rollback for a website with 100s of VMs. The site has frequent critical updates.

A. Create a Nearline copy of static data in Cloud Storage.
B. Create a snapshot of each VM prior to update, in case of failure.
C. Use managed instance groups with the “update-instances” command when starting a rolling update.
D. Only deploy changes using Deployment Manager templates.
Scenario #24
Answer

C. Use managed instance groups with the “update-instances” command when starting a rolling update.
Scenario #24
Rationale

C - Allows Compute Engine to handle updates. Easy management of VMs.
D - Large overhead, and a chance of version conflicts between Deployment Manager templates if an old template that running infrastructure relies on is changed.
B - Slow and expensive.
A - An unreliable recovery method; you can't roll back once the copy is overwritten.
Scenario #25
Question

Last week a region had a 1% failure rate in web tier VMs. How should you respond?

A. Monitor the application for a 5% failure rate.
B. Duplicate the application on prem to compensate for failures in the cloud.
C. Perform a root cause analysis, reviewing cloud provider and deployment details to prevent similar future failures.
D. Halt all development until the application issue can be found and fixed.
Scenario #25
Answer

C. Perform a root cause analysis, reviewing cloud provider and deployment details to prevent similar future failures.
Scenario #25
Rationale

C - Perform root cause analysis, because you don't know from the information given whether the issue had to do with the cloud provider, was in the application, or was something to do with the interface between the application and cloud resources. The goal of identifying root cause is to prevent future failures, which might include changing procedures.
A - Raising the threshold doesn't help identify the underlying issue.
B - The assumption is that the cloud is unreliable and on prem is more reliable, so it needs to act as a backup. That's a lot of work that might not be needed and still doesn't find the cause.
D - The assumption is that the application is the problem. But a 1% error rate could be within SLA for some services. It might not be the application at all. It could be a one-time issue. The information doesn't tell us if this is a recurring problem.
