06 Sample Exam Questions
Scenario #1
Question
[Diagram: web servers in Region 1 and Region 2, both pointing to an unidentified data service ("?")]
A. Cloud SQL
C. Datastore
D. Cloud Storage
Scenario #1
Answer
C. Datastore
Scenario #1
Rationale
C - Datastore serves the web tier in both regions and replicates data automatically.
C - "HTTP(S) Load Balancing has native support for the WebSocket protocol. Backends that
use WebSocket to communicate with clients can use the HTTP(S) load balancer as a front
end, for scale and availability. The load balancer does not need any additional configuration
to proxy WebSocket connections."
A and B - There is nothing inherent about websockets that requires a redesign to run on
Google Cloud.
https://cloud.google.com/load-balancing/docs/https/#websocket_proxy_support
C - "HTTP(S) Load Balancing has native support for the WebSocket protocol.
Backends that use WebSocket to communicate with clients can use the HTTP(S) load
balancer as a front end, for scale and availability. The load balancer does not need
any additional configuration to proxy WebSocket connections."
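To make the point concrete, here is a minimal sketch of a WebSocket echo backend using the third-party Python websockets package (v10.1+); the port and names are illustrative. Nothing in it is specific to the load balancer, which is exactly the rationale's point.

```python
# Minimal sketch (assumes the third-party "websockets" package, v10.1+).
# A backend like this can sit behind HTTP(S) Load Balancing unchanged;
# the load balancer proxies the WebSocket upgrade and traffic natively.
import asyncio

import websockets


async def echo(websocket):
    # Echo every client message back over the same connection.
    async for message in websocket:
        await websocket.send(message)


async def main():
    # Listen on the port the backend service forwards to (8080 is illustrative).
    async with websockets.serve(echo, "0.0.0.0", 8080):
        await asyncio.Future()  # run forever


asyncio.run(main())
```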
Scenario #3
Question
Which solution is required to trigger a Cloud Function, so it can ingest and process an image in the image tagging pipeline?
[Diagram: uploaded images arrive in Cloud Storage; "?" triggers Cloud Functions, which call the Cloud Vision API for image tagging; processed images return to Cloud Storage, and metadata flows through Pub/Sub and Dataflow into Cloud SQL]
A. Datastore
B. Dataflow
C. Pub/Sub
D. Cloud Bigtable
Scenario #3
Answer
C. Pub/Sub
Scenario #3
Rationale
C - Cloud Storage upload events can be published to Pub/Sub, which triggers a Cloud Function to ingest and process the image.
B - Dataflow would have nothing to do here but receive an image and call a Cloud Function.
A - Datastore is not for storing images.
D - Cloud Bigtable is not for storing images.
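As a sketch of option C (not from the deck): a Pub/Sub-triggered Cloud Function (1st gen, Python runtime) that receives the Cloud Storage upload notification and labels the image with the Vision API. The function name and payload fields assume the standard Cloud Storage notification format.

```python
# Hypothetical sketch of the pipeline's ingest step: a Pub/Sub-triggered
# Cloud Function that receives a Cloud Storage upload notification and
# labels the image via the Cloud Vision API.
import base64
import json

from google.cloud import vision

vision_client = vision.ImageAnnotatorClient()


def tag_image(event, context):
    """Background function entry point for a Pub/Sub trigger."""
    # Cloud Storage notifications arrive as base64-encoded JSON.
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    gcs_uri = f"gs://{payload['bucket']}/{payload['name']}"

    # Ask the Vision API for label annotations on the uploaded image.
    image = vision.Image(source=vision.ImageSource(image_uri=gcs_uri))
    response = vision_client.label_detection(image=image)

    for label in response.label_annotations:
        print(f"{gcs_uri}: {label.description} ({label.score:.2f})")
```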
Scenario #4
Question
How would you store data to be accessed once a month and not needed after five years?
Scenario #5
Rationale
C - "Device data captured by Cloud IoT Core gets published to Pub/Sub."
A - Cloud IoT Core does not publish to other services, and it doesn't store data.
B - Pub/Sub does not do device management.
D - In theory, an App Engine application could duplicate the functions of Cloud IoT Core, but since Cloud IoT Core only publishes to Pub/Sub, in position 2 it would not communicate with either Cloud Functions or Dataflow.
https://cloud.google.com/iot-core/
https://cloud.google.com/solutions/iot/ (Pub/Sub's role in IoT)
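For context, a minimal sketch of the consuming side of such a pipeline: pulling the device data that lands on Pub/Sub. The project and subscription IDs are illustrative.

```python
# Minimal sketch: consuming device telemetry from the Pub/Sub subscription
# that Cloud IoT Core publishes into. IDs are illustrative.
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

project_id = "my-project"
subscription_id = "device-telemetry-sub"

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)


def callback(message):
    # Each message carries one telemetry payload from a device.
    print(f"Received: {message.data!r}")
    message.ack()


streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull_future.result(timeout=30)  # block briefly for the demo
except TimeoutError:
    streaming_pull_future.cancel()
```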
Scenario #6
Question
Which service would you choose for a multi-petabyte database for analysts who only know SQL, which must be available 24 x 7?
A. Cloud Storage
B. Cloud SQL
C. BigQuery
D. Datastore
Scenario #6
Answer
C. BigQuery
Scenario #6
Rationale
C - BigQuery's SLA is 99.9%, meeting the uptime requirement, and it has a SQL interface.
A - Cloud Storage has no SQL interface.
B - Cloud SQL has the SLA and SQL, but not the capacity.
D - Datastore has no SQL interface.
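A minimal sketch of what the analysts' workflow looks like (the table name is hypothetical): plain SQL against BigQuery, with no infrastructure for them to manage.

```python
# Minimal sketch: analysts who know SQL query BigQuery directly.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT channel, COUNT(*) AS orders
    FROM `my-project.sales.orders`  -- hypothetical multi-petabyte table
    GROUP BY channel
    ORDER BY orders DESC
    LIMIT 10
"""

# Iterating the query job waits for and streams the results.
for row in client.query(query):
    print(row["channel"], row["orders"])
```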
Scenario #7
Question
[Diagram: a CI/CD pipeline with Jenkins on Compute Engine, an unidentified service ("?") producing images in Container Registry, and deployment to Google Kubernetes Engine]
A. Pub/Sub
B. Cloud Build (formerly Container Builder)
C. Cloud Storage
D. Dataproc
Scenario #7
Answer
B. Cloud Build
Scenario #7
Rationale
B - Cloud Build builds Docker images from source repositories.
A, C, D - None of the other services build Docker images.
Scenario #8
Question
How would you reproduce a VM's root disk on a new VM, possibly in a different project or region?
A. Use Linux dd and netcat to stream the root disk to the new VM.
B. Snapshot the root disk and select it for the new VM.
C. Create an image from the root disk with Linux dd, create a disk from the image, and use it in the new VM.
D. Snapshot the root disk, create an image, and use the image for the new VM root disk.
Scenario #8
Answer
D. Snapshot the root disk, create an image, and use the image for the new VM root disk.
Scenario #8
Rationale
D - Works across projects and regions, and it is a simple and reliable method.
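A sketch of option D with the google-cloud-compute client (project, zone, and all resource names are illustrative): snapshot the root disk, create an image from the snapshot, then create the new VM's boot disk from the image.

```python
# Hedged sketch of option D using google-cloud-compute (names illustrative).
from google.cloud import compute_v1

project = "my-project"
zone = "us-central1-a"

# 1. Snapshot the existing VM's root disk.
snapshots = compute_v1.SnapshotsClient()
snapshot = compute_v1.Snapshot(
    name="web-root-snap",
    source_disk=f"projects/{project}/zones/{zone}/disks/web-root-disk",
)
snapshots.insert(project=project, snapshot_resource=snapshot).result()

# 2. Turn the snapshot into an image; images work across projects and regions.
images = compute_v1.ImagesClient()
image = compute_v1.Image(
    name="web-root-image",
    source_snapshot=f"projects/{project}/global/snapshots/web-root-snap",
)
images.insert(project=project, image_resource=image).result()

# 3. Create the new VM's boot disk from the image (in any zone, or in any
#    project the image is shared with).
disks = compute_v1.DisksClient()
disk = compute_v1.Disk(
    name="web-root-disk-copy",
    source_image=f"projects/{project}/global/images/web-root-image",
)
disks.insert(project=project, zone=zone, disk_resource=disk).result()
```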
Helicopter Racing security has locked out SSH access to production VMs. How can operations manage the VMs?
C - The operations team doesn't actually need SSH access to manage VMs. All it needs is Cloud Shell with the Cloud SDK and gcloud tools. Cloud Shell provides all the tools for managing Compute Engine instances. In this case the assumption that SSH access is needed is incorrect.
A - A VPN is a way to connect from remote to the internal IP of an instance. If SSH is blocked everywhere, this work-around won't help.
B - Developing an application that uses the Cloud API would be redundant with the gcloud command-line tool.
D - An application that provides temporary access to SSH simply circumvents the security practice.
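A sketch of what "managing without SSH" can look like in practice, through the Compute Engine API (the same surface gcloud drives); project, zone, and instance names are illustrative.

```python
# Hedged sketch: routine VM management through the Compute Engine API,
# which is what gcloud in Cloud Shell uses - no SSH session involved.
from google.cloud import compute_v1

project, zone = "my-project", "us-central1-a"
instances = compute_v1.InstancesClient()

# Inventory the production fleet and its states.
for instance in instances.list(project=project, zone=zone):
    print(instance.name, instance.status)

# Restart a misbehaving VM without logging in to it.
instances.reset(project=project, zone=zone, instance="prod-web-1").result()
```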
C - TerramEarth already has 200TB+ of data and is in a growth phase. Therefore they must be concerned that the solution will be supportable as they "undergo the next wave of transformations in our industry". Also, TerramEarth seeks a competitive advantage through "incremental innovations", which can come from data insights using BigQuery and AI Platform.
B and D - TerramEarth is not price sensitive. It is more concerned with facing competitive threats.
A - Google's years of experience might be a persuasive reason for TerramEarth to
choose Google Cloud, but time with any specific technology is not a stated business
requirement.
Scenario #12
Question
How can MountKirk Games meet its scaling requirements while providing insights to investors?
B - Cloud Monitoring custom metrics can be crafted to expose specific game activities, which can be useful for autoscaling and provide a more detailed source of indicators for the targeted marketing investors require. Cloud Operations is a fully managed service.
Technical Requirements:
Game Backend - "Dynamically scale up or down based on game activity."
Game Analytics - "Dynamically scale up or down based on game activity."
Game Analytics - "Use only fully managed services."
A - The current game statistics are not real-time, but loaded into MySQL by ETL, so they cannot be used for autoscaling. Using BigQuery for analysis may provide better insights, but since game activity is disconnected from resource provisioning (there is no feedback loop), the marketing insights might not be valid.
C - Google Data Studio might be a way to share metrics with investors so they can explore the data themselves. That is nice, but it does not satisfy business or technical requirements or solve any practical problems described in the case. Autoscaling on CPU has a poor correlation to user experience.
D - Network latency is a better measure of user experience for autoscaling than CPU load, but not as good as game activity. And it does not provide detailed metrics that can be used to understand game usage patterns for marketing.
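A sketch of option B's mechanics (the metric type, project ID, and value are illustrative): the game backend publishes a custom metric that both an autoscaling policy and investor-facing dashboards can read.

```python
# Hedged sketch: writing a custom "active players" gauge to Cloud Monitoring.
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # illustrative

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/game/active_players"  # illustrative
series.resource.type = "global"

now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
)
series.points = [
    monitoring_v3.Point({"interval": interval, "value": {"int64_value": 1742}})
]

# An autoscaler policy or dashboard can then key off this metric type.
client.create_time_series(name=project_name, time_series=[series])
```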
Scenario #13
Question
How do you test a risky update to an App Engine application when the test requires live traffic?
D - Deploying a new version, but not as default, is easily reversed. Traffic splitting enables testing with some live traffic, meeting the requirement.
A - Deploying as default moves all traffic to it.
B - Possible, but requires data synchronization and separate traffic splitting. So this is a complicated approach.
C - App Engine services are intended for hosting different service logic. Using different services would require manual configuration of the consumers of services to be aware of the deployment process and manage from the consumer side who is accessing which service. A complicated approach.
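A sketch of the traffic split itself, via the App Engine Admin API with the google-api-python-client (project and version IDs are illustrative); gcloud app services set-traffic does the same thing from the command line.

```python
# Hedged sketch: route 5% of live traffic to a newly deployed,
# non-default version. IDs are illustrative.
from googleapiclient import discovery

appengine = discovery.build("appengine", "v1")

appengine.apps().services().patch(
    appsId="my-project",   # the App Engine app ID is the project ID
    servicesId="default",
    updateMask="split",
    body={
        "split": {
            # Pin users to a version so each user sees consistent behavior.
            "shardBy": "IP",
            "allocations": {"stable-v1": 0.95, "risky-v2": 0.05},
        }
    },
).execute()
```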
Scenario #14
Question
[Diagram: Cloud Load Balancing in front of multiple clusters]
How do you automatically and simultaneously deploy new code to each cluster?
Scenario #15
Question
A microservice has intermittent problems that produce bursts of log entries. How can you trap it for live debugging?
A. Log into a machine with the microservice and wait for the log messages.
B. Look for the error in the Error Reporting dashboard.
C. Configure the microservice to send traces to Cloud Trace.
D. Set a log metric in Cloud Logging and alert on it past a threshold.
Scenario #15
Answer
D. Set a log metric in Cloud Logging and alert on it past a threshold.
Scenario #15
Rationale
D - A Cloud Logging metric can identify a burst of log lines. You can set an alert, then connect to the machine while the problem is happening.
A - The chance of catching it on one machine is low.
B - Error Reporting won't necessarily catch the log lines unless they are stack traces in the proper format. Additionally, just because there is a pattern doesn't mean you will know exactly when and where to log in to debug.
C - Trace may tell you where time is being spent, but it won't let you home in on the exact host where the problem is occurring, because you generally only send samples of traces. There is also no alerting on traces to notify you exactly when the problem is happening.
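A sketch of option D's first half (the metric name and filter are assumptions): define a log-based metric counting the bursty lines, then pair it with a Cloud Monitoring alerting policy on a rate threshold so you are notified while the burst is live.

```python
# Hedged sketch: a log-based metric that counts the microservice's bursty
# log lines. Metric name and filter are illustrative.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()

metric = client.metric(
    "svc-log-burst",  # hypothetical metric name
    filter_='resource.type="gce_instance" AND severity>=ERROR',
    description="Bursts of error lines from the flaky microservice",
)
if not metric.exists():
    metric.create()
```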
Scenario #16
Question
A company wants security penetration testing that primarily matches an end user perspective. What action would you take?
D - On-prem scanners will approach from outside, and over the public internet is where the users are.
A - Google doesn't require notification for this.
B - Scanners in the cloud wouldn't meet the "end user perspective."
C - A VPN wouldn't match the "end user perspective."
Scenario #17
Question
A sales company runs weekly resiliency tests of the current build in a separate environment by replaying the last holiday sales load. What can improve resiliency?
D - The goal is resiliency: to see that the application continues to run and "bounces back" after the outage is over. Simulating a zone outage is one way to ensure that the application can really handle the loss of a zone.
A - Applying twice the load doesn't necessarily prove resiliency. That would be to test scale, which might be useful for future growth planning.
B - It is not clear why running the same tests more frequently would help with resilience. It might surface issues a few days earlier, but at 7x the cost is it worthwhile?
C - Preemptible instances would reduce the cost of the test, but they don't prove that the application is resilient.
Scenario #18
Rationale
B - Smaller functional units mean smaller releases with less "surface area" for problems to occur. More incremental rollouts. Fewer rollbacks.
https://www.testingexcellence.com/difference-between-greenblue-deployments-ab-testing-and-canary-releases/
C - Canary doesn't replace QA. It should be added. Plus, QA is proven to work.
A - A NoSQL database offers no quality advantage over relational databases.
D - There is nothing inherent in a relational database that makes it impact the quality of releases.
Additional Questions
Scenario #19
Question
How will the application parts developed by separate project teams communicate over RFC1918 addresses?
B - Each team has its own project but communicates securely over a single RFC1918 address space.
A - No separation.
C - Doesn't specify separate projects, therefore doesn't meet business requirements.
D - External IPs do not meet the RFC1918 addressing requirement.
Scenario #20
Question
How can you minimize the cost of storing security video files that are processed
repeatedly for 30 days?
A. Standard Storage, then move to Coldline Storage or Archive Storage after 30 days.
B. Nearline Storage, then move to Coldline Storage after 30 days.
C. Standard Storage, then move to Nearline Storage after 30 days.
D. Keep the files in Standard Storage.
Scenario #20
Answer
A. Standard Storage, then move to Coldline Storage or Archive Storage after 30 days.
Scenario #20
Rationale
A - Standard Storage for lowest access costs over the 30 days, then Coldline Storage or
Archive Storage because it is unlikely to be read after the 30 days.
B - Using Nearline Storage over the 30 days won't be cost effective because the data is accessed too frequently. There is also a 30-day minimum storage duration.
C - Moving from Standard Storage to Nearline Storage after the 30 days isn't as cost effective as Coldline Storage or Archive Storage if the data is not going to be accessed that frequently.
D - Keeping the data in Standard Storage is the least cost effective option if it is not going to
be accessed frequently after 30 days.
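A sketch of the winning policy expressed as a bucket lifecycle rule (the bucket name is hypothetical; Archive Storage would use "ARCHIVE" instead).

```python
# Hedged sketch: a lifecycle rule moving objects from Standard Storage to
# Coldline Storage once they are 30 days old.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("security-video-archive")  # hypothetical bucket

# After 30 days the videos are rarely read again, so demote them.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)
bucket.patch()  # persist the updated lifecycle configuration
```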
Scenario #21
Question
A company’s security team has decided to standardize on AES256 for storage device encryption. Which strategy should be used with Compute Engine instances?
A - Selection of disk type determines the default method for whole-disk encryption. HDDs use AES128 and SSDs use AES256.
B - This would be redundant with Compute Engine disk encryption.
C - Who manages the keys has nothing to do with whether it is AES128 or AES256.
D - File encryption is a different layer. The standard is for device encryption.
https://cloud.google.com/compute/docs/disks/customer-supplied-encryption
https://cloud.google.com/security/encryption-at-rest/default-encryption/
"In addition to the storage system level encryption described above, in most cases data is also encrypted at the storage device level, with at least AES128 for hard disks (HDD) and AES256 for new solid state drives (SSD), using a separate device-level key (which is different than the key used to encrypt the data at the storage level). As older devices are replaced, solely AES256 will be used for device-level encryption."
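A sketch of option A in practice (project, zone, and disk name are illustrative): provisioning SSD persistent disks, since the SSD disk type gets AES256 device-level encryption by default.

```python
# Hedged sketch: choose the pd-ssd disk type so device-level encryption
# defaults to AES256. Names and zone are illustrative.
from google.cloud import compute_v1

project, zone = "my-project", "us-central1-a"

disk = compute_v1.Disk(
    name="aes256-data-disk",
    size_gb=100,
    # The proto field "type" is exposed as "type_" in the Python client.
    type_=f"zones/{zone}/diskTypes/pd-ssd",
)
compute_v1.DisksClient().insert(
    project=project, zone=zone, disk_resource=disk
).result()
```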
Scenario #22
Question
Which Cloud IAM roles would you assign for security auditors requiring visibility across all projects?
Scenario #23
Rationale
C - Switching the load balancer from pointing at the green "good" environment to the blue "new" environment is a fast way to roll back if there is a problem during release. However, long-running transactions will be disrupted by that switch.
A - Testing the application with a few users before releasing to everyone will detect problems early and confine their impact.
B - Performing testing of feature "A" with the feature and "B" without the feature will detect problems before release.
D - Pipeline deployment - introducing orderly procedures into the QA process can improve the effectiveness of QA.
https://www.testingexcellence.com/difference-between-greenblue-deployments-ab-testing-and-canary-releases/
Scenario #24
Question
Implement back-out/rollback for a website with hundreds of VMs. The site has frequent critical updates.
D - Large overhead, and a chance of version conflicts between Deployment Manager templates if an old template that running infrastructure relies on is changed.
B - Slow and expensive.
A - An unreliable recovery method. You can't roll back once the copy is overwritten.
Scenario #25
Question
Last week a region had a 1% failure rate in web tier VMs. How should you respond?
C - Perform root cause analysis, because you don't know from the information given whether the issue had to do with the cloud provider, was in the application, or had something to do with the interface between the application and cloud resources. The goal of identifying root cause is to prevent future failures, which might include changing procedures.
A - Raising the threshold doesn't help identify the underlying issue.
B - The assumption is that the cloud is unreliable and on-prem is more reliable, so it needs to act as a backup. That's a lot of work that might not be needed and still doesn't find the cause.
D - The assumption is that the application is the problem. But a 1% error rate could be within SLA for some services. It might not be the application at all. It could be a one-time issue. The information doesn't tell us if this is a recurring problem.