GCP Ace Questions
1. You are building a backend service for an ecommerce platform that will persist transaction
data from mobile and web clients. After the platform is launched, you expect a large volume of
global transactions. Your business team wants to run SQL queries to analyze the data. You
need to build a highly available and scalable data store for the platform. What should you do?
● A. Create a Service Account in your own project, and grant this Service Account access
to BigQuery in your project.
● B. Create a Service Account in your own project, and ask the partner to grant this
Service Account access to BigQuery in their project.
● C. Ask the partner to create a Service Account in their project, and have them give the
Service Account access to BigQuery in their project.
● D. Ask the partner to create a Service Account in their project, and grant their Service
Account access to the BigQuery dataset in your project
2. Your preview application, deployed on a single-zone Google Kubernetes Engine (GKE) cluster
in us-central1, has gained popularity. You are now ready to make the application generally
available. You need to deploy the application to production while ensuring high availability and
resilience. You also want to follow Google-recommended practices. What should you do?
● A. Use the gcloud container clusters create command with the options --enable-multi-
networking and --enable-autoscaling to create an autoscaling zonal cluster and deploy
the application to it.
● B. Use the gcloud container clusters create-auto command to create an autopilot
cluster and deploy the application to it.
● C. Use the gcloud container clusters update command with the option --region us-
central1 to update the cluster and deploy the application to it.
● D. Use the gcloud container clusters update command with the option --node-locations
us-central1-a,us-central1-b to update the cluster and deploy the application to the
nodes.
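For reference, the Autopilot approach in option B can be sketched with the gcloud CLI roughly as follows; the cluster name, region, and manifest file are placeholders:
# Create a regional Autopilot cluster; Google manages nodes, upgrades, and scaling
gcloud container clusters create-auto prod-cluster --region=us-central1
# Fetch credentials and deploy the existing application manifest
gcloud container clusters get-credentials prod-cluster --region=us-central1
kubectl apply -f deployment.yaml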
3. Your manager asks you to deploy a workload to a Kubernetes cluster. You are not sure of the
workload's resource requirements or how the requirements might vary depending on usage
patterns, external dependencies, or other factors. You need a solution that makes cost-
effective recommendations regarding CPU and memory requirements, and allows the workload
to function consistently in any situation. You want to follow Google-recommended practices.
What should you do?
● A. Configure the Horizontal Pod Autoscaler for availability, and configure the cluster
autoscaler for suggestions.
● B. Configure the Horizontal Pod Autoscaler for availability, and configure the Vertical
Pod Autoscaler recommendations for suggestions.
● C. Configure the Vertical Pod Autoscaler recommendations for availability, and
configure the Cluster autoscaler for suggestions.
● D. Configure the Vertical Pod Autoscaler recommendations for availability, and
configure the Horizontal Pod Autoscaler for suggestions.
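As a rough illustration of the Vertical Pod Autoscaler recommendation mode the options refer to (assuming VPA is enabled on the cluster; the Deployment name my-workload is a placeholder):
# Define a VPA object in recommendation-only mode (updateMode "Off"): it suggests
# CPU/memory requests without evicting pods
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-workload-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-workload
  updatePolicy:
    updateMode: "Off"
EOF
# Inspect the generated recommendations
kubectl describe vpa my-workload-vpa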
4. You need to migrate invoice documents stored on-premises to Cloud Storage. The
documents have the following storage requirements:
• Up to five revisions of the same invoice document must be stored, to allow for corrections.
• Documents older than 365 days should be moved to lower cost storage tiers.
What should you do?
● A. Enable retention policies on the bucket, and use Cloud Scheduler to invoke a Cloud
Function to move or delete your documents based on their metadata.
● B. Enable retention policies on the bucket, use lifecycle rules to change the storage
classes of the objects, set the number of versions, and delete old files.
● C. Enable object versioning on the bucket, and use Cloud Scheduler to invoke a Cloud
Functions instance to move or delete your documents based on their metadata.
● D. Enable object versioning on the bucket, use lifecycle conditions to change the
storage class of the objects, set the number of versions, and delete old files.
5. You have a Bigtable instance that consists of three nodes that store personally identifiable
information (PII) data. You need to log all read or write operations, including any metadata or
configuration reads of this database table, in your company’s Security Information and Event
Management (SIEM) system. What should you do?
● A. • Navigate to Cloud Monitoring in the Google Cloud console, and create a custom
monitoring job for the Bigtable instance to track all changes.
• Create an alert by using webhook endpoints, with the SIEM endpoint as a receiver.
● B. • Navigate to the Audit Logs page in the Google Cloud console, and enable Admin
Write logs for the Bigtable instance.
• Create a Cloud Functions instance to export logs from Cloud Logging to your SIEM.
● C. • Navigate to the Audit Logs page in the Google Cloud console, and enable Data
Read, Data Write and Admin Read logs for the Bigtable instance.
• Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a
subscriber to the topic.
● D. • Install the Ops Agent on the Bigtable instance during configuration.
• Create a service account with read permissions for the Bigtable instance.
• Create a custom Dataflow job with this service account to export logs to the
company’s SIEM system.
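The sink portion of option C could look roughly like this, assuming a project my-project and a topic siem-export that the SIEM subscribes to (Data Access audit logs themselves are enabled on the Audit Logs page):
# Route Bigtable audit log entries (data access and admin activity) to a Pub/Sub topic
gcloud logging sinks create bigtable-audit-to-siem \
  pubsub.googleapis.com/projects/my-project/topics/siem-export \
  --log-filter='resource.type="bigtable_instance" AND logName:"cloudaudit.googleapis.com"'
# Then grant the sink's writer identity (shown by "gcloud logging sinks describe
# bigtable-audit-to-siem") permission to publish to the topic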
6. An application generates daily reports in a Compute Engine virtual machine (VM). The VM is
in the project corp-iot-insights. Your team operates only in the project corp-aggregate-reports
and needs a copy of the daily exports in the bucket corp-aggregate-reports-storage. You want
to configure access so that the daily reports from the VM are available in the bucket corp-
aggregate-reports-storage and use as few steps as possible while following Google-
recommended practices. What should you do?
7. Your company wants to migrate their on-premises workloads to Google Cloud. The current
on-premises workloads consist of:
You need to keep operational costs low. You want to follow Google-recommended practices to
migrate these workloads to serverless solutions on Google Cloud. What should you do?
● A. Migrate the web application to App Engine and the backend API to Cloud Run. Use
Cloud Tasks to run your background job on Compute Engine.
● B. Migrate the web application to App Engine and the backend API to Cloud Run. Use
Cloud Tasks to run your background job on Cloud Run.
● C. Run the web application on a Cloud Storage bucket and the backend API on Cloud
Run. Use Cloud Tasks to run your background job on Cloud Run.
● D. Run the web application on a Cloud Storage bucket and the backend API on Cloud
Run. Use Cloud Tasks to run your background job on Compute Engine.
8. You need to manage a third-party application that will run on a Compute Engine instance.
Other Compute Engine instances are already running with default configuration. Application
installation files are hosted on Cloud Storage. You need to access these files from the new
instance without allowing other virtual machines (VMs) to access these files. What should you
do?
● A. Create the instance with the default Compute Engine service account. Grant the
service account permissions on Cloud Storage.
● B. Create the instance with the default Compute Engine service account. Add metadata
to the objects on Cloud Storage that matches the metadata on the new instance.
● C. Create a new service account and assign this service account to the new instance.
Grant the service account permissions on Cloud Storage.
● D. Create a new service account and assign this service account to the new instance.
Add metadata to the objects on Cloud Storage that matches the metadata on the new
instance.
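Option C could be sketched roughly as follows; the service account, bucket, and instance names are placeholders:
# Dedicated service account used only by the new instance
gcloud iam service-accounts create installer-sa --display-name="3rd-party installer"
# Allow only this service account to read the installation files
gcloud storage buckets add-iam-policy-binding gs://installer-files \
  --member="serviceAccount:installer-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
# Attach the service account to the new instance
gcloud compute instances create app-vm --zone=us-central1-a \
  --service-account=installer-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform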
9. You want to deploy a new containerized application into Google Cloud by using a Kubernetes
manifest. You want to have full control over the Kubernetes deployment, and at the same time,
you want to minimize configuring infrastructure. What should you do?
10. Your company completed the acquisition of a startup and is now merging the IT systems of
both companies. The startup had a production Google Cloud project in their organization. You
need to move this project into your organization and ensure that the project is billed to your
organization. You want to accomplish this task with minimal effort. What should you do?
● A. Use the projects.move method to move the project to your organization. Update the
billing account of the project to that of your organization.
● B. Ensure that you have an Organization Administrator Identity and Access
Management (IAM) role assigned to you in both organizations. Navigate to the
Resource Manager in the startup’s Google Cloud organization, and drag the project to
your company's organization.
● C. Create a Private Catalog for the Google Cloud Marketplace, and upload the resources
of the startup's production project to the Catalog. Share the Catalog with your
organization, and deploy the resources in your company’s project.
● D. Create an infrastructure-as-code template for all resources in the project by using
Terraform, and deploy that template to a new project in your organization. Delete the
project from the startup’s Google Cloud organization.
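As a sketch of option A (organization ID, billing account ID, and project ID are placeholders; older gcloud versions may require the beta component for the move command):
# Move the startup's project under your organization
gcloud projects move startup-prod-project --organization=123456789012
# Point the project at your organization's billing account
gcloud billing projects link startup-prod-project \
  --billing-account=0X0X0X-0X0X0X-0X0X0X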
11. Your continuous integration and delivery (CI/CD) server can’t execute Google Cloud actions
in a specific project because of permission issues. You need to validate whether the used
service account has the appropriate roles in the specific project. What should you do?
● A. Open the Google Cloud console, and check the Identity and Access Management
(IAM) roles assigned to the service account at the project or inherited from the folder or
organization levels.
● B. Open the Google Cloud console, and check the organization policies.
● C. Open the Google Cloud console, and run a query to determine which resources this
service account can access.
● D. Open the Google Cloud console, and run a query of the audit logs to find permission
denied errors for this service account.
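One way to perform the check described in option A from the CLI, assuming the CI/CD service account ci-cd@my-project.iam.gserviceaccount.com:
# List every role granted to the service account at the project level
gcloud projects get-iam-policy my-project \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:ci-cd@my-project.iam.gserviceaccount.com" \
  --format="table(bindings.role)"
# Repeat with "gcloud resource-manager folders get-iam-policy" and
# "gcloud organizations get-iam-policy" to see inherited roles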
12. Your company has embraced a hybrid cloud strategy where some of the applications are
deployed on Google Cloud. A Virtual Private Network (VPN) tunnel connects your Virtual Private
Cloud (VPC) in Google Cloud with your company's on-premises network. Multiple applications
in Google Cloud need to connect to an on-premises database server, and you want to avoid
having to change the IP configuration in all of your applications when the IP of the database
changes.
What should you do?
● A. Configure Cloud NAT for all subnets of your VPC to be used when egressing from the
VM instances.
● B. Create a private zone on Cloud DNS, and configure the applications with the DNS
name.
● C. Configure the IP of the database as custom metadata for each instance, and query
the metadata server.
● D. Query the Compute Engine internal DNS from the applications to retrieve the IP of the
database.
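Option B could be sketched as follows; the zone name, DNS name, network, and IP address are placeholders (older CLI versions use the record-sets transaction workflow instead of record-sets create):
# Private zone visible only to the VPC
gcloud dns managed-zones create onprem-zone \
  --dns-name="corp.internal." --visibility=private \
  --networks=my-vpc --description="On-prem service names"
# A record for the on-prem database; only this record changes when the DB IP changes
gcloud dns record-sets create db.corp.internal. \
  --zone=onprem-zone --type=A --ttl=300 --rrdatas=192.168.10.5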
13. Your company has multiple projects linked to a single billing account in Google Cloud. You
need to visualize the costs with specific metrics that should be dynamically calculated based
on company-specific criteria. You want to automate the process. What should you do?
● A. In the Google Cloud console, visualize the costs related to the projects in the Reports
section.
● B. In the Google Cloud console, visualize the costs related to the projects in the Cost
breakdown section.
● C. In the Google Cloud console, use the export functionality of the Cost table. Create a
Looker Studio dashboard on top of the CSV export.
● D. Configure Cloud Billing data export to BigQuery for the billing account. Create a
Looker Studio dashboard on top of the BigQuery export.
14. Your company is running a three-tier web application on virtual machines that use a MySQL
database. You need to create an estimated total cost of cloud infrastructure to run this
application on Google Cloud instances and Cloud SQL. What should you do?
15. You are running a data warehouse on BigQuery. A partner company is offering a
recommendation engine based on the data in your data warehouse. The partner company is
also running their application on Google Cloud. They manage the resources in their own project,
but they need access to the BigQuery dataset in your project. You want to provide the partner
company with access to the dataset. What should you do?
● A. Create a Service Account in your own project, and grant this Service Account access
to BigQuery in your project.
● B. Create a Service Account in your own project, and ask the partner to grant this
Service Account access to BigQuery in their project.
● C. Ask the partner to create a Service Account in their project, and have them give the
Service Account access to BigQuery in their project.
● D. Ask the partner to create a Service Account in their project, and grant their Service
Account access to the BigQuery dataset in your project.
16. You have just created a new project which will be used to deploy a globally distributed
application. You will use Cloud Spanner for data storage. You want to create a Cloud Spanner
instance. You want to perform the first step in preparation of creating the instance. What
should you do?
17. You are a Google Cloud organization administrator. You need to configure organization
policies and log sinks on Google Cloud projects that cannot be removed by project users to
comply with your company's security policies. The security policies are different for each
company department. Each company department has a user with the Project Owner role
assigned to their projects. What should you do?
● B. Use a standard naming convention for projects that includes the department
name. Configure both organization policies and log sinks on the projects.
● C. Organize projects under folders for each department. Configure both organization
policies and log sinks on the folders.
● D. Organize projects under folders for each department. Configure organization policies
on the organization and log sinks on the folders.
18. You are planning to migrate your on-premises data to Google Cloud. The data includes:
● A. Use gcloud storage for the video files, Dataflow for the data warehouse data, and
Storage Transfer Service for the PNG files.
● B. Use Transfer Appliance for the videos, BigQuery Data Transfer Service for the data
warehouse data, and Storage Transfer Service for the PNG files.
● C. Use Storage Transfer Service for the video files, BigQuery Data Transfer Service for
the data warehouse data, and Storage Transfer Service for the PNG files.
● D. Use Cloud Data Fusion for the video files, Dataflow for the data warehouse data, and
Storage Transfer Service for the PNG files.
19. You need to deploy a single stateless web application with a web interface and multiple
endpoints. For security reasons, the web application must be reachable from an internal IP
address from your company's private VPC and on-premises network. You also need to update
the web application multiple times per day with minimal effort and want to manage a minimal
amount of cloud infrastructure. What should you do?
● A. Deploy the web application on Google Kubernetes Engine standard edition with an
internal ingress.
● B. Deploy the web application on Cloud Run with Private Google Access configured.
● C. Deploy the web application on Cloud Run with Private Service Connect configured.
● D. Deploy the web application to GKE Autopilot with Private Google Access configured.
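For context, restricting a Cloud Run service to internal traffic can be sketched like this; the ingress setting shown is one way to keep a service off the public internet, and the image URL and region are placeholders:
# Deploy to Cloud Run and restrict ingress to internal traffic
gcloud run deploy web-app \
  --image=us-central1-docker.pkg.dev/my-project/apps/web-app:latest \
  --region=us-central1 --ingress=internal --no-allow-unauthenticated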
20. You are migrating a business critical application from your local data center into Google
Cloud. As part of your high-availability strategy, you want to ensure that any data used by the
application will be immediately available if a zonal failure occurs. What should you do?
● A. Store the application data on a zonal persistent disk. Create a snapshot schedule for
the disk. If an outage occurs, create a new disk from the most recent snapshot and
attach it to a new VM in another zone.
● B. Store the application data on a zonal persistent disk. If an outage occurs, create an
instance in another zone with this disk attached.
● C. Store the application data on a regional persistent disk. Create a snapshot schedule
for the disk. If an outage occurs, create a new disk from the most recent snapshot and
attach it to a new VM in another zone.
● D. Store the application data on a regional persistent disk. If an outage occurs, create
an instance in another zone with this disk attached.
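Option D's regional persistent disk could be provisioned and, after a zonal outage, re-attached roughly as follows (disk and instance names, sizes, and zones are placeholders):
# Regional disk synchronously replicated across two zones
gcloud compute disks create app-data --region=us-central1 \
  --replica-zones=us-central1-a,us-central1-b --size=200GB
# After a failure in us-central1-a, attach the same disk to a VM in us-central1-b
gcloud compute instances attach-disk standby-vm --zone=us-central1-b \
  --disk=app-data --disk-scope=regional --force-attach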
21. Your web application is hosted on Cloud Run and needs to query a Cloud SQL database.
Every morning during a traffic spike, you notice API quota errors in Cloud SQL logs. The project
has already reached the maximum API quota. You want to make a configuration change to
mitigate the issue. What should you do?
● A. Modify the minimum number of Cloud Run instances.
● B. Use traffic splitting.
● C. Modify the maximum number of Cloud Run instances.
● D. Set a minimum concurrent requests environment variable for the application.
22. You want to host your video encoding software on Compute Engine. Your user base is
growing rapidly, and users need to be able to encode their videos at any time without
interruption or CPU limitations. You must ensure that your encoding solution is highly available,
and you want to follow Google-recommended practices to automate operations. What should
you do?
● A. Deploy your solution on multiple standalone Compute Engine instances, and increase
the number of existing instances when CPU utilization on Cloud Monitoring reaches a
certain threshold.
● B. Deploy your solution on multiple standalone Compute Engine instances, and replace
existing instances with high-CPU instances when CPU utilization on Cloud Monitoring
reaches a certain threshold.
● C. Deploy your solution to an instance group, and increase the number of available
instances whenever you see high CPU utilization in Cloud Monitoring.
● D. Deploy your solution to an instance group, and set the autoscaling based on CPU
utilization.
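Option D could be sketched with a regional managed instance group and CPU-based autoscaling; the template and group names, thresholds, and sizes are placeholders:
# Managed instance group spread across zones in the region
gcloud compute instance-groups managed create encoder-mig \
  --region=us-central1 --template=encoder-template --size=3
# Autoscale on CPU utilization
gcloud compute instance-groups managed set-autoscaling encoder-mig \
  --region=us-central1 --min-num-replicas=3 --max-num-replicas=20 \
  --target-cpu-utilization=0.75 --cool-down-period=120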
23. You are building a backend service for an ecommerce platform that will persist transaction
data from mobile and web clients. After the platform is launched, you expect a large volume of
global transactions. Your business team wants to run SQL queries to analyze the data. You
need to build a highly available and scalable data store for the platform. What should you do?
24. Your company wants to migrate their on-premises workloads to Google Cloud. The current
on-premises workloads consist of:
You need to keep operational costs low. You want to follow Google-recommended practices to
migrate these workloads to serverless solutions on Google Cloud. What should you do?
● A. Migrate the web application to App Engine and the backend API to Cloud Run. Use
Cloud Tasks to run your background job on Compute Engine.
● B. Migrate the web application to App Engine and the backend API to Cloud Run. Use
Cloud Tasks to run your background job on Cloud Run.
● C. Run the web application on a Cloud Storage bucket and the backend API on Cloud
Run. Use Cloud Tasks to run your background job on Cloud Run.
● D. Run the web application on a Cloud Storage bucket and the backend API on Cloud
Run. Use Cloud Tasks to run your background job on Compute Engine.
25. You recently received a new Google Cloud project with an attached billing account where
you will work. You need to create instances, set firewalls, and store data in Cloud Storage. You
want to follow Google-recommended practices. What should you do?
26. Your company requires that Google Cloud products are created with a specific
configuration to comply with your company’s security policies. You need to implement a
mechanism that will allow software engineers at your company to deploy and update Google
Cloud products in a preconfigured and approved manner. What should you do?
● A. Create Java packages that utilize the Google Cloud Client Libraries for Java to
configure Google Cloud products. Store and share the packages in a source code
repository.
● B. Create bash scripts that utilize the Google Cloud CLI to configure Google Cloud
products. Store and share the bash scripts in a source code repository.
● C. Use the Google Cloud APIs by using curl to configure Google Cloud products. Store
and share the curl commands in a source code repository.
● D. Create Terraform modules that utilize the Google Cloud Terraform Provider to
configure Google Cloud products. Store and share the modules in a source code
repository.
27. You are working for a startup that was officially registered as a business 6 months ago. As
your customer base grows, your use of Google Cloud increases. You want to allow all engineers
to create new projects without asking them for their credit card information. What should you
do?
● A. Create a Billing account, associate a payment method with it, and provide all project
creators with permission to associate that billing account with their projects.
● B. Grant all engineers permission to create their own billing accounts for each new
project.
● C. Apply for monthly invoiced billing, and have a single invoice for the project paid by
the finance team.
● D. Create a billing account, associate it with a monthly purchase order (PO), and send
the PO to Google Cloud.
28. Your team is using Linux instances on Google Cloud. You need to ensure that your team
logs in to these instances in the most secure and cost efficient way. What should you do?
● A. Attach a public IP to the instances and allow incoming connections from the internet
on port 22 for SSH.
● B. Use the gcloud compute ssh command with the --tunnel-through-iap flag. Allow
ingress traffic from the IP range 35.235.240.0/20 on port 22.
● C. Use a third party tool to provide remote access to the instances.
● D. Create a bastion host with public internet access. Create the SSH tunnel to the
instance through the bastion host.
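Option B roughly corresponds to the following; the firewall rule and instance names are placeholders:
# Allow IAP's TCP forwarding range to reach port 22
gcloud compute firewall-rules create allow-iap-ssh \
  --direction=INGRESS --action=ALLOW --rules=tcp:22 \
  --source-ranges=35.235.240.0/20
# SSH to an instance without a public IP, tunneled through IAP
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap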
29. You use Cloud Logging to capture application logs. You now need to use SQL to analyze the
application logs in Cloud Logging, and you want to follow Google-recommended practices.
What should you do?
30. During a recent audit of your existing Google Cloud resources, you discovered several users
with email addresses outside of your Google Workspace domain. You want to ensure that your
resources are only shared with users whose email addresses match your domain. You need to
remove any mismatched users, and you want to avoid having to audit your resources to identify
mismatched users. What should you do?
● A. Create a Cloud Scheduler task to regularly scan your projects and delete mismatched
users.
● B. Create a Cloud Scheduler task to regularly scan your resources and delete
mismatched users.
● C. Set an organizational policy constraint to limit identities by domain to automatically
remove mismatched users.
● D. Set an organizational policy constraint to limit identities by domain, and then
retroactively remove the existing mismatched users.
31. Your company requires all developers to have the same permissions, regardless of the
Google Cloud project they are working on. Your company’s security policy also restricts
developer permissions to Compute Engine, Cloud Functions, and Cloud SQL. You want to
implement the security policy with minimal effort. What should you do?
● A. • Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL
permissions in one project within the Google Cloud organization.
• Copy the role across all projects created within the organization with the gcloud iam
roles copy command.
• Assign the role to developers in those projects.
● B. • Add all developers to a Google group in Google Groups for Workspace.
• Assign the predefined role of Compute Admin to the Google group at the Google Cloud
organization level.
● C. • Add all developers to a Google group in Cloud Identity.
• Assign predefined roles for Compute Engine, Cloud Functions, and Cloud SQL
permissions to the Google group for each project in the Google Cloud organization.
● D. • Add all developers to a Google group in Cloud Identity.
• Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL
permissions at the Google Cloud organization level.
• Assign the custom role to the Google group.
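A rough sketch of option D; the organization ID, group address, and permission list are placeholders (a real role would enumerate the full set of required permissions):
# Custom role defined once at the organization level
gcloud iam roles create developerRole --organization=123456789012 \
  --title="Developer" \
  --permissions=compute.instances.list,cloudfunctions.functions.get,cloudsql.instances.get
# Bind the role to the developer group for the whole organization
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="group:developers@example.com" \
  --role="organizations/123456789012/roles/developerRole"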
32. You want to permanently delete a Pub/Sub topic managed by Config Connector in your
Google Cloud project. What should you do?
● A. Use kubectl to create the label deleted-by-cnrm and to change its value to true for the
topic resource.
● B. Use kubectl to delete the topic resource.
● C. Use gcloud CLI to delete the topic.
● D. Use gcloud CLI to update the topic label managed-by-cnrm to false.
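For reference, option B amounts to something like the following, assuming Config Connector manages the topic as a PubSubTopic resource in the namespace my-namespace:
# Deleting the Kubernetes resource makes Config Connector delete the underlying
# Pub/Sub topic (unless a deletion-policy annotation tells it to abandon the resource)
kubectl delete pubsubtopic my-topic -n my-namespace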
33. Your manager asks you to deploy a workload to a Kubernetes cluster. You are not sure of
the workload's resource requirements or how the requirements might vary depending on usage
patterns, external dependencies, or other factors. You need a solution that makes cost-
effective recommendations regarding CPU and memory requirements, and allows the workload
to function consistently in any situation. You want to follow Google-recommended practices.
What should you do?
● A. Configure the Horizontal Pod Autoscaler for availability, and configure the cluster
autoscaler for suggestions.
● B. Configure the Horizontal Pod Autoscaler for availability, and configure the Vertical
Pod Autoscaler recommendations for suggestions.
● C. Configure the Vertical Pod Autoscaler recommendations for availability, and
configure the Cluster autoscaler for suggestions.
● D. Configure the Vertical Pod Autoscaler recommendations for availability, and
configure the Horizontal Pod Autoscaler for suggestions.
34. You are running out of primary internal IP addresses in a subnet for a custom mode VPC.
The subnet has the IP range 10.0.0.0/20, and the IP addresses are primarily used by virtual
machines in the project. You need to provide more IP addresses for the virtual machines. What
should you do?
● A. Use the gcloud container clusters create command with the options --enable-multi-
networking and --enable-autoscaling to create an autoscaling zonal cluster and deploy
the application to it.
● B. Use the gcloud container clusters create-auto command to create an autopilot
cluster and deploy the application to it.
● C. Use the gcloud container clusters update command with the option --region us-
central1 to update the cluster and deploy the application to it.
● D. Use the gcloud container clusters update command with the option --node-locations
us-central1-a,us-central1-b to update the cluster and deploy the application to the
nodes.
36. An application generates daily reports in a Compute Engine virtual machine (VM). The VM
is in the project corp-iot-insights. Your team operates only in the project corp-aggregate-reports
and needs a copy of the daily exports in the bucket corp-aggregate-reports-storage. You want
to configure access so that the daily reports from the VM are available in the bucket corp-
aggregate-reports-storage and use as few steps as possible while following Google-
recommended practices. What should you do?
37. You have deployed an application on a Compute Engine instance. An external consultant
needs to access the Linux-based instance. The consultant is connected to your corporate
network through a VPN connection, but the consultant has no Google account. What should you
do?
● A. Instruct the external consultant to use the gcloud compute ssh command line tool by
using Identity-Aware Proxy to access the instance.
● B. Instruct the external consultant to use the gcloud compute ssh command line tool by
using the public IP address of the instance to access it.
● C. Instruct the external consultant to generate an SSH key pair, and request the public
key from the consultant. Add the public key to the instance yourself, and have the
consultant access the instance through SSH with their private key.
● D. Instruct the external consultant to generate an SSH key pair, and request the private
key from the consultant. Add the private key to the instance yourself, and have the
consultant access the instance through SSH with their public key.
38. Your existing application running in Google Kubernetes Engine (GKE) consists of multiple
pods running on four GKE n1-standard-2 nodes. You need to deploy additional pods requiring
n2-highmem-16 nodes without any downtime. What should you do?
39. Your application development team has created Docker images for an application that will
be deployed on Google Cloud. Your team does not want to manage the infrastructure
associated with this application. You need to ensure that the application can scale
automatically as it gains popularity. What should you do?
● A. Create an instance template with the container image, and deploy a Managed
Instance Group with Autoscaling.
● B. Upload Docker images to Artifact Registry, and deploy the application on Google
Kubernetes Engine using Standard mode.
● C. Upload Docker images to the Cloud Storage, and deploy the application on Google
Kubernetes Engine using Standard mode.
● D. Upload Docker images to Artifact Registry, and deploy the application on Cloud Run.
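Option D could be sketched roughly as follows; the repository, project, image, and service names are placeholders:
# Docker repository in Artifact Registry
gcloud artifacts repositories create apps \
  --repository-format=docker --location=us-central1
# Authenticate Docker and push the image built by the development team
gcloud auth configure-docker us-central1-docker.pkg.dev
docker push us-central1-docker.pkg.dev/my-project/apps/my-app:v1
# Deploy to Cloud Run, which scales automatically with traffic
gcloud run deploy my-app \
  --image=us-central1-docker.pkg.dev/my-project/apps/my-app:v1 \
  --region=us-central1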
40. Your organization has strict requirements to control access to Google Cloud projects. You
need to enable your Site Reliability Engineers (SREs) to approve requests from the Google
Cloud support team when an SRE opens a support case. You want to follow Google-
recommended practices. What should you do?
41. You want to deploy a new containerized application into Google Cloud by using a
Kubernetes manifest. You want to have full control over the Kubernetes deployment, and at the
same time, you want to minimize configuring infrastructure. What should you do?
42. You have two Google Cloud projects: project-a with VPC vpc-a (10.0.0.0/16) and project-b
with VPC vpc-b (10.8.0.0/16). Your frontend application resides in vpc-a and the backend API
services are deployed in vpc-b. You need to efficiently and cost-effectively enable
communication between these Google Cloud projects. You also want to follow Google-
recommended practices. What should you do?
43. You need to manage a third-party application that will run on a Compute Engine instance.
Other Compute Engine instances are already running with default configuration. Application
installation files are hosted on Cloud Storage. You need to access these files from the new
instance without allowing other virtual machines (VMs) to access these files. What should you
do?
● A. Create the instance with the default Compute Engine service account. Grant the
service account permissions on Cloud Storage.
● B. Create the instance with the default Compute Engine service account. Add metadata
to the objects on Cloud Storage that matches the metadata on the new instance.
● C. Create a new service account and assign this service account to the new instance.
Grant the service account permissions on Cloud Storage.
● D. Create a new service account and assign this service account to the new instance.
Add metadata to the objects on Cloud Storage that matches the metadata on the new
instance.
44. Your company's security vulnerability management policy wants a member of the security
team to have visibility into vulnerabilities and other OS metadata for a specific Compute Engine
instance. This Compute Engine instance hosts a critical application in your Google Cloud
project. You need to implement your company's security vulnerability management policy. What
should you do?
● A. • Ensure that the Ops Agent is installed on the Compute Engine instance.
• Create a custom metric in the Cloud Monitoring dashboard.
• Provide the security team member with access to this dashboard.
● B. • Ensure that the Ops Agent is installed on the Compute Engine instance.
• Provide the security team member roles/osconfig.inventoryViewer permission.
● C. • Ensure that the OS Config agent is installed on the Compute Engine instance.
• Provide the security team member roles/osconfig.vulnerabilityReportViewer
permission.
● D. • Ensure that the OS Config agent is installed on the Compute Engine instance.
• Create a log sink to BigQuery dataset.
• Provide the security team member with access to this dataset.
45. You just installed the Google Cloud CLI on your new corporate laptop. You need to list the
existing instances of your company on Google Cloud. What must you do before you run the
gcloud compute instances list command? (Choose two.)
● A. Run gcloud auth login, enter your login credentials in the dialog window, and paste
the received login token to gcloud CLI.
● B. Create a Google Cloud service account, and download the service account key. Place
the key file in a folder on your machine where gcloud CLI can find it.
● C. Download your Cloud Identity user account key. Place the key file in a folder on your
machine where gcloud CLI can find it.
● D. Run gcloud config set compute/zone $my_zone to set the default zone for gcloud
CLI.
● E. Run gcloud config set project $my_project to set the default project for gcloud CLI.
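The two steps described in options A and E look roughly like this ($my_project is a placeholder):
# Authenticate the CLI with your user account (opens a browser flow)
gcloud auth login
# Select the project whose instances you want to list
gcloud config set project my-project
gcloud compute instances list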
46. Your team is building a website that handles votes from a large user population. The
incoming votes will arrive at various rates. You want to optimize the storage and processing of
the votes. What should you do?
● A. Save the incoming votes to Firestore. Use Cloud Scheduler to trigger a Cloud
Functions instance to periodically process the votes.
● B. Use a dedicated instance to process the incoming votes. Send the votes directly to
this instance.
● C. Save the incoming votes to a JSON file on Cloud Storage. Process the votes in a
batch at the end of the day.
● D. Save the incoming votes to Pub/Sub. Use the Pub/Sub topic to trigger a Cloud
Functions instance to process the votes.
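The Pub/Sub-triggered processing in option D could be deployed roughly like this, assuming a topic named votes and a Python handler process_vote in the current directory:
# Each published vote triggers one function invocation
gcloud functions deploy vote-processor \
  --runtime=python311 --trigger-topic=votes \
  --entry-point=process_vote --source=. --region=us-central1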
47. You are configuring Cloud DNS. You want to create DNS records to point
home.mydomain.com, mydomain.com, and www.mydomain.com to the IP address of your
Google Cloud load balancer. What should you do?
● A. Create one CNAME record to point mydomain.com to the load balancer, and create
two A records to point WWW and HOME to mydomain.com respectively.
● B. Create one CNAME record to point mydomain.com to the load balancer, and create
two AAAA records to point WWW and HOME to mydomain.com respectively.
● C. Create one A record to point mydomain.com to the load balancer, and create two
CNAME records to point WWW and HOME to mydomain.com respectively.
● D. Create one A record to point mydomain.com to the load balancer, and create two NS
records to point WWW and HOME to mydomain.com respectively.
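Option C maps to record sets along these lines; the zone name and load balancer IP are placeholders, and older gcloud versions use the record-sets transaction workflow instead of record-sets create:
# Apex A record pointing at the load balancer IP
gcloud dns record-sets create mydomain.com. --zone=my-zone \
  --type=A --ttl=300 --rrdatas=203.0.113.10
# CNAME records aliasing www and home to the apex
gcloud dns record-sets create www.mydomain.com. --zone=my-zone \
  --type=CNAME --ttl=300 --rrdatas=mydomain.com.
gcloud dns record-sets create home.mydomain.com. --zone=my-zone \
  --type=CNAME --ttl=300 --rrdatas=mydomain.com.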
48. You want to set up a Google Kubernetes Engine cluster. Verifiable node identity and
integrity are required for the cluster, and nodes cannot be accessed from the internet. You want
to reduce the operational cost of managing your cluster, and you want to follow Google-
recommended practices. What should you do?
● A. Create a single budget for all projects and configure budget alerts on this budget.
● B. Create a separate billing account per sandbox project and enable BigQuery billing
exports. Create a Data Studio dashboard to plot the spending per billing account.
● C. Create a budget per project and configure budget alerts on all of these budgets.
● D. Create a single billing account for all sandbox projects and enable BigQuery billing
exports. Create a Data Studio dashboard to plot the spending per project.
50. After a recent security incident, your startup company wants better insight into what is
happening in the Google Cloud environment. You need to monitor unexpected firewall changes
and instance creation. Your company prefers simple solutions. What should you do?
● A. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute
instances to Cloud Storage. Use BigQuery to periodically analyze log events in the
storage bucket.
● B. Use Cloud Logging filters to create log-based metrics for firewall and instance
actions. Monitor the changes and set up reasonable alerts.
● C. Install Kibana on a compute instance. Create a log sink to forward Cloud Audit Logs
filtered for firewalls and compute instances to Pub/Sub. Target the Pub/Sub topic to
push messages to the Kibana instance. Analyze the logs on Kibana in real time.
● D. Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update,
or delete events.
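Option B's log-based metric for firewall changes could look roughly like this; the metric name and filter are illustrative, instance creation would get a similar metric on compute.instances.insert, and alerting policies are then attached in Cloud Monitoring:
# Count audit log entries for firewall rule inserts, patches, and deletes
gcloud logging metrics create firewall-changes \
  --description="Firewall rule changes" \
  --log-filter='resource.type="gce_firewall_rule" AND protoPayload.methodName:("insert" OR "patch" OR "delete")'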
1. The DevOps group in your organization needs full control of Compute Engine resources
in your development project. However, they should not have permission to create or
update any other resources in the project. You want to follow Google’s
recommendations for setting permissions for the DevOps group. What should you do?
A. Grant the basic role roles/viewer and the predefined role roles/compute.admin to the
DevOps group.
B. Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy.
Attach the policy to the DevOps group.
C. Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions
to the role. Grant the custom role to the DevOps group.
4. Your organization has strict requirements to control access to Google Cloud projects.
You need to enable your Site Reliability Engineers (SREs) to approve requests from the
Google Cloud support team when an SRE opens a support case. You want to follow
Google-recommended practices. What should you do?
A. Add your SREs to roles/iam.roleAdmin role.
C. Add your SREs to a group and then add this group to the roles/iam.roleAdmin role.
D. Add your SREs to a group and then add this group to roles/accessapproval.approver role.
5. You have a Compute Engine instance hosting an application used between 9 AM and 6
PM on weekdays. You want to back up this instance daily for disaster recovery
purposes. You want to keep the backups for 30 days. You want the Google-
recommended solution with the least management overhead and the least number of
services. What should you do?
A. 1. Update your instances' metadata to add the following value: snapshot-schedule: 0 1 * * *
2. Update your instances' metadata to add the following value: snapshot-retention: 30
B. 1. In the Cloud Console, go to the Compute Engine Disks page and select your instance's
disk. 2. In the Snapshot Schedule section, select Create Schedule and configure the following
parameters: - Schedule frequency: Daily - Start time: 1:00 AM to 2:00 AM - Autodelete
snapshots after: 30 days
C. 1. Create a Cloud Function that creates a snapshot of your instance's disk. 2. Create a Cloud
Function that deletes snapshots that are older than 30 days. 3. Use Cloud Scheduler to trigger
both Cloud Functions daily at 1:00 AM.
D. 1. Create a bash script in the instance that copies the content of the disk to Cloud Storage.
2. Create a bash script in the instance that deletes data older than 30 days in the backup Cloud
Storage bucket. 3. Configure the instance's crontab to execute these scripts daily at 1:00 AM.
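Option B's console steps have a CLI equivalent roughly like the following; the policy and disk names, zone, and region are placeholders:
# Daily snapshot schedule starting at 01:00, keeping snapshots for 30 days
gcloud compute resource-policies create snapshot-schedule daily-backup \
  --region=us-central1 --start-time=01:00 --daily-schedule \
  --max-retention-days=30
# Attach the schedule to the instance's disk
gcloud compute disks add-resource-policies my-instance-disk \
  --zone=us-central1-a --resource-policies=daily-backup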
6. You are configuring service accounts for an application that spans multiple projects.
Virtual machines (VMs) running in the web-applications project need access to
BigQuery datasets in crm-databases-proj. You want to follow Google-recommended
practices to give access to the service account in the web-applications project. What
should you do?
A. Give "project owner" for web-applications appropriate roles to crm-databases-proj.
7. A colleague handed over a Google Cloud Platform project for you to maintain. As part
of a security checkup, you want to review who has been granted the Project Owner role.
What should you do?
A. In the console, validate which SSH keys have been stored as project-wide keys.
B. Navigate to Identity-Aware Proxy and check the permissions for these resources.
C. Enable Audit Logs on the IAM & admin page for all resources, and validate the results.
D. Use the command gcloud projects get-iam-policy to view the current role assignments.
8. You are responsible for a web application on Compute Engine. You want your support
team to be notified automatically if users experience high latency for at least 5 minutes.
You need a Google-recommended solution with no development cost. What should you
do?
A. Export Cloud Monitoring metrics to BigQuery and use a Looker Studio dashboard to monitor
your web application’s latency.
B. Create an alert policy to send a notification when the HTTP response latency exceeds the
specified threshold.
C. Implement an App Engine service which invokes the Cloud Monitoring API and sends a
notification in case of anomalies.
D. Use the Cloud Monitoring dashboard to observe latency and take the necessary actions
when the response latency exceeds the specified threshold.
9. Your company is running a stateless application on a Compute Engine instance. The
application is used heavily during regular business hours and lightly outside of business
hours. Users are reporting that the application is slow during peak hours. You need to
optimize the application's performance. What should you do?
A. Create a snapshot of the existing disk. Create an instance template from the snapshot.
Create an autoscaled managed instance group from the instance template.
B. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an
autoscaled managed instance group from the custom image.
C. Create a custom image from the existing disk. Create an instance template from the custom
image. Create an autoscaled managed instance group from the instance template.
D. Create an instance template from the existing disk. Create a custom image from the
instance template. Create an autoscaled managed instance group from the custom image.
10. You are running multiple VPC-native Google Kubernetes Engine clusters in the same
subnet. The IPs available for the nodes are exhausted, and you want to ensure that the
clusters can grow in nodes when needed. What should you do?
A. Create a new subnet in the same region as the subnet being used.
C. Create a new VPC, and set up VPC peering with the existing VPC.
D. Expand the CIDR range of the relevant subnet for the cluster.
11. An application generates daily reports in a Compute Engine virtual machine (VM). The
VM is in the project corp-iot-insights. Your team operates only in the project corp-
aggregate-reports and needs a copy of the daily exports in the bucket corp-aggregate-
reports-storage. You want to configure access so that the daily reports from the VM are
available in the bucket corp-aggregate-reports-storage and use as few steps as
possible while following Google-recommended practices. What should you do?
A. Move both projects under the same folder.
B. Grant the VM Service Account the role Storage Object Creator on corp-aggregate-reports-
storage.
C. Create a Shared VPC network between both projects. Grant the VM Service Account the role
Storage Object Creator on corp-iot-insights.
12. Your company has a large quantity of unstructured data in different file formats. You
want to perform ETL transformations on the data. You need to make the data
accessible on Google Cloud so it can be processed by a Dataflow job. What should you
do?
A. Upload the data to BigQuery using the bq command line tool.
B. Upload the data to Cloud Storage using the gsutil command line tool.
C. Upload the data into Cloud SQL using the import function in the console.
D. Upload the data into Cloud Spanner using the import function in the console.
13. The core business of your company is to rent out construction equipment at large
scale. All the equipment that is being rented out has been equipped with multiple
sensors that send event information every few seconds. These signals can vary from
engine status, distance traveled, fuel level, and more. Customers are billed based on the
consumption monitored by these sensors. You expect high throughput (up to thousands of
events per hour per device) and need to retrieve consistent data based
on the time of the event. Storing and retrieving individual signals should be atomic.
What should you do?
A. Create a file in Cloud Storage per device and append new data to that file.
B. Create a file in Cloud Filestore per device and append new data to that file.
C. Ingest the data into Datastore. Store data in an entity group based on the device.
D. Ingest the data into Cloud Bigtable. Create a row key based on the event timestamp.
14. Your company has multiple projects linked to a single billing account in Google Cloud.
You need to visualize the costs with specific metrics that should be dynamically
calculated based on company-specific criteria. You want to automate the process.
What should you do?
A. In the Google Cloud console, visualize the costs related to the projects in the Reports
section.
B. In the Google Cloud console, visualize the costs related to the projects in the Cost
breakdown section.
C. In the Google Cloud console, use the export functionality of the Cost table. Create a Looker
Studio dashboard on top of the CSV export.
D. Configure Cloud Billing data export to BigQuery for the billing account. Create a Looker
Studio dashboard on top of the BigQuery export.
15. You are configuring Cloud DNS. You want to create DNS records to point
home.mydomain.com, mydomain.com, and www.mydomain.com to the IP address of
your Google Cloud load balancer. What should you do?
A. Create one CNAME record to point mydomain.com to the load balancer, and create two A
records to point WWW and HOME to mydomain.com respectively.
B. Create one CNAME record to point mydomain.com to the load balancer, and create two
AAAA records to point WWW and HOME to mydomain.com respectively.
C. Create one A record to point mydomain.com to the load balancer, and create two CNAME
records to point WWW and HOME to mydomain.com respectively.
D. Create one A record to point mydomain.com to the load balancer, and create two NS records
to point WWW and HOME to mydomain.com respectively.
16. You are building a data lake on Google Cloud for your Internet of Things (IoT)
application. The IoT application has millions of sensors that are constantly streaming
structured and unstructured data to your backend in the cloud. You want to build a
highly available and resilient architecture based on Google-recommended practices.
What should you do?
A. Stream data to Pub/Sub, and use Dataflow to send data to Cloud Storage.
B. Stream data to Pub/Sub, and use Storage Transfer Service to send data to BigQuery.
C. Stream data to Dataflow, and use Dataprep by Trifacta to send data to Bigtable.
D. Stream data to Dataflow, and use Storage Transfer Service to send data to BigQuery.
17. You recently discovered that your developers are using many service account keys
during their development process. While you work on a long term improvement, you
need to quickly implement a process to enforce short-lived service account credentials
in your company. You have the following requirements:
• All service accounts that require a key should be created in a centralized project called pj-sa.
You need a Google-recommended solution that minimizes cost. What should you do?
A. Implement a Cloud Run job to rotate all service account keys periodically in pj-sa. Enforce an
org policy to deny service account key creation with an exception to pj-sa.
B. Implement a Kubernetes CronJob to rotate all service account keys periodically. Disable
attachment of service accounts to resources in all projects with an exception to pj-sa.
C. Enforce an org policy constraint allowing the lifetime of service account keys to be 24 hours.
Enforce an org policy constraint denying service account key creation with an exception on pj-
sa.
D. Enforce a DENY org policy constraint over the lifetime of service account keys for 24 hours.
Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
18. You have a Bigtable instance that consists of three nodes that store personally
identifiable information (PII) data. You need to log all read or write operations, including
any metadata or configuration reads of this database table, in your company’s Security
Information and Event Management (SIEM) system. What should you do?
A. • Navigate to Cloud Monitoring in the Google Cloud console, and create a custom monitoring
job for the Bigtable instance to track all changes.
• Create an alert by using webhook endpoints, with the SIEM endpoint as a receiver.
B. • Navigate to the Audit Logs page in the Google Cloud console, and enable Admin Write logs
for the Bigtable instance.
• Create a Cloud Functions instance to export logs from Cloud Logging to your SIEM.
C. • Navigate to the Audit Logs page in the Google Cloud console, and enable Data Read, Data
Write and Admin Read logs for the Bigtable instance.
• Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a
subscriber to the topic.
D. • Install the Ops Agent on the Bigtable instance during configuration.
• Create a service account with read permissions for the Bigtable instance.
• Create a custom Dataflow job with this service account to export logs to the company’s SIEM
system.
19. Your manager asks you to deploy a workload to a Kubernetes cluster. You are not sure
of the workload's resource requirements or how the requirements might vary depending
on usage patterns, external dependencies, or other factors. You need a solution that
makes cost-effective recommendations regarding CPU and memory requirements, and
allows the workload to function consistently in any situation. You want to follow
Google-recommended practices. What should you do?
A. Configure the Horizontal Pod Autoscaler for availability, and configure the cluster autoscaler
for suggestions.
B. Configure the Horizontal Pod Autoscaler for availability, and configure the Vertical Pod
Autoscaler recommendations for suggestions.
C. Configure the Vertical Pod Autoscaler recommendations for availability, and configure the
Cluster autoscaler for suggestions.
D. Configure the Vertical Pod Autoscaler recommendations for availability, and configure the
Horizontal Pod Autoscaler for suggestions.
20. You need to manage multiple Google Cloud projects in the fewest steps possible. You
want to configure the Google Cloud SDK command line interface (CLI) so that you can
easily manage multiple projects. What should you do?
A. 1. Create a configuration for each project you need to manage. 2. Activate the appropriate
configuration when you work with each of your assigned Google Cloud projects.
B. 1. Create a configuration for each project you need to manage. 2. Use gcloud init to update
the configuration values when you need to work with a non-default project
C. 1. Use the default configuration for one project you need to manage. 2. Activate the
appropriate configuration when you work with each of your assigned Google Cloud projects.
D. 1. Use the default configuration for one project you need to manage. 2. Use gcloud init to
update the configuration values when you need to work with a non-default project.
21. Your team wants to deploy a specific content management system (CMS) solution to
Google Cloud. You need a quick and easy way to deploy and install the solution. What
should you do?
A. Search for the CMS solution in Google Cloud Marketplace. Use gcloud CLI to deploy the
solution.
B. Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from
Cloud Marketplace.
C. Search for the CMS solution in Google Cloud Marketplace. Use Terraform and the Cloud
Marketplace ID to deploy the solution with the appropriate parameters.
D. Use the installation guide of the CMS provider. Perform the installation through your
configuration management system.
22. You have a set of applications running on a Google Kubernetes Engine (GKE) cluster,
and you are using Stackdriver Kubernetes Engine Monitoring. You are bringing a new
containerized application required by your company into production. This application is
written by a third party and cannot be modified or reconfigured. The application writes
its log information to /var/log/app_messages.log, and you want to send these log
entries to Stackdriver Logging. What should you do?
A. Use the default Stackdriver Kubernetes Engine Monitoring agent configuration.
B. Deploy a Fluentd daemonset to GKE. Then create a customised input and output configuration
to tail the log file in the application's pods and write to Stackdriver Logging.
C. Install Kubernetes on Google Compute Engine (GCE) and redeploy your applications. Then
customize the built-in Stackdriver Logging configuration to tail the log file in the application's
pods and write to Stackdriver Logging.
D. Write a script to tail the log file within the pod and write entries to standard output. Run the
script as a sidecar container with the application's pod. Configure a shared volume between the
containers to allow the script to have read access to /var/log in the application container.
23. You are building a multi-player gaming application that will store game information in a
database. As the popularity of the application increases, you are concerned about
delivering consistent performance. You need to ensure an optimal gaming performance
for global users, without increasing the management complexity. What should you do?
A. Use Cloud SQL database with cross-region replication to store game statistics in the EU, US,
and APAC regions.
B. Use Cloud Spanner to store user data mapped to the game statistics.
C. Use BigQuery to store game statistics with a Redis on Memorystore instance in the front to
provide global consistency.
24. You want to permanently delete a Pub/Sub topic managed by Config Connector in your
Google Cloud project. What should you do?
A. Use kubectl to create the label deleted-by-cnrm and to change its value to true for the topic
resource.
B. Use kubectl to delete the topic resource.
C. Use gcloud CLI to delete the topic.
D. Use gcloud CLI to update the topic label managed-by-cnrm to false.
25. You just installed the Google Cloud CLI on your new corporate laptop. You need to list
the existing instances of your company on Google Cloud. What must you do before you
run the gcloud compute instances list command? (Choose two.)
A. Run gcloud auth login, enter your login credentials in the dialog window, and paste the
received login token to gcloud CLI.
B. Create a Google Cloud service account, and download the service account key. Place the key
file in a folder on your machine where gcloud CLI can find it.
C. Download your Cloud Identity user account key. Place the key file in a folder on your
machine where gcloud CLI can find it.
D. Run gcloud config set compute/zone $my_zone to set the default zone for gcloud CLI.
E. Run gcloud config set project $my_project to set the default project for gcloud CLI.
26. You are migrating a business critical application from your local data center into Google
Cloud. As part of your high-availability strategy, you want to ensure that any data used
by the application will be immediately available if a zonal failure occurs. What should
you do?
A. Store the application data on a zonal persistent disk. Create a snapshot schedule for the
disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a
new VM in another zone.
B. Store the application data on a zonal persistent disk. If an outage occurs, create an instance
in another zone with this disk attached.
C. Store the application data on a regional persistent disk. Create a snapshot schedule for the
disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a
new VM in another zone.
D. Store the application data on a regional persistent disk. If an outage occurs, create an
instance in another zone with this disk attached.
27. You need to extract text from audio files by using the Speech-to-Text API. The audio
files are pushed to a Cloud Storage bucket. You need to implement a fully managed,
serverless compute solution that requires authentication and aligns with Google-
recommended practices. You want to automate the call to the API by submitting each
file to the API as the audio file arrives in the bucket. What should you do?
A. Create an App Engine standard environment triggered by Cloud Storage bucket events to
submit the file URI to the Google Speech-to-Text API.
B. Run a Kubernetes job to scan the bucket regularly for incoming files, and call the Speech-to-
Text API for each unprocessed file.
C. Run a Python script by using a Linux cron job in Compute Engine to scan the bucket regularly
for incoming files, and call the Speech-to-Text API for each unprocessed file.
D. Create a Cloud Function triggered by Cloud Storage bucket events to submit the file URI to
the Google Speech-to-Text API.
28. You recently received a new Google Cloud project with an attached billing account
where you will work. You need to create instances, set firewalls, and store data in Cloud
Storage. You want to follow Google-recommended practices. What should you do?
C. Open the Google Cloud console and enable all Google Cloud APIs from the API dashboard.
D. Open the Google Cloud console and run gcloud init --project in a Cloud Shell.
29. You need to manage a third-party application that will run on a Compute Engine
instance. Other Compute Engine instances are already running with default
configuration. Application installation files are hosted on Cloud Storage. You need to
access these files from the new instance without allowing other virtual machines (VMs)
to access these files. What should you do?
A. Create the instance with the default Compute Engine service account. Grant the service
account permissions on Cloud Storage.
B. Create the instance with the default Compute Engine service account. Add metadata to the
objects on Cloud Storage that matches the metadata on the new instance.
C. Create a new service account and assign this service account to the new instance. Grant the
service account permissions on Cloud Storage.
D. Create a new service account and assign this service account to the new instance. Add
metadata to the objects on Cloud Storage that matches the metadata on the new instance.
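A minimal sketch of the dedicated-service-account approach described in option C; the service account, bucket, project, and instance names are placeholders:
# Create a dedicated service account for the new instance.
gcloud iam service-accounts create app-installer-sa \
    --display-name="Third-party app installer"
# Grant it read access to the bucket that holds the installation files.
gcloud storage buckets add-iam-policy-binding gs://installer-files-bucket \
    --member="serviceAccount:app-installer-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
# Create the instance with that service account attached.
gcloud compute instances create app-vm \
    --zone=us-central1-a \
    --service-account=app-installer-sa@my-project.iam.gserviceaccount.com \
    --scopes=https://www.googleapis.com/auth/cloud-platform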
30. Your company's security vulnerability management policy wants a member of the
security team to have visibility into vulnerabilities and other OS metadata for a specific
Compute Engine instance. This Compute Engine instance hosts a critical application in
your Google Cloud project. You need to implement your company's security vulnerability
management policy. What should you do?
A. Ensure that the Ops Agent is installed on the Compute Engine instance.
B. Ensure that the Ops Agent is installed on the Compute Engine instance.
C. Ensure that the OS Config agent is installed on the Compute Engine instance.
D. Ensure that the OS Config agent is installed on the Compute Engine instance.
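As a hedged illustration of the OS Config (VM Manager) approach the question alludes to; the project, instance, zone, and member are placeholders:
# Turn on the OS Config agent metadata flag for the instance (VM Manager).
gcloud compute instances add-metadata critical-app-vm \
    --zone=us-central1-a --metadata=enable-osconfig=TRUE
# Give the security team member read access to vulnerability reports.
gcloud projects add-iam-policy-binding my-project \
    --member="user:security-analyst@example.com" \
    --role="roles/osconfig.vulnerabilityReportViewer"
# The reports can then be listed per zone.
gcloud compute os-config vulnerability-reports list --location=us-central1-a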
31. You have downloaded and installed the gcloud command line interface (CLI) and have
authenticated with your Google Account. Most of your Compute Engine instances in
your project run in the europe-west1-d zone. You want to avoid having to specify this
zone with each CLI command when managing these instances.
What should you do?
A. Set the europe-west1-d zone as the default zone using the gcloud config subcommand.
B. In the Settings page for Compute Engine under Default location, set the zone to europe-west1-d.
D. Create a Metadata entry on the Compute Engine page with key compute/zone and value europe-west1-d.
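For reference, a minimal sketch of setting a default zone in the gcloud configuration:
# Persist the default zone in the active gcloud configuration.
gcloud config set compute/zone europe-west1-d
# Confirm the setting.
gcloud config get-value compute/zone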
32. You need to migrate invoice documents stored on-premises to Cloud Storage. The
documents have the following storage requirements:
• Up to five revisions of the same invoice document must be stored, to allow for corrections.
• Documents older than 365 days should be moved to lower cost storage tiers.
A. Enable retention policies on the bucket, and use Cloud Scheduler to invoke a Cloud Function
to move or delete your documents based on their metadata.
B. Enable retention policies on the bucket, use lifecycle rules to change the storage classes of
the objects, set the number of versions, and delete old files.
C. Enable object versioning on the bucket, and use Cloud Scheduler to invoke a Cloud Functions
instance to move or delete your documents based on their metadata.
D. Enable object versioning on the bucket, use lifecycle conditions to change the storage class
of the objects, set the number of versions, and delete old files.
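A hedged sketch of combining object versioning with lifecycle rules as in option D; the bucket name and the exact rule values are illustrative:
# lifecycle.json: move objects older than 365 days to Coldline and keep at most
# five noncurrent versions of each object (values are illustrative).
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
     "condition": {"age": 365}},
    {"action": {"type": "Delete"},
     "condition": {"numNewerVersions": 5}}
  ]
}
EOF
# Turn on object versioning and apply the lifecycle configuration.
gcloud storage buckets update gs://invoice-archive \
    --versioning --lifecycle-file=lifecycle.json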
33. Your web application has been running successfully on Cloud Run for Anthos. You want
to evaluate an updated version of the application with a specific percentage of your
production users (canary deployment). What should you do?
A. Create a new service with the new version of the application. Split traffic between this
version and the version that is currently running.
B. Create a new revision with the new version of the application. Split traffic between this
version and the version that is currently running.
C. Create a new service with the new version of the application. Add an HTTP Load Balancer in
front of both services.
D. Create a new revision with the new version of the application. Add an HTTP Load Balancer in
front of both revisions.
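A minimal sketch of the revision-based canary described in option B, shown with fully managed Cloud Run syntax (for Cloud Run for Anthos, platform and cluster flags would also be needed); service name, image, region, and percentage are placeholders:
# Deploy the new version as a new revision without sending it any traffic yet.
gcloud run deploy my-web-app \
    --image=gcr.io/my-project/my-web-app:v2 \
    --region=us-central1 --no-traffic --tag=canary
# Send 10% of production traffic to the canary revision.
gcloud run services update-traffic my-web-app \
    --region=us-central1 --to-tags=canary=10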
34. You used the gcloud container clusters command to create two Google Kubernetes Engine
(GKE) clusters: prod-cluster and dev-cluster.
kubectl cluster-info
35. Your continuous integration and delivery (CI/CD) server can’t execute Google Cloud
actions in a specific project because of permission issues. You need to validate
whether the service account being used has the appropriate roles in the specific project.
What should you do?
A. Open the Google Cloud console, and check the Identity and Access Management (IAM) roles
assigned to the service account at the project or inherited from the folder or organization
levels.
B. Open the Google Cloud console, and check the organization policies.
C. Open the Google Cloud console, and run a query to determine which resources this service
account can access.
D. Open the Google Cloud console, and run a query of the audit logs to find permission denied
errors for this service account.
36. Your team is running an on-premises ecommerce application. The application contains
a complex set of microservices written in Python, and each microservice is running on
Docker containers. Configurations are injected by using environment variables. You
need to deploy your current application to a serverless Google Cloud solution.
What should you do?
A. Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to
Cloud Run. Update the configurations and the required endpoints.
B. Use your existing continuous integration and delivery (CI/CD) pipeline. Use the generated
Docker images and deploy them to Cloud Functions. Use the same configuration as on-
premises.
C. Use the existing codebase and deploy each service as a separate Cloud Function. Update the
configurations and the required endpoints.
D. Use your existing codebase and deploy each service as a separate Cloud Run service. Use the same
configurations as on-premises.
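A minimal sketch of deploying one of the existing container images to Cloud Run with configuration injected as environment variables; the service name, image, and variable values are placeholders:
# Deploy an existing container image and inject configuration as environment variables.
gcloud run deploy checkout-service \
    --image=gcr.io/my-project/checkout-service:latest \
    --region=us-central1 \
    --set-env-vars=DB_HOST=10.0.0.5,PAYMENT_ENDPOINT=https://payments.example.com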
37. You are working for a startup that was officially registered as a business 6 months ago.
As your customer base grows, your use of Google Cloud increases. You want to allow
all engineers to create new projects without asking them for their credit card
information. What should you do?
A. Create a Billing account, associate a payment method with it, and provide all project creators
with permission to associate that billing account with their projects.
B. Grant all engineers permission to create their own billing accounts for each new project.
C. Apply for monthly invoiced billing, and have a single invoice for the project paid by the
finance team.
D. Create a billing account, associate it with a monthly purchase order (PO), and send the PO to
Google Cloud.
38. You are in charge of provisioning access for all Google Cloud users in your
organization. Your company recently acquired a startup company that has their own
Google Cloud organization. You need to ensure that your Site Reliability Engineers
(SREs) have the same project permissions in the startup company's organization as in
your own organization. What should you do?
A. In the Google Cloud console for your organization, select Create role from selection, and
choose destination as the startup company's organization.
B. In the Google Cloud console for the startup company, select Create role from selection and
choose source as the startup company's Google Cloud organization.
C. Use the gcloud iam roles copy command, and provide the Organization ID of the startup
company's Google Cloud Organization as the destination.
D. Use the gcloud iam roles copy command, and provide the project IDs of all projects in the
startup company's organization as the destination.
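For reference, a minimal sketch of copying a custom role between organizations with gcloud iam roles copy; the organization IDs and role ID are placeholders:
# Copy a custom role from your organization into the startup's organization.
gcloud iam roles copy \
    --source="organizations/111111111111/roles/sreCustomRole" \
    --destination="sreCustomRole" \
    --dest-organization=222222222222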
39. You need to deploy an application, which is packaged in a container image, in a new
project. The application exposes an HTTP endpoint and receives very few requests per
day. You want to minimize costs. What should you do?
A. Deploy the container on Cloud Run.
40. You are working in a team that has developed a new application that needs to be
deployed on Kubernetes. The production application is business critical and should be
optimized for reliability. You need to provision a Kubernetes cluster and want to follow
Google-recommended practices. What should you do?
A. Create a GKE Autopilot cluster. Enroll the cluster in the rapid release channel.
B. Create a GKE Autopilot cluster. Enroll the cluster in the stable release channel.
C. Create a zonal GKE standard cluster. Enroll the cluster in the stable release channel.
D. Create a regional GKE standard cluster. Enroll the cluster in the rapid release channel.
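A minimal sketch of creating the Autopilot cluster described in option B; the cluster name and region are placeholders:
# Create a regional Autopilot cluster enrolled in the stable release channel.
gcloud container clusters create-auto prod-cluster \
    --region=us-central1 \
    --release-channel=stable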
41. You are developing a new web application that will be deployed on Google Cloud
Platform. As part of your release cycle, you want to test updates to your application on
a small portion of real user traffic. The majority of the users should still be directed
towards a stable version of your application. What should you do?
A. Deploy the application on App Engine. For each update, create a new version of the same
service. Configure traffic splitting to send a small percentage of traffic to the new version.
B. Deploy the application on App Engine. For each update, create a new service. Configure
traffic splitting to send a small percentage of traffic to the new service.
C. Deploy the application on Kubernetes Engine. For a new release, update the deployment to
use the new version.
D. Deploy the application on Kubernetes Engine. For a new release, create a new deployment
for the new version. Update the service to use the new deployment.
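A minimal sketch of the App Engine traffic-splitting approach in option A; the version names and split percentages are placeholders:
# Deploy the new version without shifting traffic to it.
gcloud app deploy app.yaml --version=v2 --no-promote
# Send 5% of traffic to the new version of the same service.
gcloud app services set-traffic default \
    --splits=v1=0.95,v2=0.05 --split-by=random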
42. You have two subnets (subnet-a and subnet-b) in the default VPC. Your database
servers are running in subnet-a. Your application servers and web servers are running in
subnet-b. You want to configure a firewall rule that only allows database traffic from
the application servers to the database servers. What should you do?
A. • Associate service account sa-app with the application servers and the service account sa-db
with the database servers.
• Create an ingress firewall rule to allow network traffic from source service account sa-app to
target service account sa-db.
B. • Add the app-server tag to the application servers and the db-server tag to the database
servers.
• Create an egress firewall rule to allow network traffic from source network tag app-server to
target network tag db-server.
C. • Associate the service account sa-app with the application servers and the network tag db-
server with the database servers.
• Create an ingress firewall rule to allow network traffic from source VPC IP addresses and
target the subnet-a IP addresses.
D. • Add the tag to the application servers and associate the service account with the database
servers.
• Create an egress firewall rule to allow network traffic from source network tag app-server to
target service account sa-db.
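A hedged sketch of the service-account-based ingress rule described in option A; the project, rule name, and database port (tcp:3306 shown) are illustrative:
# Allow only app-server traffic to reach the database servers.
gcloud compute firewall-rules create allow-app-to-db \
    --network=default --direction=INGRESS --action=ALLOW \
    --rules=tcp:3306 \
    --source-service-accounts=sa-app@my-project.iam.gserviceaccount.com \
    --target-service-accounts=sa-db@my-project.iam.gserviceaccount.com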
43. Your customer wants you to create a secure website with autoscaling based on the
compute instance CPU load. You want to enhance performance by storing static
content in Cloud Storage. Which resources are needed to distribute the user traffic?
A. An external HTTP(S) load balancer with a managed SSL certificate to distribute the load and
a URL map to target the requests for the static content to the Cloud Storage backend.
B. An external network load balancer pointing to the backend instances to distribute the load
evenly. The web servers will forward the request to the Cloud Storage as needed.
C. An internal HTTP(S) load balancer together with Identity-Aware Proxy to allow only HTTPS
traffic.
D. An external HTTP(S) load balancer to distribute the load and a URL map to target the
requests for the static content to the Cloud Storage backend. Install the HTTPS certificates on
the instance.
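A hedged sketch of the pieces behind option A, assuming an existing URL map and backend service; bucket, certificate, domain, and resource names are placeholders:
# Serve static content from Cloud Storage through the external HTTP(S) load balancer.
gcloud compute backend-buckets create static-assets \
    --gcs-bucket-name=my-static-content-bucket --enable-cdn
# Route /static/* requests to the backend bucket on an existing URL map.
gcloud compute url-maps add-path-matcher web-url-map \
    --path-matcher-name=static-matcher \
    --default-service=web-backend-service \
    --backend-bucket-path-rules="/static/*=static-assets"
# Google-managed certificate for HTTPS termination at the load balancer.
gcloud compute ssl-certificates create web-cert \
    --domains=www.example.com --global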
A. Instruct the external consultant to use the gcloud compute ssh command line tool by using
Identity-Aware Proxy to access the instance.
B. Instruct the external consultant to use the gcloud compute ssh command line tool by using
the public IP address of the instance to access it.
C. Instruct the external consultant to generate an SSH key pair, and request the public key from
the consultant. Add the public key to the instance yourself, and have the consultant access the
instance through SSH with their private key.
D. Instruct the external consultant to generate an SSH key pair, and request the private key
from the consultant. Add the private key to the instance yourself, and have the consultant
access the instance through SSH with their public key.
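For reference, a minimal sketch of the Identity-Aware Proxy approach described in option A; the project, consultant email, instance name, and zone are placeholders:
# Allow the consultant to tunnel through Identity-Aware Proxy.
gcloud projects add-iam-policy-binding my-project \
    --member="user:consultant@example.com" \
    --role="roles/iap.tunnelResourceAccessor"
# The consultant then connects without the VM needing a public IP.
gcloud compute ssh target-instance --zone=us-central1-a --tunnel-through-iap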
45. You are running a web application on Cloud Run for a few hundred users. Some of your
users complain that the initial web page of the application takes much longer to load
than the following pages. You want to follow Google’s recommendations to mitigate the
issue. What should you do?
A. Set the minimum number of instances for your Cloud Run service to 3.
C. Set the maximum number of instances for your Cloud Run service to 100.
D. Update your web application to use the protocol HTTP/2 instead of HTTP/1.1.
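A minimal sketch of setting minimum instances as in option A; the service name and region are placeholders:
# Keep three warm instances to avoid cold starts on the first request.
gcloud run services update my-web-app \
    --region=us-central1 --min-instances=3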
A. Create a cluster with a single node-pool by using standard VMs. Label the fault-tolerant
Deployments as spot_true.
B. Create a cluster with a single node-pool by using Spot VMs. Label the critical Deployments
as spot_false.
C. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs.
Deploy the critical deployments on the Spot VM node pool and the fault-tolerant deployments
on the node pool by using standard VMs.
D. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs.
Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant
deployments on the Spot VM node pool.
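A hedged sketch of the mixed node pool setup described in option D; the cluster, pool names, zone, and node counts are placeholders:
# Standard cluster with a default (standard VM) node pool for critical workloads.
gcloud container clusters create prod-cluster --zone=us-central1-a --num-nodes=3
# Add a Spot VM node pool for fault-tolerant workloads.
gcloud container node-pools create spot-pool \
    --cluster=prod-cluster --zone=us-central1-a --spot --num-nodes=3
# Fault-tolerant Deployments can then target Spot nodes with a nodeSelector on
# the node label cloud.google.com/gke-spot: "true".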
47. After a recent security incident, your startup company wants better insight into what is
happening in the Google Cloud environment. You need to monitor unexpected firewall
changes and instance creation. Your company prefers simple solutions. What should
you do?
A. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to
Cloud Storage. Use BigQuery to periodically analyze log events in the storage bucket.
B. Use Cloud Logging filters to create log-based metrics for firewall and instance actions.
Monitor the changes and set up reasonable alerts.
C. Install Kibana on a compute instance. Create a log sink to forward Cloud Audit Logs filtered
for firewalls and compute instances to Pub/Sub. Target the Pub/Sub topic to push messages
to the Kibana instance. Analyze the logs on Kibana in real time.
D. Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete
events.
48. You have designed a solution on Google Cloud that uses multiple Google Cloud
products. Your company has asked you to estimate the costs of the solution. You need
to provide estimates for the monthly total cost. What should you do?
A. For each Google Cloud product in the solution, review the pricing details on the product's
pricing page. Use the pricing calculator to total the monthly costs for each Google Cloud
product.
B. For each Google Cloud product in the solution, review the pricing details on the product's
pricing page. Create a Google Sheet that summarizes the expected monthly costs for each
product.
C. Provision the solution on Google Cloud. Leave the solution provisioned for 1 week. Navigate
to the Billing Report page in the Cloud Console. Multiply the 1 week cost to determine the
monthly costs.
D. Provision the solution on Google Cloud. Leave the solution provisioned for 1 week. Use
Cloud Monitoring to determine the provisioned and used resource amounts. Multiply the 1 week
cost to determine the monthly costs.
49. During a recent audit of your existing Google Cloud resources, you discovered several
users with email addresses outside of your Google Workspace domain. You want to
ensure that your resources are only shared with users whose email addresses match
your domain. You need to remove any mismatched users, and you want to avoid having
to audit your resources to identify mismatched users. What should you do?
A. Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.
B. Create a Cloud Scheduler task to regularly scan your resources and delete mismatched
users.