Associate Cloud Engineer Exam Valid Dumps Questions
The Associate Cloud Engineer exam dumps questions are the best material for testing all the related
Google exam topics. By using the Associate Cloud Engineer exam dumps questions and practicing
your skills, you can increase your confidence and your chances of passing the Associate Cloud
Engineer exam.
Instant Download
Free Update in 3 Months
Money back guarantee
PDF and Software
24/7 Customer Support
Besides, Dumpsinfo also provides unlimited access. You can get all
Dumpsinfo files at the lowest price.
1. Create an Alerting Policy in Stackdriver that uses the threshold as a trigger condition.
2.You need to manage multiple Google Cloud Platform (GCP) projects in the fewest steps possible.
You want to configure the Google Cloud SDK command line interface (CLI) so that you can easily
manage
multiple GCP projects.
What should you do?
A. 1. Create a configuration for each project you need to manage.
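The configuration-per-project approach in option A can be sketched with the gcloud CLI; the configuration names and project IDs below are placeholders.

```shell
# Create a named configuration per project; "create" also activates the
# new configuration, so "set project" applies to it.
gcloud config configurations create dev-config
gcloud config set project my-dev-project

gcloud config configurations create prod-config
gcloud config set project my-prod-project

# Switch between projects with a single command.
gcloud config configurations activate dev-config
```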
6.During a recent audit of your existing Google Cloud resources, you discovered several users with
email addresses outside of your Google Workspace domain.
You want to ensure that your resources are only shared with users whose email addresses match
your domain. You need to remove any mismatched users, and you want to avoid having to audit your
resources to identify mismatched users.
What should you do?
A. Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.
B. Create a Cloud Scheduler task to regularly scan your resources and delete mismatched users.
C. Set an organizational policy constraint to limit identities by domain to automatically remove
mismatched users.
D. Set an organizational policy constraint to limit identities by domain, and then retroactively remove
the existing mismatched users.
Answer: D
Explanation:
https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains
The Domain Restricted Sharing list constraint (constraints/iam.allowedPolicyMemberDomains) limits
the identities that can be added to IAM policies to members from the allowed domains, specified as
Google Workspace customer IDs. The constraint is not retroactive: it has no effect on existing IAM
bindings, so users who were granted access before the constraint was set must still be removed
manually. That is why you set the constraint and then retroactively remove the existing mismatched
users.
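Applying the domain restriction can be sketched with the gcloud CLI; the customer ID and organization ID below are placeholders.

```shell
# Allow only identities from your own Workspace directory. The value is
# the Google Workspace customer ID (e.g. C0123456789), not the domain name.
gcloud resource-manager org-policies allow \
    constraints/iam.allowedPolicyMemberDomains C0123456789 \
    --organization=123456789012
```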
7. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.
9.You have a number of compute instances belonging to an unmanaged instances group. You need
to SSH to one of the Compute Engine instances to run an ad hoc script. You’ve already authenticated
with gcloud; however, you don’t have an SSH key deployed yet.
In the fewest steps possible, what’s the easiest way to SSH to the instance?
A. Run gcloud compute instances list to get the IP address of the instance, then use the ssh
command.
B. Use the gcloud compute ssh command.
C. Create a key with the ssh-keygen command. Then use the gcloud compute ssh command.
D. Create a key with the ssh-keygen command. Upload the key to the instance. Run gcloud compute
instances list to get the IP address of the instance, then use the ssh command.
Answer: B
Explanation:
gcloud compute ssh ensures that the user’s public SSH key is present in the project’s metadata. If
the user does not have a public SSH key, one is generated using ssh-keygen and added to the
project’s metadata. This is similar to the other option where we copy the key explicitly to the project’s
metadata but here it is done automatically for us. There are also security benefits with this approach.
When we use gcloud compute ssh to connect to Linux instances, we are adding a layer of security by
storing your host keys as guest attributes. Storing SSH host keys as guest attributes improves the
security of your connections by helping to protect against vulnerabilities such as man-in-the-middle
(MITM) attacks. On the initial boot of a VM instance, if guest attributes are enabled, Compute Engine
stores your generated host keys as guest attributes.
Compute Engine then uses these host keys that were stored during the initial boot to verify all
subsequent connections to the VM instance.
Ref: https://cloud.google.com/compute/docs/instances/connecting-to-instance
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/ssh
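In practice, option B is a single command; the instance name and zone below are examples.

```shell
# gcloud generates an SSH key pair if none exists and publishes the
# public key to project metadata before opening the session.
gcloud compute ssh my-instance --zone=us-central1-a
```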
10. Workloads that aren’t a good fit for the predefined machine types that are available to you.
11. Create a Cloud Pub/Sub subscription that uses that service account and uses your Cloud Run
application as the push endpoint.
12.You are building a new version of an application hosted in an App Engine environment. You want
to test the new version with 1% of users before you completely switch your application over to the
new version.
What should you do?
A. Deploy a new version of your application in Google Kubernetes Engine instead of App Engine and
then use GCP Console to split traffic.
B. Deploy a new version of your application in a Compute Engine instance instead of App Engine and
then use GCP Console to split traffic.
C. Deploy a new version as a separate app in App Engine. Then configure App Engine using GCP
Console to split traffic between the two apps.
D. Deploy a new version of your application in App Engine. Then go to App Engine settings in GCP
Console and split traffic between the current version and newly deployed versions accordingly.
Answer: D
Explanation:
GCP App Engine natively offers traffic splitting functionality between versions. You can use traffic
splitting to specify a percentage distribution of traffic across two or more of the versions within a
service. Splitting traffic allows you to conduct A/B testing between your versions and provides control
over the pace when rolling out features.
Ref: https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
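The deploy-then-split flow in option D can be sketched with the gcloud CLI; the version IDs are examples.

```shell
# Deploy the new version without routing traffic to it.
gcloud app deploy --no-promote --version=v2

# Send 1% of traffic to v2 and keep 99% on the current version.
gcloud app services set-traffic default --splits=v1=0.99,v2=0.01
```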
13.You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This
specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and
need an additional 2 GB of memory for the rest of the processes. You want to minimize cost.
How should you run this reverse proxy?
A. Create a Cloud Memorystore for Redis instance with 32-GB capacity.
B. Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of
memory.
C. Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances
as nodes.
D. Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent
disk of 32 GB.
Answer: A
Explanation:
What is Google Cloud Memorystore?
Overview. Cloud Memorystore for Redis is a fully managed Redis service for Google Cloud Platform.
Applications running on Google Cloud Platform can achieve extreme performance by leveraging the
highly scalable, highly available, and secure Redis service without the burden of managing complex
Redis deployments.
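Provisioning the 32-GB cache from option A can be sketched as follows; the instance name and region are placeholders.

```shell
# 32 GB Basic Tier Redis instance; capacity is specified in GB.
gcloud redis instances create my-cache --size=32 \
    --region=us-central1 --tier=basic
```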
14. Create the new instance in the new subnetwork and use the first instance's private address as the
endpoint.
B. 1. Create a VPC and a subnetwork in europe-west1.
15. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.
D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset.
16.You are responsible for a web application on Compute Engine. You want your support team to be
notified automatically if users experience high latency for at least 5 minutes. You need a Google-
recommended solution with no development cost.
What should you do?
A. Create an alert policy to send a notification when the HTTP response latency exceeds the
specified threshold.
B. Implement an App Engine service which invokes the Cloud Monitoring API and sends a notification
in case of anomalies.
C. Use the Cloud Monitoring dashboard to observe latency and take the necessary actions when the
response latency exceeds the specified threshold.
D. Export Cloud Monitoring metrics to BigQuery and use a Looker Studio dashboard to monitor your
web application's latency.
Answer: A
Explanation:
https://cloud.google.com/monitoring/alerts#alerting-example
17.You need to host an application on a Compute Engine instance in a project shared with other
teams. You want to prevent the other teams from accidentally causing downtime on that application.
Which feature should you use?
A. Use a Shielded VM.
B. Use a Preemptible VM.
C. Use a sole-tenant node.
D. Enable deletion protection on the instance.
Answer: D
Explanation:
As part of your workload, there might be certain VM instances that are critical to running your
application or services, such as an instance running a SQL server, a server used as a license
manager, and so on. These VM instances might need to stay running indefinitely so you need a way
to protect these VMs from being deleted. By setting the deletionProtection flag, a VM instance can be
protected from accidental deletion. If a user attempts to delete a VM instance for which you have set
the deletionProtection flag, the request fails. Only a user that has been granted a role with
compute.instances.create permission can reset the flag to allow the resource to be deleted.
Ref: https://cloud.google.com/compute/docs/instances/preventing-accidental-vm-deletion
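Enabling deletion protection on a running instance is a one-line change; the instance name and zone below are examples.

```shell
# Set the deletionProtection flag so delete requests on this VM fail.
gcloud compute instances update my-app-instance \
    --zone=us-central1-a --deletion-protection
```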
19.You are migrating your on-premises workload to Google Cloud. Your company is implementing its
Cloud Billing configuration and requires access to a granular breakdown of its Google Cloud costs.
You need to ensure that the Cloud Billing datasets are available in BigQuery so you can conduct a
detailed analysis of costs.
What should you do?
A. Enable the BigQuery API and ensure that the BigQuery User IAM role is selected. Change the
BigQuery dataset to select a data location.
B. Create a Cloud Billing account. Enable the BigQuery Data Transfer Service API to export pricing
data.
C. Enable Cloud Billing data export to BigQuery when you create a Cloud Billing account.
D. Enable Cloud Billing on the project and link a Cloud Billing account. Then view the billing data table
in the BigQuery dataset.
Answer: C
Explanation:
The most direct and recommended way to get a granular breakdown of your Google Cloud costs in
BigQuery is to enable Cloud Billing data export to BigQuery when you create or manage your Cloud
Billing account. This automatically sets up a daily export of your billing data to a BigQuery dataset you
specify.
Option A: Enabling the BigQuery API and managing IAM roles are necessary for interacting with
BigQuery, but they don't automatically populate it with Cloud Billing data. Selecting a data location is
also important for BigQuery datasets but is a separate step from enabling billing export.
Option B: The BigQuery Data Transfer Service is used for transferring data from various sources into
BigQuery, but for Cloud Billing data, the direct export feature is the standard and simpler method.
Option D: Enabling Cloud Billing and linking an account makes billing data available in the Cloud
Billing console, but it doesn't automatically export it to BigQuery for detailed analysis. You need to
explicitly configure the BigQuery export.
Reference to Google Cloud Certified - Associate Cloud Engineer Documents:
The process of setting up Cloud Billing export to BigQuery is clearly documented in the Google Cloud
Billing documentation, which is a fundamental area for the Associate Cloud Engineer certification.
Understanding how to access and analyze billing data is crucial for cost management.
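Once the export is enabled, the billing data can be analyzed with standard SQL; the project, dataset, and table names below follow the export's naming convention and are placeholders.

```shell
# Summarize cost per service from the exported billing table.
bq query --use_legacy_sql=false '
SELECT service.description, SUM(cost) AS total_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
GROUP BY 1
ORDER BY total_cost DESC'
```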
20.You are the project owner of a GCP project and want to delegate control to colleagues to manage
buckets and files in Cloud Storage. You want to follow Google-recommended practices.
Which IAM roles should you grant your colleagues?
A. Project Editor
B. Storage Admin
C. Storage Object Admin
D. Storage Object Creator
Answer: B
Explanation:
Storage Admin (roles/storage.admin) Grants full control of buckets and objects.
When applied to an individual bucket, control applies only to the specified bucket and objects within
the bucket.
firebase.projects.get
resourcemanager.projects.get
resourcemanager.projects.list
storage.buckets.*
storage.objects.*
https://cloud.google.com/storage/docs/access-control/iam-roles
This role grants full control of buckets and objects. When applied to an individual bucket, control
applies only to the specified bucket and objects within the bucket.
Ref: https://cloud.google.com/iam/docs/understanding-roles#storage-roles
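Granting the role at the project level can be sketched as follows; the project ID and user email are placeholders.

```shell
# Give a colleague full control of buckets and objects in the project.
gcloud projects add-iam-policy-binding my-project \
    --member=user:colleague@example.com --role=roles/storage.admin
```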
21.You need a dynamic way of provisioning VMs on Compute Engine. The exact specifications will be
in a dedicated configuration file. You want to follow Google’s recommended practices.
Which method should you use?
A. Deployment Manager
B. Cloud Composer
C. Managed Instance Group
D. Unmanaged Instance Group
Answer: A
Explanation:
https://cloud.google.com/deployment-manager/docs/configuration/create-basic-configuration
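A minimal Deployment Manager flow looks like the following sketch; the deployment name, VM name, zone, and image are placeholders.

```shell
# vm-config.yaml (minimal example configuration file):
#   resources:
#   - name: my-vm
#     type: compute.v1.instance
#     properties:
#       zone: us-central1-a
#       machineType: zones/us-central1-a/machineTypes/n1-standard-1
#       disks:
#       - boot: true
#         autoDelete: true
#         initializeParams:
#           sourceImage: projects/debian-cloud/global/images/family/debian-11
#       networkInterfaces:
#       - network: global/networks/default

# Provision the VM from the dedicated configuration file.
gcloud deployment-manager deployments create my-deployment \
    --config=vm-config.yaml
```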
22.You are in charge of provisioning access for all Google Cloud users in your organization. Your
company recently acquired a startup company that has their own Google Cloud organization. You
need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the
startup company's organization as in your own organization.
What should you do?
A. In the Google Cloud console for your organization, select Create role from selection, and choose
destination as the startup company's organization
B. In the Google Cloud console for the startup company, select Create role from selection and choose
source as the startup company's Google Cloud organization.
C. Use the gcloud iam roles copy command, and provide the Organization ID of the startup
company's
Google Cloud Organization as the destination.
D. Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup
company's organization as the destination.
Answer: D
Explanation:
The gcloud iam roles copy command copies a custom IAM role to a new project or organization.
Because the SREs need the same project-level permissions in the startup company's organization,
you copy the role into each of that organization's projects by passing the project IDs with the
--dest-project flag.
Ref: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy
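The copy can be sketched as follows; the role ID and project IDs are placeholders, and the command is repeated once per destination project.

```shell
# Copy a project-level custom role into one of the startup's projects.
gcloud iam roles copy \
    --source="projects/my-org-project/roles/sreCustomRole" \
    --destination="sreCustomRole" \
    --dest-project="startup-project-1"
```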
23.You are performing a monthly security check of your Google Cloud environment and want to know
who has access to view data stored in your Google Cloud Project.
What should you do?
A. Enable Audit Logs for all APIs that are related to data storage.
B. Review the IAM permissions for any role that allows for data access.
C. Review the Identity-Aware Proxy settings for each resource.
D. Create a Data Loss Prevention job.
Answer: B
Explanation:
https://cloud.google.com/logging/docs/audit
25.You need to create a custom IAM role for use with a GCP service. All permissions in the role must
be suitable for production use. You also want to clearly share with your organization the status of the
custom role. This will be the first version of the custom role.
What should you do?
A. Use permissions in your role that use the ‘supported’ support level for role permissions. Set the
role stage to ALPHA while testing the role permissions.
B. Use permissions in your role that use the ‘supported’ support level for role permissions. Set the
role stage to BETA while testing the role permissions.
C. Use permissions in your role that use the ‘testing’ support level for role permissions. Set the role
stage to ALPHA while testing the role permissions.
D. Use permissions in your role that use the ‘testing’ support level for role permissions. Set the role
stage to BETA while testing the role permissions.
Answer: A
Explanation:
When setting support levels for permissions in custom roles, you can set to one of SUPPORTED,
TESTING or NOT_SUPPORTED.
Ref: https://cloud.google.com/iam/docs/custom-roles-permissions-support
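Creating the first version of such a role can be sketched as follows; the role ID, project, and permission list are examples.

```shell
# First version of a custom role built from supported permissions,
# marked ALPHA while the permission set is being tested.
gcloud iam roles create customAppDeployer --project=my-project \
    --title="Custom App Deployer" --stage=ALPHA \
    --permissions=compute.instances.get,compute.instances.list
```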
26.Your company runs its Linux workloads on Compute Engine instances. Your company will be
working with a new operations partner that does not use Google Accounts. You need to grant access
to the instances to your operations partner so they can maintain the installed tooling.
What should you do?
A. Enable Cloud IAP for the Compute Engine instances, and add the operations partner as a Cloud
IAP Tunnel User.
B. Tag all the instances with the same network tag. Create a firewall rule in the VPC to grant TCP
access on port 22 for traffic from the operations partner to instances with the network tag.
C. Set up Cloud VPN between your Google Cloud VPC and the internal network of the operations
partner.
D. Ask the operations partner to generate SSH key pairs, and add the public keys to the VM
instances.
Answer: D
Explanation:
IAP controls access to your App Engine apps and Compute Engine VMs running on Google Cloud. It
leverages user identity and the context of a request to determine if a user should be allowed access.
IAP is a building block toward BeyondCorp, an enterprise security model that enables employees to
work from untrusted networks without using a VPN.
By default, IAP uses Google identities and IAM. By leveraging Identity Platform instead, you can
authenticate users with a wide range of external identity providers, such as:
Email/password
OAuth (Google, Facebook, Twitter, GitHub, Microsoft, etc.)
SAML
OIDC
Phone number
Custom
Anonymous
This is useful if your application is already using an external authentication system, and migrating
your users to Google accounts is impractical.
https://cloud.google.com/iap/docs/using-tcp-forwarding#grant-permission
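Adding the partner's keys can be sketched as follows; the file name, instance name, and zone are placeholders, and the partner supplies the public keys.

```shell
# ssh-keys.txt holds one line per key, in the form:
#   username:ssh-ed25519 AAAA... username
gcloud compute instances add-metadata my-instance --zone=us-central1-a \
    --metadata-from-file ssh-keys=ssh-keys.txt
```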
27.Your company is modernizing its applications and refactoring them to containerized microservices.
You need to deploy the infrastructure on Google Cloud so that teams can deploy their applications.
The applications cannot be exposed publicly. You want to minimize management and operational
overhead.
What should you do?
A. Provision a Standard zonal Google Kubernetes Engine (GKE) cluster.
B. Provision a fleet of Compute Engine instances and install Kubernetes.
C. Provision a Google Kubernetes Engine (GKE) Autopilot cluster.
D. Provision a Standard regional Google Kubernetes Engine (GKE) cluster.
Answer: C
Explanation:
GKE Autopilot is a mode of operation in GKE where Google manages the underlying infrastructure,
including nodes, node pools, and their upgrades. This significantly reduces the management and
operational overhead for the user, allowing teams to focus solely on deploying and managing their
containerized applications. Since the applications are not exposed publicly, the zonal or regional
nature of the cluster primarily impacts availability within Google Cloud, and Autopilot is available for
both. Autopilot minimizes the operational burden, which is a key requirement.
Option A: A Standard zonal GKE cluster requires you to manage the nodes yourself, including sizing,
scaling, and upgrades, increasing operational overhead compared to Autopilot.
Option B: Manually installing and managing Kubernetes on a fleet of Compute Engine instances
involves the highest level of management overhead, which contradicts the requirement to minimize it.
Option D: A Standard regional GKE cluster provides higher availability than a zonal cluster by
replicating the control plane and nodes across multiple zones within a region. However, it still requires
you to manage the underlying nodes, unlike Autopilot.
Reference to Google Cloud Certified - Associate Cloud Engineer Documents:
The different modes of GKE operation, including Standard and Autopilot, and their respective
management responsibilities and benefits, are clearly outlined in the Google Kubernetes Engine
documentation, a core topic for the Associate Cloud Engineer certification. The emphasis on reduced
operational overhead with Autopilot is a key differentiator.
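Provisioning the Autopilot cluster can be sketched as follows; the cluster name and region are placeholders.

```shell
# Autopilot clusters are regional; private nodes keep workloads off
# the public internet.
gcloud container clusters create-auto my-cluster \
    --region=us-central1 --enable-private-nodes
```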
28.After a recent security incident, your startup company wants better insight into what is happening
in the Google Cloud environment. You need to monitor unexpected firewall changes and instance
creation. Your company prefers simple solutions.
What should you do?
A. Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the
changes and set up reasonable alerts.
B. Install Kibana on a compute Instance. Create a log sink to forward Cloud Audit Logs filtered for
firewalls and compute instances to Pub/Sub. Target the Pub/Sub topic to push messages to the
Kibana instance.
Analyze the logs on Kibana in real time.
C. Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete
events.
D. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud
Storage.
Use BigQuery to periodically analyze log events in the storage bucket.
Answer: A
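Creating a log-based metric for firewall changes can be sketched as follows; the metric name and log filter are illustrative, and an alert policy would then be attached to the metric.

```shell
# Count firewall rule changes recorded in the Admin Activity audit logs.
gcloud logging metrics create firewall-changes \
    --description="Firewall insert/update/delete events" \
    --log-filter='resource.type="gce_firewall_rule" AND protoPayload.methodName:("insert" OR "patch" OR "delete")'
```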
29.You are configuring Cloud DNS. You want to create DNS records to point home.mydomain.com,
mydomain.com, and www.mydomain.com to the IP address of your Google Cloud load balancer.
What should you do?
A. Create one CNAME record to point mydomain.com to the load balancer, and create two A records
to point WWW and HOME to mydomain.com respectively.
B. Create one CNAME record to point mydomain.com to the load balancer, and create two AAAA
records to point WWW and HOME to mydomain.com respectively.
C. Create one A record to point mydomain.com to the load balancer, and create two CNAME records
to point WWW and HOME to mydomain.com respectively.
D. Create one A record to point mydomain.com to the load balancer, and create two NS records to
point WWW and HOME to mydomain.com respectively.
Answer: C
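The record layout from option C can be sketched with a Cloud DNS transaction; the zone name and IP address are placeholders.

```shell
# A record for the apex, CNAMEs for www and home pointing at the apex.
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add "203.0.113.10" \
    --name=mydomain.com. --type=A --ttl=300 --zone=my-zone
gcloud dns record-sets transaction add "mydomain.com." \
    --name=www.mydomain.com. --type=CNAME --ttl=300 --zone=my-zone
gcloud dns record-sets transaction add "mydomain.com." \
    --name=home.mydomain.com. --type=CNAME --ttl=300 --zone=my-zone
gcloud dns record-sets transaction execute --zone=my-zone
```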
30.You have developed a containerized web application that will serve internal colleagues during
business hours. You want to ensure that no costs are incurred outside of the hours the application is
used. You have just created a new Google Cloud project and want to deploy the application.
What should you do?
A. Deploy the container on Cloud Run for Anthos, and set the minimum number of instances to zero
B. Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to
zero.
C. Deploy the container on App Engine flexible environment with autoscaling, and set the value
min_instances to zero in app.yaml.
D. Deploy the container on App Engine flexible environment with manual scaling, and set the value
instances to zero in app.yaml.
Answer: B
Explanation:
Cloud Run (fully managed) scales to zero by default when a service receives no traffic, so setting the
minimum number of instances to zero ensures that no compute costs are incurred outside business
hours.
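The deployment from option B can be sketched as follows; the service name, image, and region are placeholders.

```shell
# min-instances=0 allows the service to scale to zero when idle;
# unauthenticated access is disabled for an internal-only application.
gcloud run deploy internal-app --image=gcr.io/my-project/internal-app \
    --region=us-central1 --min-instances=0 --no-allow-unauthenticated
```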
31.You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the
same Deployment Manager deployment, you also want to create a DaemonSet in the kube-system
namespace of the cluster. You want a solution that uses the fewest possible services.
What should you do?
A. Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to
create the DaemonSet.
B. Use the Deployment Manager Runtime Configurator to create a new Config resource that contains
the DaemonSet definition.
C. With Deployment Manager, create a Compute Engine instance with a startup script that uses
kubectl to create the DaemonSet.
D. In the cluster’s definition in Deployment Manager, add a metadata that has kube-system as key
and the DaemonSet manifest as value.
Answer: A
Explanation:
Adding an API as a type provider
This page describes how to add an API to Google Cloud Deployment Manager as a type provider. To
learn more about types and type providers, read the Types overview documentation.
A type provider exposes all of the resources of a third-party API to Deployment Manager as base
types that you can use in your configurations. These types must be directly served by a RESTful API
that supports Create, Read, Update, and Delete (CRUD).
If you want to use an API that is not automatically provided by Google with Deployment Manager, you
must add the API as a type provider.
https://cloud.google.com/deployment-manager/docs/configuration/type-providers/creating-type-
provider
32. In the Snapshot Schedule section, select Create Schedule and configure the following
parameters:
- Schedule frequency: Daily
- Start time: 1:00 AM to 2:00 AM
- Autodelete snapshots after 30 days
C. 1. Create a Cloud Function that creates a snapshot of your instance’s disk.
33.You have deployed an application on a Compute Engine instance. An external consultant needs to
access the Linux-based instance. The consultant is connected to your corporate network through a
VPN connection, but the consultant has no Google account.
What should you do?
A. Instruct the external consultant to use the gcloud compute ssh command line tool by using Identity-
Aware Proxy to access the instance.
B. Instruct the external consultant to use the gcloud compute ssh command line tool by using the
public IP address of the instance to access it.
C. Instruct the external consultant to generate an SSH key pair, and request the public key from the
consultant. Add the public key to the instance yourself, and have the consultant access the instance
through SSH with their private key.
D. Instruct the external consultant to generate an SSH key pair, and request the private key from the
consultant. Add the private key to the instance yourself, and have the consultant access the instance
through SSH with their public key.
Answer: C
Explanation:
The best option is to instruct the external consultant to generate an SSH key pair, and request the
public key from the consultant. Then, add the public key to the instance yourself, and have the
consultant access the instance through SSH with their private key. This way, you can grant the
consultant access to the instance without requiring a Google account or exposing the instance’s
public IP address. This option also follows the best practice of using user-managed SSH keys instead
of service account keys for SSH access1.
Option A is not feasible because the external consultant does not have a Google account, and
therefore cannot use Identity-Aware Proxy (IAP) to access the instance. IAP requires the user to
authenticate with a Google account and have the appropriate IAM permissions to access the
instance2. Option B is not secure because it exposes the instance’s public IP address, which can
increase the risk of unauthorized access or attacks. Option D is not correct because it reverses the
roles of the public and private keys. The public key should be added to the instance, and the private
key should be kept by the consultant. Sharing the private key with anyone else can compromise the
security of the SSH connection3.
Reference:
1: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
2: https://cloud.google.com/iap/docs/using-tcp-forwarding
3: https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances
34.Your managed instance group raised an alert stating that new instance creation has failed to
create new instances. You need to maintain the number of running instances specified by the
template to be able to process expected application traffic.
What should you do?
A. Create an instance template that contains valid syntax which will be used by the instance group.
Delete any persistent disks with the same name as instance names.
B. Create an instance template that contains valid syntax that will be used by the instance group.
Verify that the instance name and persistent disk name values are not the same in the template.
C. Verify that the instance template being used by the instance group contains valid syntax. Delete
any persistent disks with the same name as instance names. Set the disks.autoDelete property to
true in the instance template.
D. Delete the current instance template and replace it with a new instance template. Verify that the
instance name and persistent disk name values are not the same in the template. Set the
disks.autoDelete property to true in the instance template.
Answer: A
Explanation:
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-migs
https://cloud.google.com/compute/docs/instance-templates#how_to_update_instance_templates
35. Add CPU and network charts for each of the three projects.
B. 1. Create a Cloud Monitoring Dashboard.
36.You host your website on Compute Engine. The number of global users visiting your website is
rapidly expanding. You need to minimize latency and support user growth in multiple geographical
regions. You also want to follow Google-recommended practices and minimize operational costs.
Which two actions should you take? Choose 2 answers
A. Deploy all of your VMs in a single Google Cloud region with the largest available CIDR range.
B. Deploy your VMs in multiple Google Cloud regions closest to your users’ geographical locations.
C. Use an external Application Load Balancer in Regional mode.
D. Use an external Application Load Balancer in Global mode.
E. Use a Network Load Balancer.
Answer: BD
Explanation:
To minimize latency for a global user base, it's crucial to serve users from regions geographically
close to them. Deploying VMs in multiple Google Cloud regions (Option B) achieves this by reducing
the network distance and thus the round-trip time for requests.
To support user growth and provide a single point of entry with global reach, a global external
Application Load Balancer (Option D) is the recommended choice for web applications. It distributes
traffic to backend instances across multiple regions based on user proximity, capacity, and health.
Application Load Balancers also offer features like SSL termination, content-based routing, and
security policies, which are important for modern web applications.
* Option A: Deploying in a single region, regardless of the CIDR range, will result in high latency for
users far from that region.
* Option C: A regional external Application Load Balancer only distributes traffic within a single region,
not across multiple global regions, thus not effectively minimizing latency for all global users.
* Option E: Network Load Balancers operate at Layer 4 and don't offer the application-level routing
and features of an Application Load Balancer, which are generally preferred for web applications.
While they can be global, Application Load Balancers are better suited for this scenario.
Reference to Google Cloud Certified - Associate Cloud Engineer Documents:
The concepts of multi-region deployments for low latency and the use of global load balancers
(specifically Application Load Balancers for web traffic) for global reach and traffic management are
core topics in the Compute Engine and Load Balancing sections of the Google Cloud documentation,
which are essential for the Associate Cloud Engineer certification. The best practices for global
application deployment are emphasized.
37.Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of
the application are not fault-tolerant and are allowed to have downtime. Other parts of the application
are critical and must always be available. You need to configure a Google Kubernetes Engine cluster
while optimizing for cost.
What should you do?
A. Create a cluster with a single node pool by using standard VMs. Label the fault-tolerant
Deployments as spot-true.
B. Create a cluster with a single node pool by using Spot VMs. Label the critical Deployments as
spot-false.
C. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy
the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments
on the Spot VM node pool.
D. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy
the critical deployments on the Spot VM node pool and the fault-tolerant deployments on the node
pool by using standard VMs.
Answer: C
Answer: C
38.You have a batch workload that runs every night and uses a large number of virtual machines
(VMs). It is fault-tolerant and can tolerate some of the VMs being terminated. The current cost of VMs
is too high.
What should you do?
A. Run a test using simulated maintenance events. If the test is successful, use preemptible N1
Standard VMs when running future jobs.
B. Run a test using simulated maintenance events. If the test is successful, use N1 Standard VMs
when running future jobs.
C. Run a test using a managed instance group. If the test is successful, use N1 Standard VMs in the
managed instance group when running future jobs.
D. Run a test using N1 standard VMs instead of N2. If the test is successful, use N1 Standard VMs
when running future jobs.
Answer: A
Explanation:
Creating and starting a preemptible VM instance This page explains how to create and use a
preemptible virtual machine (VM) instance. A preemptible instance is an instance you can create and
run at a much lower price than normal instances. However, Compute Engine might terminate
(preempt) these instances if it requires access to those resources for other tasks. Preemptible
instances will always terminate after 24 hours. To learn more about preemptible instances, read the
preemptible instances documentation. Preemptible instances are recommended only for fault-tolerant
applications that can withstand instance preemptions. Make sure your application can handle
preemptions before you decide to create a preemptible instance. To understand the risks and value of
preemptible instances, read the preemptible instances documentation.
https://cloud.google.com/compute/docs/instances/create-start-preemptible-instance
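As a sketch of answer A, a preemptible instance can be created with a single flag, and a maintenance event can be simulated against it to test fault tolerance. The instance name and zone below are placeholders:

```shell
# Create a preemptible N1 Standard VM (name and zone are illustrative)
gcloud compute instances create batch-worker-1 \
    --machine-type=n1-standard-4 \
    --zone=us-central1-a \
    --preemptible

# Simulate a maintenance event to verify the workload tolerates preemption
gcloud compute instances simulate-maintenance-event batch-worker-1 \
    --zone=us-central1-a
```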
39.You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is
running out of memory. You want to upgrade the virtual machine to have 8 GB of memory.
What should you do?
A. Rely on live migration to move the workload to a machine with more memory.
B. Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8
GB.
C. Stop the VM, change the machine type to n1-standard-8, and start the VM.
D. Stop the VM, increase the memory to 8 GB, and start the VM.
Answer: D
Explanation:
In Google compute engine, if predefined machine types don’t meet your needs, you can create an
instance with custom virtualized hardware settings. Specifically, you can create an instance with a
custom number of vCPUs and custom memory, effectively using a custom machine type. Custom
machine types are ideal for the following scenarios:
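The resize in answer D can be sketched with the commands below; the VM name and zone are placeholders, and the VM must be stopped before its machine type can change:

```shell
# Stop the VM before changing its machine type
gcloud compute instances stop my-vm --zone=us-central1-a

# Keep 2 vCPUs but move to 8 GB of memory via a custom machine type
gcloud compute instances set-machine-type my-vm \
    --zone=us-central1-a \
    --custom-cpu=2 \
    --custom-memory=8GB

# Restart the VM with the new configuration
gcloud compute instances start my-vm --zone=us-central1-a
```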
40. Locate the project in the GCP console, click Shut down and then enter the project ID.
B. 1. Verify that you are assigned the Project Owners IAM role for this project.
41.For analysis purposes, you need to send all the logs from all of your Compute Engine instances to
a BigQuery dataset called platform-logs. You have already installed the Stackdriver Logging agent on
all the instances. You want to minimize cost.
What should you do?
A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by
your instances.
42.You just installed the Google Cloud CLI on your new corporate laptop. You need to list the existing
instances of your company on Google Cloud.
What must you do before you run the gcloud compute instances list command? Choose 2 answers
A. Run gcloud auth login, enter your login credentials in the dialog window, and paste the received
login token to gcloud CLI.
B. Create a Google Cloud service account, and download the service account key. Place the key file
in a folder on your machine where gcloud CLI can find it.
C. Download your Cloud Identity user account key. Place the key file in a folder on your machine
where gcloud CLI can find it.
D. Run gcloud config set compute/zone $my_zone to set the default zone for gcloud CLI.
E. Run gcloud config set project $my_project to set the default project for gcloud CLI.
Answer: AE
Explanation:
Before you run the gcloud compute instances list command, you need to do two things: authenticate
with your user account and set the default project for gcloud CLI.
To authenticate with your user account, you need to run gcloud auth login, enter your login
credentials in the dialog window, and paste the received login token to gcloud CLI. This will authorize
the gcloud CLI to access Google Cloud resources on your behalf1.
To set the default project for gcloud CLI, you need to run gcloud config set project $my_project,
where $my_project is the ID of the project that contains the instances you want to list. This will save
you from having to specify the project flag for every gcloud command2.
Option B is not recommended, because using a service account key increases the risk of credential
leakage and misuse. It is also not necessary, because you can use your user account to authenticate
to the gcloud CLI3. Option C is not correct, because there is no such thing as a Cloud Identity user
account key. Cloud Identity is a service that provides identity and access management for Google
Cloud users and groups4. Option D is not required, because the gcloud compute instances list
command does not depend on the default zone. You can list instances from all zones or filter by a
specific zone using the --filter flag.
Reference:
1: https://cloud.google.com/sdk/docs/authorizing
2: https://cloud.google.com/sdk/gcloud/reference/config/set
3: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
4: https://cloud.google.com/identity/docs/overview
: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
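The two required steps from answers A and E can be sketched as follows; $my_project stands in for your real project ID:

```shell
# Authenticate with your user account (opens a browser-based dialog)
gcloud auth login

# Set the default project so later commands don't need a --project flag
gcloud config set project $my_project

# Now the instances can be listed without further setup
gcloud compute instances list
```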
44.You want to permanently delete a Pub/Sub topic managed by Config Connector in your Google
Cloud project.
What should you do?
A. Use kubectl to delete the topic resource.
B. Use gcloud CLI to delete the topic.
C. Use kubectl to create the label deleted-by-cnrm and to change its value to true for the topic
resource.
D. Use gcloud CLI to update the topic label managed-by-cnrm to false.
Answer: C
45.You want to enable your development team to deploy new features to an existing Cloud Run
service in production. To minimize the risk associated with a new revision, you want to reduce the
number of customers who might be affected by an outage without introducing any development or
operational costs to your customers. You want to follow Google-recommended practices for
managing revisions to a service.
What should you do?
A. Deploy your application to a second Cloud Run service, and ask your customers to use the second
Cloud Run service.
B. Ask your customers to retry access to your service with exponential backoff to mitigate any
potential problems after the new revision is deployed.
C. Gradually roll out the new revision and split customer traffic between the revisions to allow rollback
in case a problem occurs.
D. Send all customer traffic to the new revision, and roll back to a previous revision if you witness any
problems in production.
Answer: C
46.You want to host your video encoding software on Compute Engine. Your user base is growing
rapidly, and users need to be able to encode their videos at any time without interruption or CPU
limitations. You must ensure that your encoding solution is highly available, and you want to follow
Google-recommended practices to automate operations.
What should you do?
A. Deploy your solution on multiple standalone Compute Engine instances, and increase the number
of existing instances when CPU utilization on Cloud Monitoring reaches a certain threshold.
B. Deploy your solution on multiple standalone Compute Engine instances, and replace existing
instances with high-CPU instances when CPU utilization on Cloud Monitoring reaches a certain
threshold.
C. Deploy your solution to an instance group, and increase the number of available instances
whenever you see high CPU utilization in Cloud Monitoring.
D. Deploy your solution to an instance group, and set the autoscaling based on CPU utilization.
Answer: D
Explanation:
Instance groups are collections of virtual machine (VM) instances that you can manage as a single
entity. Instance groups can help you simplify the management of multiple instances, reduce
operational costs, and improve the availability and performance of your applications. Instance groups
support autoscaling, which automatically adds or removes instances from the group based on
increases or decreases in load. Autoscaling helps your applications gracefully handle increases in
traffic and reduces cost when the need for resources is lower. You can set the autoscaling policy
based on CPU utilization, load balancing capacity, Cloud Monitoring metrics, or a queue-based
workload. In this case, since the video encoding software is CPU-intensive, setting the autoscaling
based on CPU utilization is the best option to ensure high availability and optimal performance.
Reference: Instance groups
Autoscaling groups of instances
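Answer D can be sketched as below; the template name, group name, zone, and sizing are placeholders, and an instance template is assumed to already exist:

```shell
# Create a managed instance group from an existing instance template
gcloud compute instance-groups managed create encoder-group \
    --zone=us-central1-a \
    --template=encoder-template \
    --size=2

# Autoscale on CPU utilization: add instances above 60% average CPU
gcloud compute instance-groups managed set-autoscaling encoder-group \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=20 \
    --target-cpu-utilization=0.6
```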
47. Create a new Google Cloud Project for the Marketing department 3. Link the new project to a
Marketing Billing Account.
D. 1. Verify that you are assigned the Organization Administrator IAM role for your organization's
Google Cloud account
48.Your company requires that Google Cloud products are created with a specific configuration to
comply with your company's security policies You need to implement a mechanism that will allow
software engineers at your company to deploy and update Google Cloud products in a preconfigured
and approved manner.
What should you do?
A. Create Java packages that utilize the Google Cloud Client Libraries for Java to configure Google
Cloud products. Store and share the packages in a source code repository.
B. Create bash scripts that utilize the Google Cloud CLI to configure Google Cloud products. Store
and share the bash scripts in a source code repository.
C. Create Terraform modules that utilize the Google Cloud Terraform Provider to configure Google
Cloud products. Store and share the modules in a source code repository.
D. Use the Google Cloud APIs by using curl to configure Google Cloud products. Store and share the
curl commands in a source code repository.
Answer: C
49.You are planning to migrate the following on-premises data management solutions to Google
Cloud:
• One MySQL cluster for your main database
• Apache Kafka for your event streaming platform
• One Cloud SQL for PostgreSQL database for your analytical and reporting needs
You want to implement Google-recommended solutions for the migration. You need to ensure that the
new solutions provide global scalability and require minimal operational and infrastructure
management.
What should you do?
A. Migrate from MySQL to Cloud SQL, from Kafka to Memorystore, and from Cloud SQL for
PostgreSQL to Cloud SQL.
B. Migrate from MySQL to Cloud Spanner, from Kafka to Memorystore, and from Cloud SQL for
PostgreSQL to Cloud SQL.
C. Migrate from MySQL to Cloud SQL, from Kafka to Pub/Sub, and from Cloud SQL for PostgreSQL
to BigQuery.
D. Migrate from MySQL to Cloud Spanner, from Kafka to Pub/Sub, and from Cloud SQL for
PostgreSQL to BigQuery.
Answer: D
50.You have an instance group that you want to load balance. You want the load balancer to
terminate the client SSL session. The instance group is used to serve a public web application over
HTTPS. You want to follow Google-recommended practices.
What should you do?
A. Configure an HTTP(S) load balancer.
B. Configure an internal TCP load balancer.
C. Configure an external SSL proxy load balancer.
D. Configure an external TCP proxy load balancer.
Answer: A
Explanation:
Reference: https://cloud.google.com/load-balancing/docs/https/
According to this guide for setting up an HTTP(S) load balancer in GCP: the client SSL session
terminates at the load balancer. Sessions between the load balancer and the instance can either be
HTTPS (recommended) or HTTP.
https://cloud.google.com/load-balancing/docs/ssl
52.Your company’s infrastructure is on-premises, but all machines are running at maximum capacity.
You want to burst to Google Cloud. The workloads on Google Cloud must be able to directly
communicate with the workloads on-premises using a private IP range.
What should you do?
A. In Google Cloud, configure the VPC as a host for Shared VPC.
B. In Google Cloud, configure the VPC for VPC Network Peering.
C. Create bastion hosts both in your on-premises environment and on Google Cloud. Configure both
as proxy servers using their public IP addresses.
D. Set up Cloud VPN between the infrastructure on-premises and Google Cloud.
Answer: D
Explanation:
"Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual
Private Cloud (VPC) networks regardless of whether they belong to the same project or the same
organization."
https://cloud.google.com/vpc/docs/vpc-peering
while
"Cloud Interconnect provides low latency, high availability connections that enable you to reliably
transfer data between your on-premises and Google Cloud Virtual Private Cloud (VPC) networks."
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview and "HA VPN is a
high-availability (HA) Cloud VPN solution that lets you securely connect your on-premises network to
your VPC network through an IPsec VPN connection in a single region."
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
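Answer D can be sketched with the HA VPN workflow below; all names, the peer IP, the ASN, and the shared secret are placeholders, and a second tunnel on interface 1 would be needed for full high availability:

```shell
# HA VPN gateway and Cloud Router in the VPC (names/region illustrative)
gcloud compute vpn-gateways create gcp-ha-gw \
    --network=my-vpc --region=us-central1
gcloud compute routers create gcp-router \
    --network=my-vpc --region=us-central1 --asn=65001

# Describe the on-premises VPN device as an external peer gateway
gcloud compute external-vpn-gateways create on-prem-gw \
    --interfaces=0=203.0.113.10

# One IPsec tunnel from the HA VPN gateway to the on-premises device
gcloud compute vpn-tunnels create tunnel-0 \
    --region=us-central1 \
    --vpn-gateway=gcp-ha-gw \
    --peer-external-gateway=on-prem-gw \
    --peer-external-gateway-interface=0 \
    --interface=0 \
    --router=gcp-router \
    --ike-version=2 \
    --shared-secret=REPLACE_ME
```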
53. Create a Cloud Function that is triggered by messages in the logs topic.
54.You have a workload running on Compute Engine that is critical to your business. You want to
ensure that the data on the boot disk of this workload is backed up regularly. You need to be able to
restore a backup as quickly as possible in case of disaster. You also want older backups to be
cleaned automatically to save on cost. You want to follow Google-recommended practices.
What should you do?
A. Create a Cloud Function to create an instance template.
B. Create a snapshot schedule for the disk using the desired interval.
C. Create a cron job to create a new disk from the disk using gcloud.
D. Create a Cloud Task to create an image and export it to Cloud Storage.
Answer: B
Explanation:
Best practices for persistent disk snapshots: you can create persistent disk snapshots at any time,
but you can create snapshots more quickly and with greater reliability if you use the following best
practices.
• Create a snapshot of your data on a regular schedule to minimize data loss due to unexpected
failure.
• Improve performance by eliminating excessive snapshot downloads and by creating an image and
reusing it.
• Set your snapshot schedule to off-peak hours to reduce snapshot time.
Snapshot frequency limits: you can snapshot your disks at most once every 10 minutes. If you want
to issue a burst of requests to snapshot your disks, you can issue at most 6 requests in 60 minutes.
If the limit is exceeded, the operation fails.
https://cloud.google.com/compute/docs/disks/snapshot-best-practices
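Answer B can be sketched as below; the policy name, region, zone, disk name, schedule time, and retention period are all placeholders:

```shell
# Create a daily snapshot schedule that keeps snapshots for 14 days
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 \
    --daily-schedule \
    --start-time=02:00 \
    --max-retention-days=14

# Attach the schedule to the workload's boot disk
gcloud compute disks add-resource-policies my-boot-disk \
    --zone=us-central1-a \
    --resource-policies=daily-backup
```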
55.Your company uses BigQuery for data warehousing. Over time, many different business units in
your company have created 1000+ datasets across hundreds of projects. Your CIO wants you to
examine all datasets to find tables that contain an employee_ssn column. You want to minimize effort
in performing this task.
What should you do?
A. Go to Data Catalog and search for employee_ssn in the search box.
B. Write a shell script that uses the bq command line tool to loop through all the projects in your
organization.
C. Write a script that loops through all the projects in your organization and runs a query on
INFORMATION_SCHEMA.COLUMNS view to find the employee_ssn column.
D. Write a Cloud Dataflow job that loops through all the projects in your organization and runs a query
on INFORMATION_SCHEMA.COLUMNS view to find employee_ssn column.
Answer: A
Explanation:
https://cloud.google.com/bigquery/docs/quickstarts/quickstart-web-ui?authuser=4
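The same catalog search from answer A can also be run from the CLI; the organization ID below is a placeholder:

```shell
# Search Data Catalog for columns named employee_ssn across the org
gcloud data-catalog search 'column:employee_ssn' \
    --include-organization-ids=123456789012
```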
56.You are migrating a production-critical on-premises application that requires 96 vCPUs to perform
its task. You want to make sure the application runs in a similar environment on GCP.
What should you do?
A. When creating the VM, use machine type n1-standard-96.
B. When creating the VM, use Intel Skylake as the CPU platform.
C. Create the VM using Compute Engine default settings. Use gcloud to modify the running instance
to have 96 vCPUs.
D. Start the VM using Compute Engine default settings, and adjust as you go based on Rightsizing
Recommendations.
Answer: A
Explanation:
Ref: https://cloud.google.com/compute/docs/machine-types#n1_machine_type
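Answer A amounts to a single create command; the instance name and zone are placeholders:

```shell
# Create the VM with a predefined 96-vCPU machine type
gcloud compute instances create prod-app-vm \
    --machine-type=n1-standard-96 \
    --zone=us-central1-a
```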
57.Your company is moving its continuous integration and delivery (CI/CD) pipeline to Compute
Engine instances. The pipeline will manage the entire cloud infrastructure through code.
How can you ensure that the pipeline has appropriate permissions while your system is following
security best practices?
A. • Add a step for human approval to the CI/CD pipeline before the execution of the infrastructure
provisioning.
• Use the human approvals IAM account for the provisioning.
B. • Attach a single service account to the compute instances.
• Add minimal rights to the service account.
• Allow the service account to impersonate a Cloud Identity user with elevated permissions to create,
update, or delete resources.
C. • Attach a single service account to the compute instances.
• Add all required Identity and Access Management (IAM) permissions to this service account to
create, update, or delete resources
D. • Create multiple service accounts, one for each pipeline with the appropriate minimal Identity and
Access Management (IAM) permissions.
• Use a secret manager service to store the key files of the service accounts.
• Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.
Answer: B
Explanation:
The best option is to attach a single service account to the compute instances and add minimal rights
to the service account. Then, allow the service account to impersonate a Cloud Identity user with
elevated permissions to create, update, or delete resources. This way, the service account can use
short-lived access tokens to authenticate to Google Cloud APIs without needing to manage service
account keys. This option follows the principle of least privilege and reduces the risk of credential
leakage and misuse.
Option A is not recommended because it requires human intervention, which can slow down the
CI/CD pipeline and introduce human errors. Option C is not secure because it grants all required IAM
permissions to a single service account, which can increase the impact of a compromised key.
Option D is not cost-effective because it requires creating and managing multiple service accounts
and keys, as well as using a secret manager service.
Reference:
1: https://cloud.google.com/iam/docs/impersonating-service-accounts
2: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
3: https://cloud.google.com/iam/docs/understanding-service-accounts
58.Your organization has three existing Google Cloud projects. You need to bill the Marketing
department for only their Google Cloud services for a new initiative within their group.
What should you do?
A. 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google
Cloud Project for the Marketing department
59.Your company has embraced a hybrid cloud strategy where some of the applications are deployed
on Google Cloud. A Virtual Private Network (VPN) tunnel connects your Virtual Private Cloud (VPC)
in Google Cloud with your company's on-premises network. Multiple applications in Google Cloud
need to connect to an on-premises database server, and you want to avoid having to change the IP
configuration in all of your applications when the IP of the database changes.
What should you do?
A. Configure Cloud NAT for all subnets of your VPC to be used when egressing from the VM
instances.
B. Create a private zone on Cloud DNS, and configure the applications with the DNS name.
C. Configure the IP of the database as custom metadata for each instance, and query the metadata
server.
D. Query the Compute Engine internal DNS from the applications to retrieve the IP of the database.
Answer: B
Explanation:
Forwarding zones Cloud DNS forwarding zones let you configure target name servers for specific
private zones. Using a forwarding zone is one way to implement outbound DNS forwarding from your
VPC network. A Cloud DNS forwarding zone is a special type of Cloud DNS private zone. Instead of
creating records within the zone, you specify a set of forwarding targets. Each forwarding target is an
IP address of a DNS server, located in your VPC network, or in an on-premises network connected to
your VPC network by Cloud VPN or Cloud Interconnect.
DNS configuration Your on-premises network must have DNS zones and records configured so that
Google domain names resolve to the set of IP addresses for either private.googleapis.com or
restricted.googleapis.com. You can create Cloud DNS managed private zones and use a Cloud DNS
inbound server policy, or you can configure on-premises name servers. For example, you can use
BIND or Microsoft Active Directory DNS. https://cloud.google.com/vpc/docs/configure-private-google-
access-hybrid#config-domain
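Answer B can be sketched as below; the zone name, DNS name, network, and database IP are placeholders. Applications resolve the stable DNS name, so only the single record needs updating when the database IP changes:

```shell
# Create a private DNS zone visible only to the VPC
gcloud dns managed-zones create on-prem-zone \
    --dns-name="corp.example.com." \
    --visibility=private \
    --networks=my-vpc \
    --description="Private zone for on-premises services"

# Point a stable name at the database's current IP
gcloud dns record-sets create db.corp.example.com. \
    --zone=on-prem-zone \
    --type=A \
    --ttl=300 \
    --rrdatas=10.10.0.5
```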
60.You are designing an application that lets users upload and share photos. You expect your
application to grow really fast and you are targeting a worldwide audience. You want to delete
uploaded photos after 30 days. You want to minimize costs while ensuring your application is highly
available.
Which GCP storage solution should you choose?
A. Persistent SSD on VM instances.
B. Cloud Filestore.
C. Multiregional Cloud Storage bucket.
D. Cloud Datastore database.
Answer: C
Explanation:
Cloud Storage allows worldwide storage and retrieval of any amount of data at any time. We don't
need to set up autoscaling ourselves; Cloud Storage scaling is managed by GCP. Cloud Storage is
an object store, so it is suitable for storing photos, and its worldwide storage and retrieval caters well
to our worldwide audience. Cloud Storage also provides lifecycle rules that can be configured to
automatically delete objects older than 30 days, which fits our requirements. Finally, Cloud Storage
offers several storage classes, such as Nearline Storage ($0.01 per GB per month), Coldline Storage
($0.007 per GB per month), and Archive Storage ($0.004 per GB per month), which are significantly
cheaper than any of the options above.
Ref: https://cloud.google.com/storage/docs
Ref: https://cloud.google.com/storage/pricing
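The 30-day deletion requirement maps to a lifecycle rule, sketched below; the bucket name is a placeholder:

```shell
# lifecycle.json: delete objects older than 30 days
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 30}}
  ]
}
EOF

# Apply the rule to the bucket
gcloud storage buckets update gs://my-photos-bucket \
    --lifecycle-file=lifecycle.json
```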
61.Every employee of your company has a Google account. Your operational team needs to
manage a large number of instances on Compute Engine. Each member of this team needs only
administrative access to the servers. Your security team wants to ensure that the deployment of
credentials is operationally efficient and must be able to determine who accessed a given instance.
What should you do?
A. Generate a new SSH key pair. Give the private key to each member of your team. Configure the
public key in the metadata of each instance.
B. Ask each member of the team to generate a new SSH key pair and to send you their public key.
Use a configuration management tool to deploy those keys on each instance.
C. Ask each member of the team to generate a new SSH key pair and to add the public key to their
Google account. Grant the “compute.osAdminLogin” role to the Google group corresponding to this
team.
D. Generate a new SSH key pair. Give the private key to each member of your team. Configure the
public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide
public SSH keys on each instance.
Answer: C
Explanation:
https://cloud.google.com/compute/docs/instances/managing-instance-access
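Answer C corresponds to enabling OS Login and granting the role to the team's group; the project ID and group address below are placeholders:

```shell
# Enable OS Login project-wide so Google identities are used for SSH
gcloud compute project-info add-metadata \
    --metadata=enable-oslogin=TRUE

# Grant the admin login role to the team's Google group
gcloud projects add-iam-policy-binding my-project \
    --member="group:ops-team@example.com" \
    --role="roles/compute.osAdminLogin"
```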
62.You have an on-premises data analytics set of binaries that processes data files in memory for
about 45 minutes every midnight. The sizes of those data files range from 1 gigabyte to 16 gigabytes.
You want to migrate this application to Google Cloud with minimal effort and cost.
What should you do?
A. Upload the code to Cloud Functions. Use Cloud Scheduler to start the application.
B. Create a container for the set of binaries. Use Cloud Scheduler to start a Cloud Run job for the
container.
C. Create a container for the set of binaries Deploy the container to Google Kubernetes Engine (GKE)
and use the Kubernetes scheduler to start the application.
D. Lift and shift to a VM on Compute Engine. Use an instance schedule to start and stop the instance.
Answer: B
63.Your company is using Google Workspace to manage employee accounts. Anticipated growth will
increase the number of personnel from 100 employees to 1,000 employees within 2 years. Most
employees will need access to your company's Google Cloud account. The systems and processes
will need to support 10x growth without performance degradation, unnecessary complexity, or
security issues.
What should you do?
A. Migrate the users to Active Directory. Connect the Human Resources system to Active Directory.
Turn on Google Cloud Directory Sync (GCDS) for Cloud Identity. Turn on Identity Federation from
Cloud Identity to Active Directory.
B. Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud
Identity.
C. Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor
authentication for domain wide delegation.
D. Use a third-party identity provider service through federation. Synchronize the users from Google
Workspace to the third-party provider in real time.
Answer: B
64.Your continuous integration and delivery (CI/CD) server can't execute Google Cloud actions in a
specific project because of permission issues. You need to validate whether the used service account
has the appropriate roles in the specific project.
What should you do?
A. Open the Google Cloud console, and run a query to determine which resources this service
account can access.
B. Open the Google Cloud console, and run a query of the audit logs to find permission denied errors
for this service account.
C. Open the Google Cloud console, and check the organization policies.
D. Open the Google Cloud console, and check the Identity and Access Management (IAM) roles
assigned to the service account at the project or inherited from the folder or organization levels.
Answer: D
Explanation:
This answer is the most effective way to validate whether the service account used by the CI/CD
server has the appropriate roles in the specific project. By checking the IAM roles assigned to the
service account, you can see which permissions the service account has and which resources it can
access. You can also check if the service account inherits any roles from the folder or organization
levels, which may affect its access to the project. You can use the Google Cloud console, the gcloud
command-line tool, or the IAM API to view the IAM roles of a service account.
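The check in answer D can be sketched from the CLI; the project ID and service account email are placeholders:

```shell
# List the roles granted to the service account in the project
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:ci-cd@my-project.iam.gserviceaccount.com" \
    --format="table(bindings.role)"
```

Roles inherited from the folder or organization level won't appear in the project policy, so the same query should also be run with gcloud resource-manager at those levels if access is still unexplained.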