
Exam Questions Associate-Cloud-Engineer


Google Cloud Certified - Associate Cloud Engineer

https://www.2passeasy.com/dumps/Associate-Cloud-Engineer/


NEW QUESTION 1
Your organization uses Active Directory (AD) to manage user identities. Each user uses this identity for federated access to various on-premises systems. Your
security team has adopted a policy that requires users to log into Google Cloud with their AD identity instead of their own login. You want to follow the
Google-recommended practices to implement this policy. What should you do?

A. Sync identities with Cloud Directory Sync, and then enable SAML for single sign-on.
B. Sync identities in the Google Admin console, and then enable OAuth for single sign-on.
C. Sync identities with a third-party LDAP sync tool, and then copy passwords to allow simplified login with the same credentials.
D. Sync identities with Cloud Directory Sync, and then copy passwords to allow simplified login with the same credentials.

Answer: A

NEW QUESTION 2
Your coworker has helped you set up several configurations for gcloud. You've noticed that you're running commands against the wrong project. Being new to the
company, you haven't yet memorized any of the projects. With the fewest steps possible, what's the fastest way to switch to the correct configuration?

A. Run gcloud configurations list followed by gcloud configurations activate .


B. Run gcloud config list followed by gcloud config activate.
C. Run gcloud config configurations list followed by gcloud config configurations activate.
D. Re-authenticate with the gcloud auth login command and select the correct configurations on login.

Answer: C

Explanation:
gcloud config configurations list lists the existing named configurations, and gcloud config configurations activate switches to the named configuration you specify. This is the fewest-step way to find and activate the correct configuration.
By contrast, gcloud auth login obtains access credentials for your user account via a web-based authorization flow. When that command completes successfully, it sets the active account in the current configuration to the account specified, creating a configuration named default if none exists. It does not help you pick among existing configurations.
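
For illustration, a minimal sketch of the two commands; the configuration name my-project-config is a hypothetical placeholder:

# List all named gcloud configurations and note which one is active
gcloud config configurations list

# Switch to the configuration tied to the correct project
gcloud config configurations activate my-project-config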

NEW QUESTION 3
Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the
solution. What should you do?

A. Search for the CMS solution in Google Cloud Marketplace. Use the gcloud CLI to deploy the solution.
B. Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from Cloud Marketplace.
C. Search for the CMS solution in Google Cloud Marketplace. Use Terraform and the Cloud Marketplace ID to deploy the solution with the appropriate parameters.
D. Use the installation guide of the CMS provider. Perform the installation through your configuration management system.

Answer: B

NEW QUESTION 4
You are developing a new application and are looking for a Jenkins installation to build and deploy your source code. You want to automate the installation as
quickly and easily as possible. What should you do?

A. Deploy Jenkins through the Google Cloud Marketplace.


B. Create a new Compute Engine instance. Run the Jenkins executable.
C. Create a new Kubernetes Engine cluster. Create a deployment for the Jenkins image.
D. Create an instance template with the Jenkins executable. Create a managed instance group with this template.

Answer: A

Explanation:

Installing Jenkins
In this section, you use Cloud Marketplace to provision a Jenkins instance. You customize this instance to use the agent image you created in the previous
section.
Go to the Cloud Marketplace solution for Jenkins. Click Launch on Compute Engine.
Change the Machine Type field to 4 vCPUs, 15 GB memory (n1-standard-4).
Click Deploy and wait for your Jenkins instance to finish being provisioned. When it is finished, you will see: Jenkins has been deployed.
https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine#installing_jenkins

NEW QUESTION 5
You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new
project, using the fewest possible steps. What should you do?

A. Use gcloud iam roles copy and specify the production project as the destination project.
B. Use gcloud iam roles copy and specify your organization as the destination organization.
C. In the Google Cloud Platform Console, use the ‘create role from role’ functionality.


D. In the Google Cloud Platform Console, use the ‘create role’ functionality and select all applicable permissions.

Answer: A
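
As a hedged sketch, the copy could look like the following; the project IDs and role name are hypothetical, and each invocation copies a single custom role:

# Copy a custom role from the development project into the production project
gcloud iam roles copy --source="projects/dev-project/roles/myCustomRole" \
    --destination=myCustomRole --dest-project=prod-project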

NEW QUESTION 6
Your team is using Linux instances on Google Cloud. You need to ensure that your team logs in to these instances in the most secure and cost efficient way. What
should you do?

A. Attach a public IP to the instances and allow incoming connections from the internet on port 22 for SSH.
B. Use a third-party tool to provide remote access to the instances.
C. Use the gcloud compute ssh command with the --tunnel-through-iap flag. Allow ingress traffic from the IP range 35.235.240.0/20 on port 22.
D. Create a bastion host with public internet access. Create the SSH tunnel to the instance through the bastion host.

Answer: C
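
A minimal sketch of both pieces, assuming a hypothetical instance name and firewall rule name:

# Allow IAP's TCP-forwarding range to reach SSH on the instances
gcloud compute firewall-rules create allow-iap-ssh \
    --direction=INGRESS --action=ALLOW --rules=tcp:22 \
    --source-ranges=35.235.240.0/20

# SSH to an instance without a public IP by tunneling through IAP
gcloud compute ssh my-instance --tunnel-through-iap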

NEW QUESTION 7
Your company runs its Linux workloads on Compute Engine instances. Your company will be working with a new operations partner that does not use Google
Accounts. You need to grant access to the instances to your operations partner so they can maintain the installed tooling. What should you do?

A. Enable Cloud IAP for the Compute Engine instances, and add the operations partner as a Cloud IAP Tunnel User.
B. Tag all the instances with the same network tag. Create a firewall rule in the VPC to grant TCP access on port 22 for traffic from the operations partner to instances with the network tag.
C. Set up Cloud VPN between your Google Cloud VPC and the internal network of the operations partner.
D. Ask the operations partner to generate SSH key pairs, and add the public keys to the VM instances.

Answer: D

Explanation:
IAP controls access to your App Engine apps and Compute Engine VMs running on Google Cloud. It leverages user identity and the context of a request to determine if a user should be allowed access. IAP is a building block toward BeyondCorp, an enterprise security model that enables employees to work from untrusted networks without using a VPN.
By default, IAP uses Google identities and IAM. By leveraging Identity Platform instead, you can authenticate users with a wide range of external identity providers, such as:
Email/password
OAuth (Google, Facebook, Twitter, GitHub, Microsoft, etc.)
SAML
OIDC
Phone number
Custom
Anonymous
This is useful if your application is already using an external authentication system, and migrating your users to Google accounts is impractical. Because the operations partner does not use Google accounts at all, plain SSH key pairs added to the VM instances are the practical choice here.
https://cloud.google.com/iap/docs/using-tcp-forwarding#grant-permission

NEW QUESTION 8
You have an application on a general-purpose Compute Engine instance that is experiencing excessive disk read throttling on its Zonal SSD Persistent Disk. The
application primarily reads large files from disk. The disk size is currently 350 GB. You want to provide the maximum amount of throughput while minimizing costs.
What should you do?

A. Increase the size of the disk to 1 TB.


B. Increase the allocated CPU to the instance.
C. Migrate to use a Local SSD on the instance.
D. Migrate to use a Regional SSD on the instance.

Answer: C

Explanation:
Standard persistent disks are efficient and economical for handling sequential read/write operations, but they aren't optimized to handle high rates of random
input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD persistent disks. SSD persistent disks are designed for single-
digit millisecond latencies. Observed latency is application specific.

NEW QUESTION 9
Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on Compute Engine. Each member
of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and
must be able to determine who accessed a given instance. What should you do?

A. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key in the metadata of each instance.
B. Ask each member of the team to generate a new SSH key pair and to send you their public key. Use a configuration management tool to deploy those keys on each instance.
C. Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the "compute.osAdminLogin" role to the Google group corresponding to this team.
D. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide public SSH keys on each instance.

Answer: C

Explanation:
https://cloud.google.com/compute/docs/instances/managing-instance-access
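
A sketch of the IAM binding behind option C; the project ID and group address are hypothetical:

# Grant administrator-level OS Login access to the team's Google group
gcloud projects add-iam-policy-binding my-project \
    --member="group:ops-team@example.com" \
    --role="roles/compute.osAdminLogin"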


NEW QUESTION 10
You need to set a budget alert for use of Compute Engine services on one of the three Google Cloud Platform projects that you manage. All three projects are
linked to a single billing account. What should you do?

A. Verify that you are the project billing administrator. Select the associated billing account and create a budget and alert for the appropriate project.
B. Verify that you are the project billing administrator. Select the associated billing account and create a budget and a custom alert.
C. Verify that you are the project administrator. Select the associated billing account and create a budget for the appropriate project.
D. Verify that you are the project administrator. Select the associated billing account and create a budget and a custom alert.

Answer: A

Explanation:
https://cloud.google.com/iam/docs/understanding-roles#billing-roles
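
For reference, the same budget can be created from the CLI; a sketch with placeholder IDs (the display name and amount are hypothetical):

# Create a budget on the billing account, scoped to a single project
gcloud billing budgets create \
    --billing-account=0X0X0X-0X0X0X-0X0X0X \
    --display-name="compute-budget" \
    --budget-amount=1000USD \
    --filter-projects=projects/my-project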

NEW QUESTION 10
Your company wants to standardize the creation and management of multiple Google Cloud resources using Infrastructure as Code. You want to minimize the
amount of repetitive code needed to manage the environment. What should you do?

A. Create a bash script that contains all requirement steps as gcloud commands
B. Develop templates for the environment using Cloud Deployment Manager
C. Use curl in a terminal to send a REST request to the relevant Google API for each individual resource.
D. Use the Cloud Console interface to provision and manage all related resources

Answer: B

Explanation:
You can use Google Cloud Deployment Manager to create a set of Google Cloud resources and manage them as a unit, called a deployment. For example, if your
team's development environment needs two virtual machines (VMs) and a BigQuery database, you can define these resources in a configuration file, and use
Deployment Manager to create, change, or delete these resources. You can make the configuration file part of your team's code repository, so that anyone can
create the same environment with consistent results. https://cloud.google.com/deployment-manager/docs/quickstart
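
A sketch of the workflow, assuming a hypothetical configuration file dev-env.yaml that declares the VMs and the BigQuery dataset as resources:

# Create the whole environment as a single deployment
gcloud deployment-manager deployments create dev-environment --config=dev-env.yaml

# Later, apply changes to the same set of resources from an edited config
gcloud deployment-manager deployments update dev-environment --config=dev-env.yaml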

NEW QUESTION 13
You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are
the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly. How should you upload the file?

A. Use the GCP Console to transfer the file instead of gsutil.


B. Enable parallel composite uploads using gsutil on the file transfer.
C. Decrease the TCP window size on the machine initiating the transfer.
D. Change the storage class of the bucket from Nearline to Multi-Regional.

Answer: B

Explanation:
https://cloud.google.com/storage/docs/parallel-composite-uploads https://cloud.google.com/storage/docs/uploads-downloads#parallel-composite-uploads
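
A sketch of enabling parallel composite uploads for one gsutil invocation; the file and bucket names are hypothetical, and 150M is the threshold value the docs use as an example:

# Upload the file in parallel-composed chunks once it exceeds the threshold
gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" \
    cp largefile.dat gs://my-nearline-bucket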

NEW QUESTION 18
You are managing a Data Warehouse on BigQuery. An external auditor will review your company's processes, and multiple external consultants will need view
access to the data. You need to provide them with view access while following Google-recommended practices. What should you do?

A. Grant each individual external consultant the role of BigQuery Editor


B. Grant each individual external consultant the role of BigQuery Viewer
C. Create a Google Group that contains the consultants and grant the group the role of BigQuery Editor
D. Create a Google Group that contains the consultants, and grant the group the role of BigQuery Viewer

Answer: D

NEW QUESTION 23
You have been asked to migrate a Docker application from your data center to the cloud. Your solution architect has suggested uploading docker images to GCR in one
project and running an application in a GKE cluster in a separate project. You want to store images in the project img-278322 and run the application in the project
prod-278986. You want to tag the image as acme_track_n_trace:v1. You want to follow Google-recommended practices. What should you do?

A. Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace


B. Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1
C. Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace
D. Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace:v1

Answer: B

Explanation:
Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1 is the right answer.
This command correctly tags the image as acme_track_n_trace:v1 and uploads it to the img-278322 project.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit


NEW QUESTION 27
A company wants to build an application that stores images in a Cloud Storage bucket and wants to generate thumbnails as well as resize the images. They want
to use a Google-managed service that can scale up and scale down to zero automatically with minimal effort. You have been asked to recommend a service.
Which GCP service would you suggest?

A. Google Compute Engine


B. Google App Engine
C. Cloud Functions
D. Google Kubernetes Engine

Answer: C

Explanation:

Cloud Functions is Google Cloud’s event-driven serverless compute platform. It automatically scales based on the load and requires no additional configuration.
You pay only for the resources used.
Ref: https://cloud.google.com/functions
While the other options (Google Compute Engine, Google Kubernetes Engine, and Google App Engine) also support autoscaling, they must be configured explicitly based
on the load and do not offer the effortless scale-to-zero behavior of Cloud Functions.
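
As an illustration, a function that fires on each uploaded image could be deployed as below; the function name, runtime, bucket, and entry point are all hypothetical:

# Deploy a function triggered whenever an object lands in the bucket
gcloud functions deploy generate-thumbnail \
    --runtime=python311 --trigger-bucket=my-images-bucket \
    --entry-point=generate_thumbnail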

NEW QUESTION 31
You have a web application deployed as a managed instance group. You have a new version of the application to gradually deploy. Your web application is
currently receiving live web traffic. You want to ensure that the available capacity does not decrease during the deployment. What should you do?

A. Perform a rolling-action start-update with maxSurge set to 0 and maxUnavailable set to 1.
B. Perform a rolling-action start-update with maxSurge set to 1 and maxUnavailable set to 0.
C. Create a new managed instance group with an updated instance template. Add the group to the backend service for the load balancer. When all instances in the new managed instance group are healthy, delete the old managed instance group.
D. Create a new instance template with the new application version. Update the existing managed instance group with the new instance template. Delete the instances in the managed instance group to allow the managed instance group to recreate the instance using the new instance template.

Answer: B

Explanation:
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#max_
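
A sketch of the rolling update, with hypothetical group and template names:

# Add one surge instance at a time and never take a serving instance offline early
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --version=template=my-new-template \
    --max-surge=1 --max-unavailable=0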

NEW QUESTION 36
You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in
the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?

A. Configure username and password by using the gcloud configure set proxy/username and gcloud configure set proxy/password commands.
B. Encode the username and password in SHA-256 encoding, and save it to a text file. Use the filename as the value in the gcloud configure set core/custom_ca_certs_file command.
C. Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configuration file.
D. Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.

Answer: D
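
A sketch of setting the properties through environment variables so the credentials never land in the gcloud configuration (the values are placeholders):

# Supply proxy credentials via environment variables instead of gcloud properties
export CLOUDSDK_PROXY_USERNAME="proxy-user"
export CLOUDSDK_PROXY_PASSWORD="proxy-password"

# Any subsequent gcloud command picks the credentials up from the environment
gcloud compute instances list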

NEW QUESTION 38
A colleague handed over a Google Cloud Platform project for you to maintain. As part of a security checkup, you want to review who has been granted the Project
Owner role. What should you do?

A. In the console, validate which SSH keys have been stored as project-wide keys.
B. Navigate to Identity-Aware Proxy and check the permissions for these resources.
C. Enable Audit Logs on the IAM & admin page for all resources, and validate the results.
D. Use the command gcloud projects get-iam-policy to view the current role assignments.

Answer: D

Explanation:
A simple approach is to use the command flags available when listing the IAM policy for a given project. For instance, the following command:
gcloud projects get-iam-policy $PROJECT_ID --flatten="bindings[].members" --format="table(bindings.members)" --filter="bindings.role:roles/owner"
outputs all the users and service accounts associated with the role roles/owner in the project in question.
https://groups.google.com/g/google-cloud-dev/c/Z6sZs7TvygQ?pli=1

NEW QUESTION 40


You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day.
At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the
problem. What should you do?

A. Use the BigQuery interface to review the nightly job and look for any errors.
B. Review the Error Reporting page in the Cloud Console to find any errors.
C. In Cloud Logging, create a filter for your Data Studio report.
D. Use the open source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.

Answer: D

Explanation:
Cloud Debugger helps inspect the state of an application, at any code location, without stopping or slowing down the running app.
https://cloud.google.com/debugger/docs

NEW QUESTION 43
Your projects incurred more costs than you expected last month. Your research reveals that a development
GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What
should you do?

A. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource.
B. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE Cluster Operations resource.
C. 1. Go to the GKE console, and delete existing clusters.2. Recreate a new cluster.3. Clear the option to enable legacy Stackdriver Logging.
D. 1. Go to the GKE console, and delete existing clusters.2. Recreate a new cluster.3. Clear the option to enable legacy Stackdriver Monitoring.

Answer: A

Explanation:
https://cloud.google.com/logging/docs/api/v2/resource-list
GKE Containers emit more logs than GKE Cluster Operations.
GKE Container resource labels:
cluster_name: An immutable name for the cluster the container is running in.
namespace_id: Immutable ID of the cluster namespace the container is running in.
instance_id: Immutable ID of the GCE instance the container is running in.
pod_id: Immutable ID of the pod the container is running in.
container_name: Immutable name of the container.
zone: The GCE zone in which the instance is running.
GKE Cluster Operations resource labels:
project_id: The identifier of the GCP project associated with this resource, such as "my-project".
cluster_name: The name of the GKE Cluster.
location: The location in which the GKE Cluster is running.

NEW QUESTION 46
You are using Google Kubernetes Engine with autoscaling enabled to host a new application. You want to expose this new application to the public, using HTTPS
on a public IP address. What should you do?

A. Create a Kubernetes Service of type NodePort for your application, and a Kubernetes Ingress to expose this Service via a Cloud Load Balancer.
B. Create a Kubernetes Service of type ClusterIP for your application. Configure the public DNS name of your application using the IP of this Service.
C. Create a Kubernetes Service of type NodePort to expose the application on port 443 of each node of the Kubernetes cluster. Configure the public DNS name of your application with the IP of every node of the cluster to achieve load-balancing.
D. Create a HAProxy pod in the cluster to load-balance the traffic to all the pods of the application. Forward the public traffic to HAProxy with an iptables rule. Configure the DNS name of your application using the public IP of the node HAProxy is running on.

Answer: A

NEW QUESTION 51
You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to
ensure that the clusters can grow in nodes when needed. What should you do?

A. Create a new subnet in the same region as the subnet being used.
B. Add an alias IP range to the subnet used by the GKE clusters.
C. Create a new VPC, and set up VPC peering with the existing VPC.
D. Expand the CIDR range of the relevant subnet for the cluster.

Answer: D

Explanation:
gcloud compute networks subnets expand-ip-range NAME expands the IP range of a Compute Engine subnetwork.
https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range
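
A sketch with hypothetical subnet, region, and prefix length; the prefix length can only shrink numerically, meaning the range can only grow:

# Widen the subnet's primary range, for example from /24 to /20
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-central1 --prefix-length=20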

NEW QUESTION 56
You have been asked to create robust Virtual Private Network (VPN) connectivity between a new Virtual Private Cloud (VPC) and a remote site. Key requirements
include dynamic routing, a shared address space of 10.19.0.1/22, and no overprovisioning of tunnels during a failover event. You want to follow
Google-recommended practices to set up a high availability Cloud VPN. What should you do?

A. Use a custom mode VPC network, configure static routes, and use active/passive routing.
B. Use an automatic mode VPC network, configure static routes, and use active/active routing.
C. Use a custom mode VPC network, use Cloud Router Border Gateway Protocol (BGP) routes, and use active/passive routing.
D. Use an automatic mode VPC network, use Cloud Router Border Gateway Protocol (BGP) routes, and configure policy-based routing.

Answer: C

Explanation:


https://cloud.google.com/network-connectivity/docs/vpn/concepts/best-practices

NEW QUESTION 57
Your auditor wants to view your organization's use of data in Google Cloud. The auditor is most interested in auditing who accessed data in Cloud Storage
buckets. You need to help the auditor access the data they need. What should you do?

A. Assign the appropriate permissions, and then use Cloud Monitoring to review metrics
B. Use the export logs API to provide the Admin Activity Audit Logs in the format they want
C. Turn on Data Access Logs for the buckets they want to audit, and then build a query in the log viewer that filters on Cloud Storage
D. Assign the appropriate permissions, and then create a Data Studio report on Admin Activity Audit Logs

Answer: C

Explanation:
Cloud Audit Logs provides the following audit logs for each Cloud project, folder, and organization: Admin Activity audit logs, Data Access audit logs, System Event audit logs, and Policy Denied audit logs. Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
https://cloud.google.com/logging/docs/audit#types
https://cloud.google.com/logging/docs/audit#data-access
Cloud Storage: When Cloud Storage usage logs are enabled, Cloud Storage writes usage data to the Cloud Storage bucket, which generates Data Access audit logs for the bucket. The generated Data Access audit log has its caller identity redacted.
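
Once Data Access logs are enabled on the buckets, a query along these lines narrows the view to Cloud Storage data-access entries; the project ID is hypothetical:

# Read Cloud Storage Data Access audit entries from Cloud Logging
gcloud logging read \
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket"' \
    --project=my-project --limit=10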

NEW QUESTION 62
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will
run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?

A. Deploy the monitoring pod in a StatefulSet object.


B. Deploy the monitoring pod in a DaemonSet object.
C. Reference the monitoring pod in a Deployment object.
D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.

Answer: B

Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns
In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed, which makes them a perfect fit for our monitoring pod.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd. For example, you could have a DaemonSet for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use different configurations for different hardware types and resource needs.
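
A minimal sketch of such a DaemonSet applied from the shell; the names and the agent image are hypothetical stand-ins for the third-party monitoring solution:

# Run one monitoring pod on every node, including nodes added by the autoscaler
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metrics-agent
spec:
  selector:
    matchLabels:
      app: metrics-agent
  template:
    metadata:
      labels:
        app: metrics-agent
    spec:
      containers:
      - name: metrics-agent
        image: example.com/metrics-agent:1.0   # hypothetical agent image
EOF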

NEW QUESTION 64
You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service
account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment. What should you do?

A. Use service account credentials in your on-premises application.


B. Use gcloud to create a key file for the service account that has appropriate permissions.
C. Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications.
D. Go to the IAM & admin console, grant a user account permissions similar to the service account permissions, and use this user account for authentication from
your data center.

Answer: B

NEW QUESTION 67
Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google’s recommended practices. Which
storage option should you use?

A. Multi-Regional Storage
B. Regional Storage
C. Nearline Storage
D. Coldline Storage

Answer: D

NEW QUESTION 71
You are in charge of provisioning access for all Google Cloud users in your organization. Your company recently acquired a startup company that has their own
Google Cloud organization. You need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the startup company's
organization as in your own organization. What should you do?

A. In the Google Cloud console for your organization, select Create role from selection, and choose the startup company's organization as the destination.
B. In the Google Cloud console for the startup company, select Create role from selection, and choose the startup company's Google Cloud organization as the source.
C. Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud organization as the destination.
D. Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup company's organization as the destination.

Answer: C


Explanation:
The gcloud iam roles copy command copies custom IAM roles between projects or organizations. Providing the Organization ID of the startup company's organization as the destination replicates your custom roles at the organization level in a single step, so the SREs receive the same project permissions there as in your own organization. Copying into every individual project ID would require far more steps and ongoing maintenance.

NEW QUESTION 72
You have a single binary application that you want to run on Google Cloud Platform. You decided to automatically scale the application based on underlying
infrastructure CPU usage. Your organizational policies require you to use virtual machines directly. You need to ensure that the application scaling is operationally
efficient and completed as quickly as possible. What should you do?

A. Create a Google Kubernetes Engine cluster, and use horizontal pod autoscaling to scale the application.
B. Create an instance template, and use the template in a managed instance group with autoscaling configured.
C. Create an instance template, and use the template in a managed instance group that scales up and down based on the time of day.
D. Use a set of third-party tools to build automation around scaling the application up and down, based on Stackdriver CPU usage monitoring.

Answer: B

Explanation:
Managed instance groups offer autoscaling capabilities that let you automatically add or delete instances from a managed instance group based on increases or
decreases in load (CPU Utilization in this case). Autoscaling helps your apps gracefully handle increases in traffic and reduce costs when the need for resources is
lower. You define the autoscaling policy and the autoscaler performs automatic scaling based on the measured load (CPU Utilization in this case). Autoscaling
works by adding more instances to your instance group when there is more load (upscaling), and deleting instances when the need for instances is lowered
(downscaling). Ref: https://cloud.google.com/compute/docs/autoscaler
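
A sketch of the two steps with hypothetical names; the autoscaler here targets 60% CPU utilization:

# Create a managed instance group from an existing template
gcloud compute instance-groups managed create my-app-mig \
    --template=my-app-template --size=2 --zone=us-central1-a

# Scale the group on the CPU usage of the underlying VMs
gcloud compute instance-groups managed set-autoscaling my-app-mig \
    --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.6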

NEW QUESTION 76
Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to maintain the number of running
instances specified by the template to be able to process expected application traffic. What should you do?

A. Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.
B. Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.
C. Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.
D. Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.

Answer: A

Explanation:
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-migs https://cloud.google.com/compute/docs/instance-
templates#how_to_update_instance_templates

NEW QUESTION 80
You are building an application that processes data files uploaded from thousands of suppliers. Your primary goals for the application are data security and the
expiration of aged data. You need to design the application to:
•Restrict access so that suppliers can access only their own data.
•Give suppliers write access to data only for 30 minutes.
•Delete data that is over 45 days old.
You have a very short development cycle, and you need to make sure that the application requires minimal maintenance. Which two strategies should you use?
(Choose two.)

A. Build a lifecycle policy to delete Cloud Storage objects after 45 days.


B. Use signed URLs to allow suppliers limited time access to store their objects.
C. Set up an SFTP server for your application, and create a separate user for each supplier.
D. Build a Cloud function that triggers a timer of 45 days to delete objects that have expired.
E. Develop a script that loops through all Cloud Storage buckets and deletes any buckets that are older than 45 days.

Answer: AB

Explanation:
(A) Object Lifecycle Management Delete
The Delete action deletes an object when the object meets all conditions specified in the lifecycle rule.
Exception: In buckets with Object Versioning enabled, deleting the live version of an object causes it to become a noncurrent version, while deleting a noncurrent
version deletes that version permanently.
https://cloud.google.com/storage/docs/lifecycle#delete
(B) Signed URLs
This page provides an overview of signed URLs, which you use to give time-limited resource access to anyone in possession of the URL, regardless of whether
they have a Google account
https://cloud.google.com/storage/docs/access-control/signed-urls
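
A sketch of both strategies; the bucket, key file, and object names are hypothetical:

# lifecycle.json contains: {"rule": [{"action": {"type": "Delete"}, "condition": {"age": 45}}]}
gsutil lifecycle set lifecycle.json gs://supplier-uploads

# Give a supplier 30 minutes of write access to their own object path
gsutil signurl -m PUT -d 30m service-account-key.json \
    gs://supplier-uploads/supplier-123/data.csv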

NEW QUESTION 81
You are hosting an application on bare-metal servers in your own data center. The application needs access to Cloud Storage. However, security policies prevent
the servers hosting the application from having public IP addresses or access to the internet. You want to follow Google-recommended practices to provide the
application with access to Cloud Storage. What should you do?


A. 1. Use nslookup to get the IP address for storage.googleapis.com. 2. Negotiate with the security team to be able to give a public IP address to the servers. 3. Only allow egress traffic from those servers to the IP addresses for storage.googleapis.com.
B. 1. Using Cloud VPN, create a VPN tunnel to a Virtual Private Cloud (VPC) in Google Cloud Platform (GCP). 2. In this VPC, create a Compute Engine instance and install the Squid proxy server on this instance. 3. Configure your servers to use that instance as a proxy to access Cloud Storage.
C. 1. Use Migrate for Compute Engine (formerly known as Velostrata) to migrate those servers to Compute Engine. 2. Create an internal load balancer (ILB) that uses storage.googleapis.com as backend. 3. Configure your new instances to use this ILB as proxy.
D. 1. Using Cloud VPN or Interconnect, create a tunnel to a VPC in GCP. 2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network through the VPN tunnel. 3. In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com.

Answer: D

Explanation:
Our requirement is to follow Google recommended practices to achieve the end result. Configuring Private Google Access for On-Premises Hosts is best achieved
by VPN/Interconnect + Advertise Routes + Use restricted Google IP Range.
Using Cloud VPN or Interconnect, create a tunnel to a VPC in GCP
Using Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network through the VPN tunnel.
In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com is the right answer right, and it
is what Google recommends.
Ref: https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid
You must configure routes so that Google API traffic is forwarded through your Cloud VPN or Cloud Interconnect connection, firewall rules on your on-premises
firewall to allow the outgoing traffic, and DNS so that traffic to Google APIs resolves to the IP range you've added to your routes.
You can use Cloud Router Custom Route Advertisement to announce the Restricted Google APIs IP addresses through Cloud Router to your on-premises
network. The Restricted Google APIs IP range is 199.36.153.4/30. While this is technically a public IP range, Google does not announce it publicly. This IP range
is only accessible to hosts that can reach your Google Cloud projects through internal IP ranges, such as through a Cloud VPN or Cloud Interconnect connection.
Without a public IP address or access to the internet, the only way to connect to Cloud Storage is through an internal route.
So negotiating with the security team to give public IP addresses to the servers is not right.
Following Google-recommended practices means using Google's managed services, so creating a Compute Engine instance in the VPC and installing the Squid proxy server on it is not right.
Migrating the servers to Compute Engine is unnecessarily drastic when Google fully supports hybrid connectivity architectures (https://cloud.google.com/hybrid-connectivity), so using Migrate for Compute Engine (formerly known as Velostrata) to migrate these servers to Compute Engine is not right.
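
A sketch of the custom route advertisement step with a hypothetical router name and region; the 199.36.153.4/30 range comes from the recommended setup above:

# Advertise the restricted.googleapis.com range to on-premises via BGP
gcloud compute routers update my-router --region=us-east1 \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=199.36.153.4/30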

NEW QUESTION 83
Your customer has implemented a solution that uses Cloud Spanner and notices some read latency-related performance issues on one table. This table is
accessed only by their users using a primary key. The table schema is shown below.

You want to resolve the issue. What should you do?

A. Option A
B. Option B
C. Option C
D. Option D

Answer: C

Explanation:


As mentioned in Schema and data model, you should be careful when choosing a primary key to not accidentally create hotspots in your database. One cause of
hotspots is having a column whose value monotonically increases as the first key part, because this results in all inserts occurring at the end of your key space.
This pattern is undesirable because Cloud Spanner divides data among servers by key ranges, which means all your inserts will be directed at a single server that
will end up doing all the work. https://cloud.google.com/spanner/docs/schema-design#primary-key-prevent-hotspots

NEW QUESTION 86
You need to create a new billing account and then link it with an existing Google Cloud Platform project. What should you do?

A. Verify that you are Project Billing Manager for the GCP project. Update the existing project to link it to the existing billing account.
B. Verify that you are Project Billing Manager for the GCP project. Create a new billing account and link the new billing account to the existing project.
C. Verify that you are Billing Administrator for the billing account. Create a new project and link the new project to the existing billing account.
D. Verify that you are Billing Administrator for the billing account. Update the existing project to link it to the existing billing account.

Answer: B

Explanation:
Billing Administrators can not create a new billing account, and the project is presumably already created. Project Billing Manager allows you to link the created
billing account to the project. It is vague on how the billing account gets created but by process of elimination
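
The linking step itself can also be done from the CLI; a sketch with placeholder IDs:

# Link the newly created billing account to the existing project
gcloud billing projects link my-project \
    --billing-account=0X0X0X-0X0X0X-0X0X0X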

NEW QUESTION 89
You have deployed multiple Linux instances on Compute Engine. You plan on adding more instances in the coming weeks. You want to be able to access all of
these instances through your SSH client over the Internet without having to configure specific access on the existing and new instances. You do not want the
Compute Engine instances to have a public IP. What should you do?

A. Configure Cloud Identity-Aware Proxy for HTTPS resources.
B. Configure Cloud Identity-Aware Proxy for SSH and TCP resources.
C. Create an SSH key pair and store the public key as a project-wide SSH key.
D. Create an SSH key pair and store the private key as a project-wide SSH key.

Answer: B

Explanation:
https://cloud.google.com/iap/docs/using-tcp-forwarding

NEW QUESTION 90
You need to enable traffic between multiple groups of Compute Engine instances that are currently running two different GCP projects. Each group of Compute
Engine instances is running in its own VPC. What should you do?

A. Verify that both projects are in a GCP Organization. Create a new VPC and add all instances.
B. Verify that both projects are in a GCP Organization. Share the VPC from one project and request that the Compute Engine instances in the other project use this shared VPC.
C. Verify that you are the Project Administrator of both projects. Create two new VPCs and add all instances.
D. Verify that you are the Project Administrator of both projects. Create a new VPC and add all instances.

Answer: B

Explanation:
Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate
with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one
or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use
subnets in the Shared VPC network
https://cloud.google.com/vpc/docs/shared-vpc
"For example, an existing instance in a service project cannot be reconfigured to use a Shared VPC network, but a new instance can be created to use available
subnets in a Shared VPC network."
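
A sketch of the Shared VPC setup from the CLI, with hypothetical project IDs (requires the Shared VPC Admin role in the organization):

# Designate the host project, then attach the service project to it
gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project=host-project-id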

NEW QUESTION 92
Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a Google Kubernetes Engine cluster while optimizing for cost. What should you do?

A. Create a cluster with a single node pool by using standard VMs. Label the fault-tolerant Deployments as spot: true.
B. Create a cluster with a single node pool by using Spot VMs. Label the critical Deployments as spot: false.
C. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the Spot VM node pool and the fault-tolerant deployments on the node pool by using standard VMs.
D. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.

Answer: D


NEW QUESTION 96
You are configuring Cloud DNS. You want to create DNS records to point home.mydomain.com, mydomain.com, and www.mydomain.com to the IP address of your Google Cloud load balancer. What should you do?

A. Create one CNAME record to point mydomain.com to the load balancer, and create two A records to point WWW and HOME to mydomain.com respectively.
B. Create one CNAME record to point mydomain.com to the load balancer, and create two AAAA records to point WWW and HOME to mydomain.com respectively.
C. Create one A record to point mydomain.com to the load balancer, and create two CNAME records to point WWW and HOME to mydomain.com respectively.
D. Create one A record to point mydomain.com to the load balancer, and create two NS records to point WWW and HOME to mydomain.com respectively.

Answer: C

NEW QUESTION 97
Your company has a 3-tier solution running on Compute Engine. The configuration of the current infrastructure is shown below.

Each tier has a service account that is associated with all instances within it. You need to enable communication on TCP port 8080 between tiers as follows:
• Instances in tier #1 must communicate with tier #2.
• Instances in tier #2 must communicate with tier #3.
What should you do?

A. 1. Create an ingress firewall rule with the following settings:• Targets: all instances• Source filter: IP ranges (with the range set to 10.0.2.0/24)• Protocols: allow
all2. Create an ingress firewall rule with the following settings:• Targets: all instances• Source filter: IP ranges (with the range set to 10.0.1.0/24)•Protocols: allow
all
B. 1. Create an ingress firewall rule with the following settings:• Targets: all instances with tier #2 service account• Source filter: all instances with tier #1 service
account• Protocols: allow TCP:80802. Create an ingress firewall rule with the following settings:• Targets: all instances with tier #3 service account• Source filter:
all instances with tier #2 service account• Protocols: allow TCP: 8080
C. 1. Create an ingress firewall rule with the following settings:• Targets: all instances with tier #2 service account• Source filter: all instances with tier #1 service
account• Protocols: allow all2. Create an ingress firewall rule with the following settings:• Targets: all instances with tier #3 service account• Source filter: all
instances with tier #2 service account• Protocols: allow all
D. 1. Create an egress firewall rule with the following settings:• Targets: all instances• Source filter: IP ranges (with the range set to 10.0.2.0/24)• Protocols: allow
TCP: 80802. Create an egress firewall rule with the following settings:• Targets: all instances• Source filter: IP ranges (with the range set to 10.0.1.0/24)•
Protocols: allow TCP: 8080

Answer: B

Explanation:
1. Create an ingress firewall rule with the following settings: Targets: all instances with the tier #2 service account; Source filter: all instances with the tier #1 service account; Protocols: allow TCP:8080.
2. Create an ingress firewall rule with the following settings: Targets: all instances with the tier #3 service account; Source filter: all instances with the tier #2 service account; Protocols: allow TCP:8080.
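
A sketch of the first rule with hypothetical service account addresses; the tier #2 to tier #3 rule is identical in shape:

# Allow tier #1 instances to reach tier #2 instances on TCP 8080
gcloud compute firewall-rules create allow-tier1-to-tier2 \
    --direction=INGRESS --action=ALLOW --rules=tcp:8080 \
    --source-service-accounts=tier1-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=tier2-sa@my-project.iam.gserviceaccount.com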

NEW QUESTION 101


You need to provide a cost estimate for a Kubernetes cluster using the GCP pricing calculator for Kubernetes. Your workload requires high IOPS, and you will also
be using disk snapshots. You start by entering the number of nodes, average hours, and average days. What should you do next?

A. Fill in local SSD. Fill in persistent disk storage and snapshot storage.
B. Fill in local SSD. Add estimated cost for cluster management.
C. Select Add GPUs. Fill in persistent disk storage and snapshot storage.
D. Select Add GPUs. Add estimated cost for cluster management.

Answer: A

Explanation:
https://cloud.google.com/compute/docs/disks/local-ssd

NEW QUESTION 105


You manage three Google Cloud projects with the Cloud Monitoring API enabled. You want to follow Google-recommended practices to visualize CPU and
network metrics for all three projects together. What should you do?

A. 1. Create a Cloud Monitoring Dashboard. 2. Collect metrics and publish them into Pub/Sub topics. 3. Add CPU and network charts for each of the three projects.
B. 1. Create a Cloud Monitoring Dashboard. 2. Select the CPU and network metrics from the three projects. 3. Add CPU and network charts for each of the three projects.
C. 1. Create a Service Account and apply roles/viewer on the three projects. 2. Collect metrics and publish them to the Cloud Monitoring API. 3. Add CPU and network charts for each of the three projects.
D. 1. Create a fourth Google Cloud project. 2. Create a Cloud Monitoring Workspace from the fourth project and add the other three projects.

Answer: B

NEW QUESTION 109


You are configuring service accounts for an application that spans multiple projects. Virtual machines (VMs) running in the web-applications project need access to
BigQuery datasets in crm-databases-proj. You want to follow Google-recommended practices to give access to the service account in the web-applications project.
What should you do?

A. Give "project owner" for web-applications appropriate roles to crm-databases-proj.
B. Give "project owner" role to crm-databases-proj and the web-applications project.
C. Give "project owner" role to crm-databases-proj and bigquery.dataViewer role to web-applications.
D. Give bigquery.dataViewer role to crm-databases-proj and appropriate roles to web-applications.

Answer: C

NEW QUESTION 112


You want to configure an SSH connection to a single Compute Engine instance for users in the dev1 group. This instance is the only resource in this particular
Google Cloud Platform project that the dev1 users should be able to connect to. What should you do?

A. Set metadata to enable-oslogin=true for the instance. Grant the dev1 group the compute.osLogin role. Direct them to use the Cloud Shell to ssh to that instance.
B. Set metadata to enable-oslogin=true for the instance. Set the service account to no service account for that instance. Direct them to use the Cloud Shell to ssh to that instance.
C. Enable block project-wide keys for the instance. Generate an SSH key for each user in the dev1 group. Distribute the keys to dev1 users and direct them to use their third-party tools to connect.
D. Enable block project-wide keys for the instance. Generate an SSH key and associate the key with that instance. Distribute the key to dev1 users and direct them to use their third-party tools to connect.

Answer: A

NEW QUESTION 115


You recently received a new Google Cloud project with an attached billing account where you will work. You need to create instances, set firewalls, and store data
in Cloud Storage. You want to follow
Google-recommended practices. What should you do?

A. Use the gcloud CLI services enable cloudresourcemanager.googleapis.com command to enable all resources.
B. Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com
command to enable the Cloud Storage APIs.
C. Open the Google Cloud console and enable all Google Cloud APIs from the API dashboard.
D. Open the Google Cloud console and run gcloud init --project <project-id> in a Cloud Shell.

Answer: B

NEW QUESTION 116


You have two subnets (subnet-a and subnet-b) in the default VPC. Your database servers are running in subnet-a. Your application servers and web servers are
running in subnet-b. You want to configure a firewall rule that only allows database traffic from the application servers to the database servers. What should you
do?

A. Create service accounts sa-app and sa-db. Associate the service account sa-app with the application servers and the service account sa-db with the database servers. Create an ingress firewall rule to allow network traffic from source service account sa-app to target service account sa-db.
B. Create network tags app-server and db-server. Add the app-server tag to the application servers and the db-server tag to the database servers. Create an egress firewall rule to allow network traffic from source network tag app-server to target network tag db-server.


C. Create a service account sa-app and a network tag db-server. Associate the service account sa-app with the application servers and the network tag db-server with the database servers. Create an ingress firewall rule to allow network traffic from source VPC IP addresses and target the subnet-a IP addresses.
D. Create a network tag app-server and service account sa-db. Add the tag to the application servers and associate the service account with the database servers. Create an egress firewall rule to allow network traffic from source network tag app-server to target service account sa-db.

Answer: C

NEW QUESTION 121


You are analyzing Google Cloud Platform service costs from three separate projects. You want to use this information to create service cost estimates by service
type, daily and monthly, for the next six months using standard query syntax. What should you do?

A. Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis.
B. Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis.
C. Export your transactions to a local file, and perform analysis with a desktop tool.
D. Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.

Answer: D

Explanation:
"...we recommend that you enable Cloud Billing data export to BigQuery at the same time that you create a Cloud Billing account. "
https://cloud.google.com/billing/docs/how-to/export-data-bigquery
https://medium.com/google-cloud/analyzing-google-cloud-billing-data-with-big-query-30bae1c2aae4
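
Once the export is enabled, a query along these lines aggregates cost by service and day; the dataset and table names are hypothetical stand-ins for the export table:

# Daily cost per service from the billing export table
bq query --use_legacy_sql=false '
SELECT service.description AS service,
       DATE(usage_start_time) AS usage_day,
       SUM(cost) AS total_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
GROUP BY service, usage_day
ORDER BY usage_day, service'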

NEW QUESTION 123


You are managing a project for the Business Intelligence (BI) department in your company. A data pipeline ingests data into BigQuery via streaming. You want the
users in the BI department to be able to run the custom SQL queries against the latest data in BigQuery. What should you do?

A. Create a Data Studio dashboard that uses the related BigQuery tables as a source and give the BI team view access to the Data Studio dashboard.
B. Create a Service Account for the BI team and distribute a new private key to each member of the BI team.
C. Use Cloud Scheduler to schedule a batch Dataflow job to copy the data from BigQuery to the BI team's internal data warehouse.
D. Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team.

Answer: D

Explanation:
When applied to a dataset, this role provides the ability to read the dataset's metadata and list tables in the dataset. When applied to a project, this role also
provides the ability to run jobs, including queries, within the project. A member with this role can enumerate their own jobs, cancel their own jobs, and enumerate
datasets within a project. Additionally, allows the creation of new datasets within the project; the creator is granted the BigQuery Data Owner role
(roles/bigquery.dataOwner) on these new datasets.
https://cloud.google.com/bigquery/docs/access-control
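
As a sketch, granting the role to the whole team in one binding could look like this; the project ID and group address are placeholders:

gcloud projects add-iam-policy-binding my-project \
    --member="group:bi-team@example.com" \
    --role="roles/bigquery.user"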

NEW QUESTION 126


You are developing a financial trading application that will be used globally. Data is stored and queried using a relational structure, and clients from all over the
world should get the exact identical state of the data. The application will be deployed in multiple regions to provide the lowest latency to end users. You need to
select a storage option for the application data while minimizing latency. What should you do?

A. Use Cloud Bigtable for data storage.


B. Use Cloud SQL for data storage.
C. Use Cloud Spanner for data storage.
D. Use Firestore for data storage.

Answer: C

Explanation:
Keywords: financial data used globally; data stored and queried using a relational structure (SQL); clients should get the exact identical state of the data (strong consistency); deployed in multiple regions; low latency to end users. Cloud Spanner is the only option that combines relational semantics with global strong consistency and multi-region, low-latency replication.
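
A minimal sketch of creating such an instance; the instance name and node count are placeholders, and nam-eur-asia1 is one of the multi-region configurations:

gcloud spanner instances create trading-instance \
    --config=nam-eur-asia1 \
    --description="Globally replicated trading data" \
    --nodes=3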

NEW QUESTION 127


You deployed an application on a managed instance group in Compute Engine. The application accepts Transmission Control Protocol (TCP) traffic on port 389
and requires you to preserve the IP address of the client who is making a request. You want to expose the application to the internet by using a load balancer.
What should you do?

A. Expose the application by using an external TCP Network Load Balancer.


B. Expose the application by using a TCP Proxy Load Balancer.
C. Expose the application by using an SSL Proxy Load Balancer.
D. Expose the application by using an internal TCP Network Load Balancer.

Answer: A

Explanation:
An external TCP Network Load Balancer is a passthrough load balancer, so the original client IP address is preserved all the way to the backends. Proxy-based options such as TCP Proxy and SSL Proxy terminate the client connection at the proxy and do not preserve the client IP by default.

NEW QUESTION 130


Your company uses BigQuery for data warehousing. Over time, many different business units in your company have created 1000+ datasets across hundreds of
projects. Your CIO wants you to examine all datasets to find tables that contain an employee_ssn column. You want to minimize effort in performing this task.
What should you do?

A. Go to Data Catalog and search for employee_ssn in the search box.


B. Write a shell script that uses the bq command line tool to loop through all the projects in your organization.
C. Write a script that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find the employee_ssn
column.

D. Write a Cloud Dataflow job that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find
employee_ssn column.

Answer: A

Explanation:
https://cloud.google.com/bigquery/docs/quickstarts/quickstart-web-ui?authuser=4

NEW QUESTION 133


You are running a web application on Cloud Run for a few hundred users. Some of your users complain that the initial web page of the application takes much
longer to load than the following pages. You want to follow Google's recommendations to mitigate the issue. What should you do?

A. Update your web application to use the protocol HTTP/2 instead of HTTP/1.1
B. Set the concurrency number to 1 for your Cloud Run service.
C. Set the maximum number of instances for your Cloud Run service to 100.
D. Set the minimum number of instances for your Cloud Run service to 3.

Answer: D
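
A minimum number of instances keeps warm instances around so that the first request does not pay the cold-start penalty. As a sketch (service name and region are placeholders):

gcloud run services update my-web-app \
    --region=us-central1 \
    --min-instances=3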

NEW QUESTION 136


You are managing several Google Cloud Platform (GCP) projects and need access to all logs for the past 60 days. You want to be able to explore and quickly
analyze the log contents. You want to follow Google- recommended practices to obtain the combined logs for all projects. What should you do?

A. Navigate to Stackdriver Logging and select resource.labels.project_id="*"


B. Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days.
C. Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days.
D. Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days.

Answer: B

Explanation:
Navigate to Stackdriver Logging and select resource.labels.project_id=*. is not right.
Log entries are held in Stackdriver Logging for a limited time known as the retention period which is 30 days (default configuration). After that, the entries are
deleted. To keep log entries longer, you need to export them outside of Stackdriver Logging by configuring log sinks.
Ref: https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-cloud-audit-logging
Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days. is not right.
While this works, it makes no sense to use Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery when Google provides a feature (export
sinks) that does exactly the same thing and works out of the box.Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days. is not right.
You can export logs by creating one or more sinks that include a logs query and an export destination. Supported destinations for exported log entries are Cloud
Storage, BigQuery, and
Pub/Sub.Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was created: a Google Cloud project, organization, folder, or billing account. To export from all projects of an organization, you can create an aggregated sink that can export log entries from all the projects, folders, and billing accounts of that organization. Ref: https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in Cloud Storage, but querying log information from Cloud Storage is harder than querying information from a BigQuery dataset. For this reason, we should prefer BigQuery over Cloud Storage.
Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days. is the right answer.
Using the same aggregated-sink mechanism with a BigQuery dataset as the destination, we now have all the combined logs in a BigQuery dataset. Querying information from a BigQuery dataset is easier and quicker than analyzing contents in a Cloud Storage bucket. As our requirement is to quickly analyze the log contents, we should prefer BigQuery over Cloud Storage.
Also, You can control storage costs and optimize storage usage by setting the default table expiration for newly created tables in a dataset. If you set the property
when the dataset is created, any table created in the dataset is deleted after the expiration period. If you set the property after the dataset is created, only new
tables are deleted after the expiration period.For example, if you set the default table expiration to 7 days, older data is automatically deleted after 1 week.Ref:
https://cloud.google.com/bigquery/docs/best-practices-storage
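
As an illustrative sketch of such an aggregated sink (organization ID, project, and dataset names are placeholders; the sink's writer identity must still be granted access to the dataset):

gcloud logging sinks create combined-logs \
    bigquery.googleapis.com/projects/my-project/datasets/combined_logs \
    --organization=123456789012 \
    --include-children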

NEW QUESTION 139


You are designing an application that uses WebSockets and HTTP sessions that are not distributed across the web servers. You want to ensure the application
runs properly on Google Cloud Platform. What should you do?

A. Meet with the cloud enablement team to discuss load balancer options.
B. Redesign the application to use a distributed user session service that does not rely on WebSockets and HTTP sessions.
C. Review the encryption requirements for WebSocket connections with the security team.
D. Convert the WebSocket code to use HTTP streaming.

Answer: A

Explanation:

Google HTTP(S) Load Balancing has native support for the WebSocket protocol when you use HTTP or HTTPS, not HTTP/2, as the protocol to the backend.
Ref: https://cloud.google.com/load-balancing/docs/https#websocket_proxy_support
We don't need to convert the WebSocket code to use HTTP streaming or redesign the application, as WebSocket support is offered by Google HTTP(S) Load Balancing. Reviewing the encryption requirements is a good idea, but it has nothing to do with WebSockets.

NEW QUESTION 143


You want to find out when users were added to Cloud Spanner Identity Access Management (IAM) roles on your Google Cloud Platform (GCP) project. What
should you do in the GCP Console?

A. Open the Cloud Spanner console to review configurations.


B. Open the IAM & admin console to review IAM policies for Cloud Spanner roles.
C. Go to the Stackdriver Monitoring console and review information for Cloud Spanner.
D. Go to the Stackdriver Logging console, review admin activity logs, and filter them for Cloud Spanner IAM roles.

Answer: D

Explanation:
https://cloud.google.com/monitoring/audit-logging
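
IAM changes on a project are recorded as SetIamPolicy calls in the Admin Activity audit log, so a filter along these lines surfaces when roles were granted; the project ID is a placeholder, and you would inspect each entry's policy delta for Cloud Spanner roles:

gcloud logging read \
    'protoPayload.methodName="SetIamPolicy"' \
    --project=my-project \
    --freshness=30d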

NEW QUESTION 145


You have one GCP account running in your default region and zone and another account running in a
non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface.
What should you do?

A. Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts
when running the commands to start the Compute Engine instances.
B. Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances.
C. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances.
D. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.

Answer: A

Explanation:
"Run gcloud configurations list to start the Compute Engine instances". How the heck are you expecting to "start" GCE instances doing "configuration list".
Each gcloud configuration has a 1 to 1 relationship with the region (if a region is defined). Since we have two different regions, we would need to create two
separate configurations using gcloud config configurations createRef: https://cloud.google.com/sdk/gcloud/reference/config/configurations/create
Secondly, you can activate each configuration independently by running gcloud config configurations activate [NAME]Ref:
https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate
Finally, while each configuration is active, you can run the gcloud compute instances start [NAME] command to start the instance in the configurations
region.https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
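
Putting it together, a sketch of the workflow; the account, project, and zone values are placeholders:

gcloud config configurations create account-a
gcloud config set account user-a@example.com
gcloud config set project project-a
gcloud config set compute/zone us-central1-a     # default region/zone
gcloud compute instances create instance-a

gcloud config configurations create account-b
gcloud config set account user-b@example.com
gcloud config set project project-b
gcloud config set compute/zone europe-west1-b    # non-default region/zone
gcloud compute instances create instance-b

# Switch back at any time:
gcloud config configurations activate account-a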

NEW QUESTION 149


You have an application that uses Cloud Spanner as a backend database. The application has a very predictable traffic pattern. You want to automatically scale up
or down the number of Spanner nodes depending on traffic. What should you do?

A. Create a cron job that runs on a scheduled basis to review Stackdriver monitoring metrics, and then resize the Spanner instance accordingly.
B. Create a Stackdriver alerting policy to send an alert to oncall SRE emails when Cloud Spanner CPU exceeds the threshold. SREs would scale resources up or down accordingly.
C. Create a Stackdriver alerting policy to send an alert to Google Cloud Support email when Cloud Spanner CPU exceeds your threshold. Google support would scale resources up or down accordingly.
D. Create a Stackdriver alerting policy to send an alert to a webhook when Cloud Spanner CPU is over or under your threshold. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.

Answer: D

Explanation:
CPU utilization is a recommended proxy for traffic when it comes to Cloud Spanner. From the documentation on alerts for high CPU utilization: Google publishes recommended maximums for CPU usage for both single-region and multi-region instances; these numbers are to ensure that your instance has enough compute capacity to continue to serve your traffic in the event of the loss of an entire zone (for single-region instances) or an entire region (for multi-region instances). Ref: https://cloud.google.com/spanner/docs/cpu-utilization

NEW QUESTION 150


You need to deploy an application in Google Cloud using serverless technology. You want to test a new version of the application with a small percentage of production traffic. What should you do?

A. Deploy the application to Cloud Run. Use gradual rollouts for traffic splitting.
B. Deploy the application to Google Kubernetes Engine. Use Anthos Service Mesh for traffic splitting.
C. Deploy the application to Cloud Functions. Specify the version number in the function's name.
D. Deploy the application to App Engine. For each new version, create a new service.

Answer: A
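
With Cloud Run, a gradual rollout could be sketched like this; the service, image, region, and tag names are placeholders:

# Deploy the new revision without routing any traffic to it.
gcloud run deploy my-app \
    --image=gcr.io/my-project/my-app:v2 \
    --region=us-central1 \
    --no-traffic --tag=candidate

# Shift 5% of production traffic to the tagged revision.
gcloud run services update-traffic my-app \
    --region=us-central1 \
    --to-tags=candidate=5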

NEW QUESTION 154


You have an application that uses Cloud Spanner as a database backend to keep current state information about users. Cloud Bigtable logs all events triggered by
users. You export Cloud Spanner data to Cloud Storage during daily backups. One of your analysts asks you to join data from Cloud Spanner and Cloud Bigtable
for specific users. You want to complete this ad hoc request as efficiently as possible. What should you do?

A. Create a dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.
B. Create a dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.
C. Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.
D. Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.

Answer: D

Explanation:
"The Cloud Spanner to Cloud Storage Text template is a batch pipeline that reads in data from a Cloud
Spanner table, optionally transforms the data via a JavaScript User Defined Function (UDF) that you provide, and writes it to Cloud Storage as CSV text files."
https://cloud.google.com/dataflow/docs/guides/templates/provided-batch#cloudspannertogcstext
"The Dataflow connector for Cloud Spanner lets you read data from and write data to Cloud Spanner in a Dataflow pipeline"
https://cloud.google.com/spanner/docs/dataflow-connector https://cloud.google.com/bigquery/external-data-sources
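
Once both external tables exist, the ad hoc join is plain SQL; the dataset, table, and column names below are hypothetical:

bq query --use_legacy_sql=false '
SELECT s.user_id, s.current_state, b.event_type, b.event_time
FROM `my-project.analytics.spanner_export` AS s
JOIN `my-project.analytics.bigtable_events` AS b
  ON s.user_id = b.user_id
WHERE s.user_id IN ("user-123", "user-456")'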

NEW QUESTION 158


You are migrating a production-critical on-premises application that requires 96 vCPUs to perform its task. You want to make sure the application runs in a similar
environment on GCP. What should you do?

A. When creating the VM, use machine type n1-standard-96.


B. When creating the VM, use Intel Skylake as the CPU platform.
C. Create the VM using Compute Engine default settings. Use gcloud to modify the running instance to have 96 vCPUs.
D. Start the VM using Compute Engine default settings, and adjust as you go based on Rightsizing Recommendations.

Answer: A

Explanation:
Ref: https://cloud.google.com/compute/docs/machine-types#n1_machine_type
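
A sketch of the corresponding command; the instance name and zone are placeholders:

gcloud compute instances create prod-app-vm \
    --machine-type=n1-standard-96 \
    --zone=us-central1-a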

NEW QUESTION 162


You need a dynamic way of provisioning VMs on Compute Engine. The exact specifications will be in a dedicated configuration file. You want to follow Google’s
recommended practices. Which method should you use?

A. Deployment Manager
B. Cloud Composer
C. Managed Instance Group
D. Unmanaged Instance Group

Answer: A

Explanation:
https://cloud.google.com/deployment-manager/docs/configuration/create-basic-configuration
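
A minimal sketch following the Deployment Manager quickstart conventions; the project, zone, and resource names are placeholders:

# vm.yaml -- dedicated configuration file describing the VM
resources:
- name: my-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-f
    machineType: https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-f/machineTypes/f1-micro
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/my-project/global/networks/default

# Provision everything described in the file:
gcloud deployment-manager deployments create my-deployment --config vm.yaml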

NEW QUESTION 167


You have a large 5-TB AVRO file stored in a Cloud Storage bucket. Your analysts are proficient only in SQL and need access to the data stored in this file. You
want to find a cost-effective way to complete their request as soon as possible. What should you do?

A. Load data in Cloud Datastore and run a SQL query against it.
B. Create a BigQuery table and load data in BigQuery. Run a SQL query on this table and drop this table after you complete your request.
C. Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on these external tables to complete your request.
D. Create a Hadoop cluster and copy the AVRO file to HDFS by compressing it. Load the file in a Hive table and provide access to your analysts so that they can run SQL queries.

Answer: C

Explanation:
https://cloud.google.com/bigquery/external-data-sources
An external data source is a data source that you can query directly from BigQuery, even though the data is not stored in BigQuery storage.
BigQuery supports the following external data sources: Amazon S3
Azure Storage Cloud Bigtable Cloud Spanner Cloud SQL Cloud Storage
Drive
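
A sketch of creating and querying the external table; the bucket, dataset, and table names are placeholders, and since AVRO is self-describing no schema needs to be supplied:

bq mkdef --source_format=AVRO "gs://my-bucket/data.avro" > table_def.json
bq mk --external_table_definition=table_def.json analytics.avro_external
bq query --use_legacy_sql=false \
    'SELECT * FROM analytics.avro_external LIMIT 10'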

NEW QUESTION 170


You have an instance group that you want to load balance. You want the load balancer to terminate the client SSL session. The instance group is used to serve a
public web application over HTTPS. You want to follow Google-recommended practices. What should you do?

A. Configure an HTTP(S) load balancer.


B. Configure an internal TCP load balancer.
C. Configure an external SSL proxy load balancer.
D. Configure an external TCP proxy load balancer.

Answer: A

NEW QUESTION 172


You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning
(ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

A. Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.
B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
D. Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.

Answer: D

Explanation:
This is the most optimal solution. Rather than recreating all nodes, you create a new node pool with GPUs enabled. You then target particular GPU types by adding a node selector to your workload's Pod specification. You still have a single cluster, so you pay the Kubernetes cluster management fee for just one cluster, thus minimizing cost. Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-pod
spec:
  containers:
  - name: my-gpu-container
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    command: ["/bin/bash"]
    resources:
      limits:
        nvidia.com/gpu: 2
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-k80 # or nvidia-tesla-p100 or nvidia-tesla-p4 or nvidia-tesla-v100 or nvidia-tesla-t4
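
The node pool itself could be created with a command along these lines; the cluster name, zone, and sizes are placeholders:

gcloud container node-pools create gpu-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --accelerator=type=nvidia-tesla-p100,count=1 \
    --num-nodes=1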

NEW QUESTION 177


You create a Deployment with 2 replicas in a Google Kubernetes Engine cluster that has a single preemptible node pool. After a few minutes, you use kubectl to
examine the status of your Pod and observe that one of them is still in Pending status:

What is the most likely cause?

A. The pending Pod's resource requests are too large to fit on a single node of the cluster.
B. Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.
C. The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.
D. The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods’ status. It is currently being rescheduled on a new node.

Answer: B

Explanation:
Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod. is the right answer.
When you have a deployment with some pods in the running state and other pods pending, more often than not it is a problem with resources on the nodes. In a typical kubectl describe output for this case, the scheduler reports insufficient CPU on the Kubernetes nodes, so we have to either enable autoscaling or manually scale up the nodes.
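
To diagnose this yourself, the usual commands look like this; the pod name is a placeholder, and the event text shown is typical of this failure rather than an exact quote:

kubectl describe pod my-pod
#   Warning  FailedScheduling  ...  0/1 nodes are available: 1 Insufficient cpu.
kubectl top nodes      # requires metrics-server; shows current node utilization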

NEW QUESTION 180


You have an application that receives SSL-encrypted TCP traffic on port 443. Clients for this application are located all over the world. You want to minimize
latency for the clients. Which load balancing option should you use?

A. HTTPS Load Balancer


B. Network Load Balancer
C. SSL Proxy Load Balancer
D. Internal TCP/UDP Load Balancer. Add a firewall rule allowing ingress traffic from 0.0.0.0/0 on the target instances.

Answer: C

NEW QUESTION 182


You want to select and configure a cost-effective solution for relational data on Google Cloud Platform. You are working with a small set of operational data in one
geographic location. You need to support point-in-time recovery. What should you do?

A. Select Cloud SQL (MySQL). Verify that the enable binary logging option is selected.
B. Select Cloud SQL (MySQL). Select the create failover replicas option.
C. Select Cloud Spanner. Set up your instance with 2 nodes.
D. Select Cloud Spanner. Set up your instance as multi-regional.

Answer: A
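
As a sketch, binary logging (which enables point-in-time recovery for Cloud SQL for MySQL) can be enabled on an existing instance like this; the instance name is a placeholder, and the operation restarts the instance:

gcloud sql instances patch my-instance --enable-bin-log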

NEW QUESTION 183


You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE cluster. For each of your customers, a Pod is running in that
cluster, and your customers can run arbitrary code inside their Pod. You want to maximize the isolation between your customers’ Pods. What should you do?

A. Use Binary Authorization and whitelist only the container images used by your customers’ Pods.
B. Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.
C. Create a GKE node pool with a sandbox type configured to gvisor. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.
D. Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.

Answer: C
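
A sketch of creating the sandboxed node pool; the cluster name and zone are placeholders, and GKE Sandbox requires the cos_containerd image type:

gcloud container node-pools create sandbox-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --image-type=cos_containerd \
    --sandbox type=gvisor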

NEW QUESTION 186


The storage costs for your application logs have far exceeded the project budget. The logs are currently being retained indefinitely in the Cloud Storage bucket
myapp-gcp-ace-logs. You have been asked to remove logs older than 90 days from your Cloud Storage bucket. You want to optimize ongoing Cloud Storage
spend. What should you do?

A. Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Schedule the script with cron.
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-json-file.
C. Write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-xml-file.
D. Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Repeat this process every morning.

Answer: B

Explanation:
You write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-xml-file. is not right.
gsutil lifecycle set enables you to set the lifecycle configuration on one or more buckets based on the configuration file provided. However, XML is not a valid
supported type for the configuration file.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Repeat this process every morning. is not right.
This manual approach is error-prone, time-consuming and expensive. GCP Cloud Storage provides lifecycle management rules that let you achieve this with
minimal effort.
Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Schedule the script with cron. is not right.
This manual approach is error-prone, time-consuming and expensive. GCP Cloud Storage provides lifecycle management rules that let you achieve this with
minimal effort.
Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-json-file. is the right answer.
You can assign a lifecycle management configuration to a bucket. The configuration contains a set of rules which apply to current and future objects in the bucket.
When an object meets the criteria of one of the rules, Cloud Storage automatically performs a specified action on the object. One of the supported actions is to
Delete objects. You can set up a lifecycle management to delete objects older than 90 days. gsutil lifecycle set enables you to set the lifecycle configuration on the
bucket based on the configuration file. JSON is the only supported type for the configuration file. The config-json-file specified on the command line should be a
path to a local file containing the lifecycle configuration JSON document.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle Ref: https://cloud.google.com/storage/docs/lifecycle
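
A minimal sketch of the configuration and command for this scenario:

# lifecycle.json -- delete objects older than 90 days
{
  "lifecycle": {
    "rule": [
      {"action": {"type": "Delete"}, "condition": {"age": 90}}
    ]
  }
}

gsutil lifecycle set lifecycle.json gs://myapp-gcp-ace-logs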

NEW QUESTION 189


You created several resources in multiple Google Cloud projects. All projects are linked to different billing accounts. To better estimate future charges, you want to
have a single visual representation of all costs incurred. You want to include new cost data as soon as possible. What should you do?

A. Configure Billing Data Export to BigQuery and visualize the data in Data Studio.
B. Visit the Cost Table page to get a CSV export and visualize it using Data Studio.
C. Fill all resources in the Pricing Calculator to get an estimate of the monthly cost.
D. Use the Reports view in the Cloud Billing Console to view the desired cost information.

Answer: A

Explanation:
https://cloud.google.com/billing/docs/how-to/export-data-bigquery "Cloud Billing export to BigQuery enables you to export detailed Google Cloud billing data (such
as usage, cost estimates, and pricing data) automatically throughout the day to a BigQuery dataset that you specify."

NEW QUESTION 193


You have just created a new project which will be used to deploy a globally distributed application. You will use Cloud Spanner for data storage. You want to create
a Cloud Spanner instance. You want to perform the first step in preparation of creating the instance. What should you do?

A. Grant yourself the IAM role of Cloud Spanner Admin


B. Create a new VPC network with subnetworks in all desired regions
C. Configure your Cloud Spanner instance to be multi-regional
D. Enable the Cloud Spanner API

Answer: A

Explanation:
https://cloud.google.com/spanner/docs/getting-started/set-up

NEW QUESTION 194


Your organization uses G Suite for communication and collaboration. All users in your organization have a G Suite account. You want to grant some G Suite users
access to your Cloud Platform project. What should you do?

A. Enable Cloud Identity in the GCP Console for your domain.


B. Grant them the required IAM roles using their G Suite email address.
C. Create a CSV sheet with all users’ email addresses. Use the gcloud command line tool to convert them into Google Cloud Platform accounts.
D. In the G Suite console, add the users to a special group called cloud-console-users@yourdomain.com. Rely on the default behavior of the Cloud Platform to grant users access if they are members of this group.

Answer: B

NEW QUESTION 199


You have deployed an application on a single Compute Engine instance. The application writes logs to disk. Users start reporting errors with the application. You
want to diagnose the problem. What should you do?

A. Navigate to Cloud Logging and view the application logs.


B. Connect to the instance’s serial console and read the application logs.
C. Configure a Health Check on the instance and set a Low Healthy Threshold value.
D. Install and configure the Cloud Logging Agent and view the logs from Cloud Logging.

Answer: D
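
At the time of writing, the documented installation flow for the legacy Cloud Logging agent (google-fluentd) looks like this; the newer Ops Agent is installed with a similar script:

curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
sudo service google-fluentd start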

NEW QUESTION 203


You want to select and configure a solution for storing and archiving data on Google Cloud Platform. You need to support compliance objectives for data from one
geographic location. This data is archived after 30 days and needs to be accessed annually. What should you do?

A. Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.
B. Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
C. Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
D. Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.

Answer: D

Explanation:
Coldline Storage is Google Cloud's cold-tier storage class for archival data. Unlike some competing cold storage options, it has no delays prior to data access. From the documentation on the Coldline storage class:
Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard
Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable
trade-offs for lowered at-rest storage costs.
Coldline Storage is ideal for data you plan to read or modify at most once a quarter. Note, however, that for data being kept entirely for backup or archiving
purposes, Archive Storage is more cost-effective, as it offers the lowest storage costs.
https://cloud.google.com/storage/docs/storage-classes#coldline

NEW QUESTION 208


You have a number of applications that have bursty workloads and are heavily dependent on topics to decouple publishing systems from consuming systems.
Your company would like to go serverless to enable developers to focus on writing code without worrying about infrastructure. Your solution architect has already
identified Cloud Pub/Sub as a suitable alternative for decoupling systems. You have been asked to identify a suitable GCP Serverless service that is easy to use
with Cloud Pub/Sub. You want the ability to scale down to zero when there is no traffic in order to minimize costs. You want to follow Google recommended
practices. What should you suggest?

A. Cloud Run for Anthos


B. Cloud Run
C. App Engine Standard
D. Cloud Functions.

Answer: D

Explanation:
Cloud Functions is Google Cloud’s event-driven serverless compute platform that lets you run your code locally or in the cloud without having to provision servers. Cloud Functions scales up or down, so you pay only for the compute resources you use. It has excellent integration with Cloud Pub/Sub, lets you scale down to zero, and is recommended by Google as the ideal serverless platform to use when dependent on Cloud Pub/Sub: "If you’re building a simple API (a small set of functions to be accessed via HTTP or Cloud Pub/Sub), we recommend using Cloud Functions." Ref: https://cloud.google.com/serverless-options
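
A sketch of wiring a function to a topic; the function name, runtime, topic, and entry point are placeholders:

gcloud functions deploy process-events \
    --runtime=python310 \
    --trigger-topic=my-topic \
    --entry-point=handle_message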

NEW QUESTION 210


You are creating an application that will run on Google Kubernetes Engine. You have identified MongoDB as the most suitable database system for your
application and want to deploy a managed MongoDB environment that provides a support SLA. What should you do?

A. Create a Cloud Bigtable cluster and use the HBase API


B. Deploy MongoDB Atlas from the Google Cloud Marketplace
C. Download a MongoDB installation package and run it on Compute Engine instances
D. Download a MongoDB installation package, and run it on a Managed Instance Group

Answer: B

Explanation:
https://console.cloud.google.com/marketplace/details/gc-launcher-for-mongodb-atlas/mongodb-atlas

NEW QUESTION 211


Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has
hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to
consolidate all costs as of tomorrow. What should you do?

A. Link the acquired company’s projects to your company's billing account.


B. Configure the acquired company's billing account and your company's billing account to export the billing data into the same BigQuery dataset.
C. Migrate the acquired company’s projects into your company’s GCP organization. Link the migrated projects to your company's billing account.
D. Create a new GCP organization and a new billing account. Migrate the acquired company's projects and your company's projects into the new GCP organization and link the projects to the new billing account.

Answer: A

Explanation:
https://cloud.google.com/resource-manager/docs/project-migration#oauth_consent_screen https://cloud.google.com/resource-manager/docs/project-migration
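
For each acquired project, the link can be changed with a command along these lines; the project and billing account IDs are placeholders:

gcloud billing projects link acquired-project-id \
    --billing-account=0X0X0X-0X0X0X-0X0X0X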

NEW QUESTION 215


You are working for a startup that was officially registered as a business 6 months ago. As your customer base grows, your use of Google Cloud increases. You
want to allow all engineers to create new projects without asking them for their credit card information. What should you do?

A. Create a Billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
B. Grant all engineers permission to create their own billing accounts for each new project.
C. Apply for monthly invoiced billing, and have a single invoice for the projects paid by the finance team.
D. Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.

Answer: A

NEW QUESTION 219


You need to manage multiple Google Cloud Platform (GCP) projects in the fewest steps possible. You want to configure the Google Cloud SDK command line
interface (CLI) so that you can easily manage multiple GCP projects. What should you do?

A. * 1. Create a configuration for each project you need to manage.* 2. Activate the appropriate configuration when you work with each of your assigned GCP
projects.
B. * 1. Create a configuration for each project you need to manage.* 2. Use gcloud init to update the configuration values when you need to work with a non-default
project
C. * 1. Use the default configuration for one project you need to manage.* 2. Activate the appropriate configuration when you work with each of your assigned GCP
projects.
D. * 1. Use the default configuration for one project you need to manage.* 2. Use gcloud init to update the configuration values when you need to work with a non-
default project.

Answer: A

Explanation:
https://cloud.google.com/sdk/gcloud https://cloud.google.com/sdk/docs/configurations#multiple_configurations

NEW QUESTION 221


You are running a data warehouse on BigQuery. A partner company is offering a recommendation engine based on the data in your data warehouse. The partner
company is also running their application on Google Cloud. They manage the resources in their own project, but they need access to the BigQuery dataset in your
project. You want to provide the partner company with access to the dataset. What should you do?

A. Create a Service Account in your own project, and grant this Service Account access to BigQuery in your project
B. Create a Service Account in your own project, and ask the partner to grant this Service Account access to BigQuery in their project
C. Ask the partner to create a Service Account in their project, and have them give the Service Account access to BigQuery in their project
D. Ask the partner to create a Service Account in their project, and grant their Service Account access to the BigQuery dataset in your project

Answer: D

Explanation:
https://gtseres.medium.com/using-service-accounts-across-projects-in-gcp-cf9473fef8f0#:~:text=Go%20to%20t

NEW QUESTION 223

You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of
the VM should run per GCP project. How should you configure the instance group?

A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.

Answer: A

Explanation:
https://cloud.google.com/compute/docs/autoscaler#specifications
Autoscaling works independently from autohealing. If you configure autohealing for your group and an instance fails the health check, the autohealer attempts to
recreate the instance. Recreating an instance can cause the number of instances in the group to fall below the autoscaling threshold (minNumReplicas) that you
specify.
Since we need the application running at all times, we need a minimum 1 instance.
Only a single instance of the VM should run, we need a maximum 1 instance.
We want the application running at all times. If the VM crashes due to any underlying hardware failure, we want another instance to be added to MIG so that
application can continue to serve requests. We can achieve this by enabling autoscaling. The only option that satisfies these three is Set autoscaling to On, set the
minimum number of instances to 1, and then set the maximum number of instances to 1.
Ref: https://cloud.google.com/compute/docs/autoscaler
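
A sketch of the corresponding configuration; the group name, zone, and utilization target are placeholders:

gcloud compute instance-groups managed set-autoscaling my-mig \
    --zone=us-central1-a \
    --min-num-replicas=1 \
    --max-num-replicas=1 \
    --target-cpu-utilization=0.8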

NEW QUESTION 225


Your continuous integration and delivery (CI/CD) server can't execute Google Cloud actions in a specific project because of permission issues. You need to
validate whether the used service account has the appropriate roles in the specific project. What should you do?

A. Open the Google Cloud console, and run a query to determine which resources this service account can access.
B. Open the Google Cloud console, and run a query of the audit logs to find permission denied errors for this service account.
C. Open the Google Cloud console, and check the organization policies.
D. Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from
the folder or organization levels.

Answer: D

Explanation:
This answer is the most effective way to validate whether the service account used by the CI/CD server has the appropriate roles in the specific project. By
checking the IAM roles assigned to the service account, you can see which permissions the service account has and which resources it can access. You can also
check if the service account inherits any roles from the folder or organization levels, which may affect its access to the project. You can use the Google Cloud
console, the gcloud command-line tool, or the IAM API to view the IAM roles of a service account.
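
One documented way to list the roles from the command line; the project ID and service account email are placeholders:

gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:ci-cd@my-project.iam.gserviceaccount.com" \
    --format="table(bindings.role)"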

NEW QUESTION 229


You recently deployed a new version of an application to App Engine and then discovered a bug in the release. You need to immediately revert to the prior version
of the application. What should you do?

A. Run gcloud app restore.


B. On the App Engine page of the GCP Console, select the application that needs to be reverted and click Revert.
C. On the App Engine Versions page of the GCP Console, route 100% of the traffic to the previous version.
D. Deploy the original version as a separate application. Then go to App Engine settings and split traffic between applications so that the original version serves 100% of the requests.

Answer: C
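
The same rollback can be done from the command line; the service name and version ID below are placeholders:

gcloud app services set-traffic default --splits=PREVIOUS_VERSION=1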

NEW QUESTION 231


You need to immediately change the storage class of an existing Google Cloud bucket. You need to reduce service cost for infrequently accessed files stored in
that bucket and for all files that will be added to that bucket in the future. What should you do?

A. Use gsutil to rewrite the storage class for the bucket. Change the default storage class for the bucket.
B. Use gsutil to rewrite the storage class for the bucket. Set up Object Lifecycle Management on the bucket.
C. Create a new bucket and change the default storage class for the bucket. Set up Object Lifecycle Management on the bucket.
D. Create a new bucket and change the default storage class for the bucket. Import the files from the previous bucket into the new bucket.

Answer: B

NEW QUESTION 233


You are the team lead of a group of 10 developers. You provided each developer with an individual Google Cloud Project that they can use as their personal
sandbox to experiment with different Google Cloud solutions. You want to be notified if any of the developers are spending above $500 per month on their sandbox
environment. What should you do?

A. Create a single budget for all projects and configure budget alerts on this budget.
B. Create a separate billing account per sandbox project and enable BigQuery billing export. Create a Data Studio dashboard to plot the spending per billing account.
C. Create a budget per project and configure budget alerts on all of these budgets.
D. Create a single billing account for all sandbox projects and enable BigQuery billing export. Create a Data Studio dashboard to plot the spending per project.

Answer: C

Explanation:

Avoid surprises on your bill by creating Cloud Billing budgets to monitor all of your Google Cloud charges in one place. A budget enables you to track your actual Google Cloud spend against your planned spend. After you've set a budget amount, you set budget alert threshold rules that are used to trigger email notifications; budget alert emails help you stay informed about how your spend is tracking against your budget.
When setting the budget scope, you select one or more projects that you want the budget alert to apply to, so here you create one budget per sandbox project with a $500 threshold.
Ref: https://cloud.google.com/billing/docs/how-to/budgets#budget-scope
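
A sketch of creating one such budget; the billing account ID and project ID are placeholders, and depending on your SDK version the command may require the beta track:

gcloud billing budgets create \
    --billing-account=0X0X0X-0X0X0X-0X0X0X \
    --display-name="dev1-sandbox-budget" \
    --budget-amount=500USD \
    --filter-projects=projects/dev1-sandbox \
    --threshold-rule=percent=1.0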

NEW QUESTION 234


Your management has asked an external auditor to review all the resources in a specific project. The security team has enabled the Organization Policy called
Domain Restricted Sharing on the organization node by specifying only your Cloud Identity domain. You want the auditor to only be able to view, but not modify,
the resources in that project. What should you do?

A. Ask the auditor for their Google account, and give them the Viewer role on the project.
B. Ask the auditor for their Google account, and give them the Security Reviewer role on the project.
C. Create a temporary account for the auditor in Cloud Identity, and give that account the Viewer role on the project.
D. Create a temporary account for the auditor in Cloud Identity, and give that account the Security Reviewer role on the project.

Answer: C

Explanation:
Avoid using primitive roles except when absolutely necessary. These roles are very powerful and include a large number of permissions across all Google Cloud services. IAM predefined roles are much more granular and allow you to carefully manage the set of permissions that your users have access to. See Understanding Roles for a list of roles that can be granted at the project level. Creating custom roles can further increase the control you have over user permissions. Because the auditor is outside your Cloud Identity domain and Domain Restricted Sharing is enforced, you create a temporary Cloud Identity account for them and grant that account the Viewer role.
Ref: https://cloud.google.com/resource-manager/docs/access-control-proj#using_primitive_roles
Ref: https://cloud.google.com/iam/docs/understanding-custom-roles

NEW QUESTION 238


You are migrating a business critical application from your local data center into Google Cloud. As part of your high-availability strategy, you want to ensure that
any data used by the application will be immediately available if a zonal failure occurs. What should you do?

A. Store the application data on a zonal persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
B. Store the application data on a zonal persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
C. Store the application data on a regional persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
D. Store the application data on a regional persistent disk. If an outage occurs, create an instance in another zone with this disk attached.

Answer: D

Explanation:
A regional persistent disk synchronously replicates data between two zones in the same region, so if a zonal outage occurs the disk can be attached to a VM in the other zone and the data is immediately available. Snapshot-based recovery loses any writes made after the most recent snapshot and is not immediate.

NEW QUESTION 240


You want to set up a Google Kubernetes Engine cluster. Verifiable node identity and integrity are required for the cluster, and nodes cannot be accessed from the internet. You want to reduce the operational cost of managing your cluster, and you want to follow Google-recommended practices. What should you do?

A. Deploy a private autopilot cluster


B. Deploy a public autopilot cluster.
C. Deploy a standard public cluster and enable shielded nodes.
D. Deploy a standard private cluster and enable shielded nodes.

Answer: A

Explanation:
Autopilot clusters run Shielded GKE Nodes, which provide verifiable node identity and integrity, and Google manages the nodes for you, which reduces operational cost. Creating the Autopilot cluster as a private cluster keeps the nodes unreachable from the internet.

NEW QUESTION 242


Your company set up a complex organizational structure on Google Could Platform. The structure includes hundreds of folders and projects. Only a few team
members should be able to view the hierarchical structure. You need to assign minimum permissions to these team members and you want to follow Google-
recommended practices. What should you do?

A. Add the users to roles/browser role.


B. Add the users to roles/iam.roleViewer role.
C. Add the users to a group, and add this group to roles/browser role.
D. Add the users to a group, and add this group to roles/iam.roleViewer role.

Answer: C

Explanation:
We need to apply the GCP Best practices. roles/browser Browser Read access to browse the hierarchy for a project, including the folder, organization, and IAM
policy. This role doesn't include permission to view resources in the project. https://cloud.google.com/iam/docs/understanding-roles

NEW QUESTION 246


Your organization has three existing Google Cloud projects. You need to bill the Marketing department for only their Google Cloud services for a new initiative
within their group. What should you do?

A. * 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud Project for the Marketing department.* 2. Link the new project to a Marketing Billing Account.
B. * 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud account.* 2. Create a new Google Cloud Project for the

Marketing department.* 3. Set the default key-value project labels to department:marketing for all services in this project.
C. * 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account.* 2. Create a new Google Cloud Project for the Marketing department.* 3. Link the new project to a Marketing Billing Account.
D. * 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account.* 2. Create a new Google Cloud Project for the Marketing department.* 3. Set the default key-value project labels to department:marketing for all services in this project.

Answer: A

NEW QUESTION 250


You are running an application on multiple virtual machines within a managed instance group and have autoscaling enabled. The autoscaling policy is configured
so that additional instances are added to the group if the CPU utilization of instances goes above 80%. VMs are added until the instance group reaches its
maximum limit of five VMs or until CPU utilization of instances lowers to 80%. The initial delay for HTTP health checks against the instances is set to 30 seconds.
The virtual machine instances take around three minutes to become available for users. You observe that when the instance group autoscales, it adds more instances than necessary to support the levels of end-user traffic. You want to properly maintain instance group sizes when autoscaling. What should you do?

A. Set the maximum number of instances to 1.


B. Decrease the maximum number of instances to 3.
C. Use a TCP health check instead of an HTTP health check.
D. Increase the initial delay of the HTTP health check to 200 seconds.

Answer: D

Explanation:
The reason is that when you do a health check, you want the VM to be working. Doing the first check after the initial setup time of 3 minutes = 180 s < 200 s is reasonable.
The autoscaler adds more instances than needed because the health check runs 30 seconds after an instance is launched, at which point the instance isn't up and isn't ready to serve traffic. The autoscaling policy therefore starts another instance, checks again after 30 seconds, and the cycle repeats until it reaches the maximum number of instances or the instances launched earlier become healthy and start processing traffic, which happens after 180 seconds (3 minutes). This is easily rectified by setting the initial delay higher than the time it takes for an instance to become available. Setting it to 200 seconds ensures the check waits until the instance is up (around the 180-second mark) before traffic is forwarded to it. Even after a cooldown period, if CPU utilization is still high, the autoscaler can scale up again, but that scale-up is genuine and based on the actual load.
Initial Delay Seconds: this setting delays autohealing from potentially prematurely recreating the instance if the instance is in the process of starting up. The initial delay timer starts when the currentAction of the instance is VERIFYING. Ref: https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs
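
A sketch of raising the delay on an existing group; the group name, health check, and zone are placeholders:

gcloud compute instance-groups managed update my-mig \
    --zone=us-central1-a \
    --health-check=my-http-check \
    --initial-delay=200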

NEW QUESTION 252


All development (dev) teams in your organization are located in the United States. Each dev team has its own Google Cloud project. You want to restrict access so
that each dev team can only create cloud resources in the United States (US). What should you do?

A. Create a folder to contain all the dev projects. Create an organization policy to limit resources to US locations.
B. Create an organization to contain all the dev projects. Create an Identity and Access Management (IAM) policy to limit the resources in US regions.
C. Create an Identity and Access Management (IAM) policy to restrict the resource locations to the US. Apply the policy to all dev projects.
D. Create an Identity and Access Management (IAM) policy to restrict the resource locations in all dev projects. Apply the policy to all dev roles.

Answer: A

Explanation:
Resource locations are restricted with the gcp.resourceLocations organization policy constraint, not with IAM policies. Placing all dev projects under one folder lets you set the constraint once at the folder level.
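
A sketch of setting the constraint at the folder level; the folder ID is a placeholder, and in:us-locations is a predefined value group:

gcloud resource-manager org-policies allow \
    constraints/gcp.resourceLocations in:us-locations \
    --folder=123456789012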

NEW QUESTION 257


......

THANKS FOR TRYING THE DEMO OF OUR PRODUCT

Visit Our Site to Purchase the Full Set of Actual Associate-Cloud-Engineer Exam Questions With Answers.

We Also Provide Practice Exam Software That Simulates Real Exam Environment And Has Many Self-Assessment Features. Order the
Associate-Cloud-Engineer Product From:

https://www.2passeasy.com/dumps/Associate-Cloud-Engineer/

Money Back Guarantee

Associate-Cloud-Engineer Practice Exam Features:

* Associate-Cloud-Engineer Questions and Answers Updated Frequently

* Associate-Cloud-Engineer Practice Questions Verified by Expert Senior Certified Staff

* Associate-Cloud-Engineer Most Realistic Questions that Guarantee you a Pass on Your First Try

* Associate-Cloud-Engineer Practice Test Questions in Multiple Choice Formats and Updates for 1 Year
