Associate Cloud Engineer
Google Cloud Engineer Associate
QUESTIONS & ANSWERS
https://www.dumpschief.com/Associate-Cloud-Engineer-pdf-download.html
QUESTION: 1
You are building a pipeline to process time-series data. Which Google Cloud Platform services should you put
in boxes 1, 2, 3, and 4?
Correct Answer: D
Explanation/Reference:
analytics/handling-duplicate-data-in-streaming-pipeline-usingpubsub-dataflow
https://cloud.google.com/bigtable/docs/schema-design-time-series
QUESTION: 2
You are about to deploy a new Enterprise Resource Planning (ERP) system on Google Cloud. The application
holds the full database in-memory for fast data access, and you need to configure the most appropriate
resources on Google Cloud for this application. What should you do?
Correct Answer: D
Explanation/Reference:
M1 machine series: suited for medium in-memory databases such as SAP HANA, and for tasks that require intensive use of memory with higher memory-to-vCPU ratios than the general-purpose high-memory machine types. Typical workloads include in-memory databases and in-memory analytics, business warehousing (BW) workloads, genomics analysis, SQL analysis services, and Microsoft SQL Server and similar databases.
https://cloud.google.com/compute/docs/machine-types
QUESTION: 3
Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need
a quick and easy way to deploy and install the solution. What should you do?
Option A : Search for the CMS solution in Google Cloud Marketplace. Use gcloud CLI to deploy the
solution.
Option B : Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from
Cloud Marketplace.
Option C : Search for the CMS solution in Google Cloud Marketplace. Use Terraform and the Cloud
Marketplace ID to deploy the solution with the appropriate parameters.
Option D : Use the installation guide of the CMS provider. Perform the installation through your
configuration management system.
Correct Answer: B
QUESTION: 4
Your preview application, deployed on a single-zone Google Kubernetes Engine (GKE) cluster in us-central1,
has gained popularity. You are now ready to make the application generally available. You need to deploy
the application to production while ensuring high availability and resilience. You also want to follow
Google-recommended practices. What should you do?
Option A :
Use the gcloud container clusters create command with the options --enable-multi-networking and
--enable-autoscaling to create an autoscaling zonal cluster and deploy the application to it.
Option B :
Use the gcloud container clusters create-auto command to create an autopilot cluster and deploy the
application to it.
Option C :
Use the gcloud container clusters update command with the option --region us-central1 to update the
cluster and deploy the application to it.
Option D : Use the gcloud container clusters update command with the option --node-locations
us-central1-a,us-central1-b to update the cluster and deploy the application to the nodes.
Correct Answer: B
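As a sketch of the recommended approach in Option B (the cluster name, region, and manifest file are illustrative placeholders):

```shell
# Create a regional Autopilot cluster: Google manages the nodes, scaling,
# and repairs, and the control plane and nodes span multiple zones.
gcloud container clusters create-auto my-app-cluster \
    --region=us-central1

# Point kubectl at the new cluster and deploy the application.
gcloud container clusters get-credentials my-app-cluster \
    --region=us-central1
kubectl apply -f app-deployment.yaml
```

Autopilot clusters are regional by default, which provides the high availability and resilience the question asks for.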
QUESTION: 5
You are using Container Registry to centrally store your company’s container images in a separate project. In
another project, you want to create a Google Kubernetes Engine (GKE) cluster. You want to ensure that
Kubernetes can download images from Container Registry. What should you do?
Option A :
In the project where the images are stored, grant the Storage Object Viewer IAM role to the service
account used by the Kubernetes nodes.
Option B :
When you create the GKE cluster, choose the Allow full access to all Cloud APIs option under ‘Access
scopes’.
Option C :
Create a service account, and give it access to Cloud Storage. Create a P12 key for this service account
and use it as an imagePullSecrets in Kubernetes.
Option D :
Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute
Engine service account.
Correct Answer: A
Explanation/Reference:
Option D ("Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account") is not right: Container Registry ignores permissions set on individual objects within the underlying storage bucket, so per-image ACLs have no effect. Access is controlled at the bucket level, which is why granting the Storage Object Viewer IAM role to the node service account in the project where the images are stored (Option A) is correct.
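A minimal sketch of Option A (the project IDs and node service account email are illustrative placeholders):

```shell
# In the project that hosts Container Registry, grant the GKE node
# service account read access to the registry's underlying storage.
gcloud projects add-iam-policy-binding registry-project \
    --member="serviceAccount:gke-nodes@cluster-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```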
QUESTION: 6
You have been asked to migrate a Docker application from your data center to the cloud. Your solution architect has
suggested uploading Docker images to GCR in one project and running the application in a GKE cluster in a
separate project. You want to store images in the project img-278322 and run the application in the project
prod-278986. You want to tag the image as acme_track_n_trace:v1. You want to follow Google-recommended
practices. What should you do?
Correct Answer: B
Explanation/Reference:
This command correctly tags the image as acme_track_n_trace:v1 and uploads the image to the img-278322 project.
Ref:https://cloud.google.com/sdk/gcloud/reference/builds/submit
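A sketch of the command the explanation refers to, assuming it is run from the directory containing the Dockerfile:

```shell
# Build with Cloud Build and push the tagged image to Container Registry
# in the img-278322 project.
gcloud builds submit . \
    --project=img-278322 \
    --tag=gcr.io/img-278322/acme_track_n_trace:v1
```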
QUESTION: 7
An employee was terminated, but their access to Google Cloud Platform (GCP) was not removed until 2
weeks later. You need to find out whether this employee accessed any sensitive customer information after their
termination. What should you do?
Option A : View System Event Logs in Stackdriver. Search for the user’s email as the principal.
Option B : View System Event Logs in Stackdriver. Search for the service account associated with the
user.
Option C : View Data Access audit logs in Stackdriver. Search for the user’s email as the principal.
Option D : View the Admin Activity log in Stackdriver. Search for the service account associated with the
user.
Correct Answer: C
Explanation/Reference:
https://cloud.google.com/logging/docs/audit Data Access audit logs Data Access audit logs contain API calls that read the
configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource
data. https://cloud.google.com/logging/docs/audit#data-access
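As a sketch, Data Access audit logs can be filtered by principal with gcloud (the email address, project, and timestamp are illustrative placeholders):

```shell
# List Data Access audit log entries for a specific user after a given date.
gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Fdata_access" AND
   protoPayload.authenticationInfo.principalEmail="former.employee@example.com" AND
   timestamp>="2024-06-01T00:00:00Z"' \
  --project=my-project \
  --limit=100
```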
QUESTION: 8
You are a Google Cloud organization administrator. You need to configure organization policies and log sinks
on Google Cloud projects that cannot be removed by project users to comply with your company's security
policies. The security policies are different for each company department. Each company department has a
user with the Project Owner role assigned to their projects. What should you do?
Option A :
Organize projects under folders for each department. Configure both organization policies and log sinks
on the folders
Option B :
Organize projects under folders for each department. Configure organization policies on the organization
and log sinks on the folders.
Option C :
Use a standard naming convention for projects that includes the department name. Configure
organization policies on the organization and log sinks on the projects.
Option D :
Use a standard naming convention for projects that includes the department name. Configure both
organization policies and log sinks on the projects.
Correct Answer: A
QUESTION: 9
You deployed an LDAP server on Compute Engine that is reachable via TLS through port 636 using UDP. You
want to make sure it is reachable by clients over that port. What should you do?
Option A : Add the network tag allow-udp-636 to the VM instance running the LDAP server.
Option B :
Create a route called allow-udp-636 and set the next hop to be the VM instance running the LDAP server
Option C :
Add a network tag of your choice to the instance. Create a firewall rule to allow ingress on UDP port 636
for that network tag
Option D :
Add a network tag of your choice to the instance running the LDAP server. Create a firewall rule to allow
egress on UDP port 636 for that network tag.
Correct Answer: C
Explanation/Reference:
A tag is simply a character string added to a tags field in a resource, such as Compute Engine virtual machine (VM) instances
or instance templates. A tag is not a separate resource, so you cannot create it separately. All resources with that string are
considered to have that tag. Tags enable you to make firewall rules and routes applicable to specific VM instances.
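A sketch of Option C (the instance name, tag, and zone are illustrative placeholders):

```shell
# Tag the instance running the LDAP server.
gcloud compute instances add-tags ldap-server \
    --tags=ldap \
    --zone=us-central1-a

# Allow ingress on UDP port 636 for instances carrying that tag.
gcloud compute firewall-rules create allow-ldap-udp-636 \
    --network=default \
    --direction=INGRESS \
    --allow=udp:636 \
    --target-tags=ldap \
    --source-ranges=0.0.0.0/0
```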
QUESTION: 10
You have a project for your App Engine application that serves a development environment. The required
testing has succeeded and you want to create a new project to serve as your production environment. What
should you do?
Option A : Use gcloud to create the new project, and then deploy your application to the new project.
Option B : Use gcloud to create the new project and to copy the deployed application to the new project.
Option C :
Create a Deployment Manager configuration file that copies the current App Engine deployment into a
new project.
Option D :
Deploy your application again using gcloud and specify the project parameter with the new project name
to create the new project.
Correct Answer: A
Explanation/Reference:
You can deploy to a different project by using the --project flag. By default, the service is deployed to the current project configured via: $ gcloud config set core/project PROJECT. To override this value for a single deployment, use the --project flag: $ gcloud app deploy --project PROJECT
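A sketch of Option A (the project ID and region are illustrative placeholders):

```shell
# Create the production project, initialize App Engine in it, and deploy
# the tested application there.
gcloud projects create my-app-prod
gcloud app create --project=my-app-prod --region=us-central
gcloud app deploy app.yaml --project=my-app-prod
```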
QUESTION: 11
Correct Answer: D
Explanation/Reference:
gcloud deployment-manager deployments create creates deployments based on a configuration file (infrastructure as code).
All the configuration related to the artifacts is in the configuration file, and this command correctly creates a cluster based on that file.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/create
QUESTION: 12
Your organization is a financial company that needs to store audit log files for 3 years. Your organization has
hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention.
What should you do?
Option A : Create an export to the sink that saves logs from Cloud Audit to BigQuery.
Option B : Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.
Option C : Write a custom script that uses logging API to copy the logs from Stackdriver logs to BigQuery.
Option D : Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud
SQL.
Correct Answer: B
Explanation/Reference:
Coldline Storage is the perfect service to store audit logs from all the projects and is very cost-efficient as well. Coldline Storage is a very low-cost, highly durable storage service for storing infrequently accessed data.
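A sketch of Option B using an aggregated sink, so a single sink covers all of the organization's projects (the bucket name, organization ID, and filter are illustrative placeholders):

```shell
# Create a Coldline bucket for long-term, low-cost retention.
gsutil mb -c coldline -l us-central1 gs://example-audit-log-archive

# Create an organization-level aggregated sink that exports audit logs
# from all child projects to the bucket.
gcloud logging sinks create audit-archive-sink \
    storage.googleapis.com/example-audit-log-archive \
    --organization=123456789012 \
    --include-children \
    --log-filter='logName:"cloudaudit.googleapis.com"'
```

After creating the sink, grant its writer identity the Storage Object Creator role on the bucket so it can write log files.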
QUESTION: 13
Your managed instance group raised an alert stating that new instance creation has failed to create new
instances. You need to maintain the number of running instances specified by the template to be able to
process expected application traffic. What should you do?
Option A :
Create an instance template that contains valid syntax which will be used by the instance group. Delete
any persistent disks with the same name as instance names.
Option B :
Create an instance template that contains valid syntax that will be used by the instance group. Verify that
the instance name and persistent disk name values are not the same in the template
Option C :
Verify that the instance template being used by the instance group contains valid syntax. Delete any
persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the
instance template
Option D :
Delete the current instance template and replace it with a new instance template. Verify that the instance
name and persistent disk name values are not the same in the template. Set the disks.autoDelete
property to true in the instance template.
Correct Answer: A
Explanation/Reference:
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-migs
https://cloud.google.com/compute/docs/instance-templates#how_to_update_instance_templates
QUESTION: 14
You have an application on a general-purpose Compute Engine instance that is experiencing excessive disk
read throttling on its Zonal SSD Persistent Disk. The application primarily reads large files from disk. The disk
size is currently 350 GB. You want to provide the maximum amount of throughput while minimizing costs.
What should you do?
Correct Answer: C
Explanation/Reference:
Standard persistent disks are efficient and economical for handling sequential read/write operations, but they aren't optimized to handle high rates of random input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD persistent disks. SSD persistent disks are designed for single-digit millisecond latencies; observed latency is application specific. Reference: https://cloud.google.com/compute/docs/disks/performance

Local SSDs are physically attached to the server that hosts your VM instance. Local SSDs have higher throughput and lower latency than standard persistent disks or SSD persistent disks. The data that you store on a local SSD persists only until the instance is stopped or deleted. Each local SSD is 375 GB in size, but you can attach a maximum of 24 local SSD partitions for a total of 9 TB per instance. Local SSDs are designed to offer very high IOPS and low latency. Unlike persistent disks, you must manage the striping on local SSDs yourself: combine multiple local SSD partitions into a single logical volume to achieve the best local SSD performance per instance, or format local SSD partitions individually. Local SSD performance depends on which interface you select.
QUESTION: 15
You are assigned to maintain a Google Kubernetes Engine (GKE) cluster named dev that was deployed on
Google Cloud. You want to manage the GKE configuration using the command line interface (CLI). You have
just downloaded and installed the Cloud SDK. You want to ensure that future CLI commands by default
address this specific cluster. What should you do?
Correct Answer: A
Explanation/Reference:
To set a default cluster for gcloud commands, run the following command: gcloud config set container/cluster CLUSTER_NAME
https://cloud.google.com/kubernetes-engine/docs/how-to/managing-clusters?hl=en
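The default can be set as follows (the zone is an illustrative placeholder):

```shell
# Make 'dev' the default cluster for subsequent gcloud container commands.
gcloud config set container/cluster dev

# Set a default location and fetch credentials so kubectl also targets it.
gcloud config set compute/zone us-central1-a
gcloud container clusters get-credentials dev
```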
QUESTION: 16
You have a Compute Engine instance hosting an application used between 9 AM and 6 PM on weekdays. You
want to back up this instance daily for disaster recovery purposes. You want to keep the backups for 30
days. You want the Google-recommended solution with the least management overhead and the least
number of services. What should you do?
Option A :
1. Update your instance's metadata to add the following value: snapshot-schedule: 0 1 * * *
2. Update your instance's metadata to add the following value: snapshot-retention: 30
Option B :
1. In the Cloud Console, go to the Compute Engine Disks page and select your instance’s disk.
2. In the Snapshot Schedule section, select Create Schedule and configure the following parameters:
- Schedule frequency: Daily
- Start time: 1:00 AM - 2:00 AM
- Autodelete snapshots after 30 days
Option C :
1. Create a Cloud Function that creates a snapshot of your instance's disk.
2. Create a Cloud Function that deletes snapshots that are older than 30 days.
3. Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.
Option D :
1. Create a bash script in the instance that copies the content of the disk to Cloud Storage.
2. Create a bash script in the instance that deletes data older than 30 days in the backup Cloud Storage bucket.
3. Configure the instance's crontab to execute these scripts daily at 1:00 AM.
Correct Answer: B
Explanation/Reference:
Creating scheduled snapshots for persistent disk This document describes how to create a snapshot schedule to regularly and
automatically back up your zonal and regional persistent disks. Use snapshot schedules as a best practice to back up your
Compute Engine workloads. After creating a snapshot schedule, you can apply it to one or more persistent disks.
https://cloud.google.com/compute/docs/disks/scheduled-snapshots
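The console steps in Option B can also be sketched with gcloud (the schedule name, disk name, region, and zone are illustrative placeholders):

```shell
# Create a snapshot schedule: daily at 1:00 AM, keep snapshots for 30 days.
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 \
    --daily-schedule \
    --start-time=01:00 \
    --max-retention-days=30

# Attach the schedule to the instance's disk.
gcloud compute disks add-resource-policies my-instance-disk \
    --resource-policies=daily-backup \
    --zone=us-central1-a
```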
QUESTION: 17
Your existing application running in Google Kubernetes Engine (GKE) consists of multiple pods running on
four GKE n1-standard-2 nodes. You need to deploy additional pods requiring n2-highmem-16 nodes without
any downtime. What should you do?
Option A : Use gcloud container clusters upgrade. Deploy the new services.
Option B : Create a new node pool and specify machine type n2-highmem-16. Deploy the new pods.
Option C : Create a new cluster with n2-highmem-16 nodes. Redeploy the pods and delete the old
cluster.
Option D :
Create a new cluster with both n1-standard-2 and n2-highmem-16 nodes. Redeploy the pods and delete
the old cluster.
Correct Answer: B
Explanation/Reference:
https://cloud.google.com/kubernetes-engine/docs/concepts/deployment
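A sketch of Option B (the cluster name and zone are illustrative placeholders):

```shell
# Add a node pool with the larger machine type; the existing n1-standard-2
# pool and its pods keep running, so there is no downtime.
gcloud container node-pools create highmem-pool \
    --cluster=my-cluster \
    --machine-type=n2-highmem-16 \
    --num-nodes=2 \
    --zone=us-central1-a
```

The new pods can then target the pool with a nodeSelector on the cloud.google.com/gke-nodepool label.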
QUESTION: 18
Your company is using Google Workspace to manage employee accounts. Anticipated growth will increase
the number of personnel from 100 employees to 1,000 employees within 2 years. Most employees will need
access to your company's Google Cloud account. The systems and processes will need to support 10x growth
without performance degradation, unnecessary complexity, or security issues. What should you do?
Option A :
Migrate the users to Active Directory. Connect the Human Resources system to Active Directory. Turn on
Google Cloud Directory Sync (GCDS) for Cloud Identity. Turn on Identity Federation from Cloud Identity to
Active Directory.
Option B : Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud
Identity.
Option C :
Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor
authentication for domain wide delegation.
Option D :
Use a third-party identity provider service through federation. Synchronize the users from Google
Workspace to the third-party provider in real time.
Correct Answer: B
QUESTION: 19
Your company runs one batch process in an on-premises server that takes around 30 hours to complete. The
task runs monthly, can be performed offline, and must be restarted if interrupted. You want to migrate this
workload to the cloud while minimizing cost. What should you do?
Create an Instance Template with Preemptible VMs On. Create a Managed Instance Group from the
template and adjust Target CPU Utilization. Migrate the workload.
Correct Answer: D
Explanation/Reference:
Install the workload on a Compute Engine VM, and start and stop the instance as needed. As per the question, the task runs for 30 hours, can be performed offline, and must be restarted if interrupted. Preemptible VMs are cheaper, but they cannot run for more than 24 hours and can be interrupted at any time, which would force the 30-hour batch process to restart from the beginning.
QUESTION: 20
You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are
worried that your proxy credentials will be recorded in the gcloud CLI logs. You want to prevent your proxy
credentials from being logged. What should you do?
Option A :
Configure the username and password by using the gcloud config set proxy/username and gcloud config
set proxy/password commands.
Option B :
Encode username and password in sha256 encoding, and save it to a text file. Use filename as a value in
the gcloud configure set core/custom_ca_certs_file command.
Option C :
Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configure
file.
Option D :
Correct Answer: D
QUESTION: 21
You are building a multi-player gaming application that will store game information in a database. As the
popularity of the application increases, you are concerned about delivering consistent performance. You
need to ensure an optimal gaming performance for global users, without increasing the management
complexity. What should you do?
Option A : Use Cloud SQL database with cross-region replication to store game statistics in the EU, US,
and APAC regions.
Option B : Use Cloud Spanner to store user data mapped to the game statistics.
Option C : Use BigQuery to store game statistics with a Redis on Memorystore instance in the front to
provide global consistency.
Option D : Store game statistics in a Bigtable database partitioned by username.
Correct Answer: B
QUESTION: 22
You have an object in a Cloud Storage bucket that you want to share with an external company. The
object contains sensitive data. You want access to the content to be removed after four hours. The external
company does not have a Google account to which you can grant specific user-based access privileges. You
want to use the most secure method that requires the fewest steps. What should you do?
Option A : Create a signed URL with a four-hour expiration and share the URL with the company.
Option B : Set object access to ‘public’ and use object lifecycle management to remove the object after
four hours.
Option C :
Configure the storage bucket as a static website and furnish the object’s URL to the company. Delete the
object from the storage bucket after four hours.
Option D :
Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to
that bucket. Delete the bucket after four hours have passed.
Correct Answer: A
Explanation/Reference:
Signed URLs are used to give time-limited resource access to anyone in possession of the URL, regardless of whether they have a Google account.
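A sketch of Option A using gsutil and a service account key (the key path, bucket, and object names are illustrative placeholders):

```shell
# Generate a signed URL that expires after four hours.
gsutil signurl -d 4h /path/to/service-account-key.json \
    gs://sensitive-data-bucket/report.csv
```

Anyone holding the printed URL can download the object until the URL expires; no Google account is required.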
QUESTION: 23
You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery.
Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by
overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the
problem. What should you do?
Option A : Use the BigQuery interface to review the nightly Job and look for any errors
Option B : Review the Error Reporting page in the Cloud Console to find any errors.
Option C : In Cloud Logging create a filter for your Data Studio report
Option D : Use the open source CLI tool. Snapshot Debugger, to find out why the data was not refreshed
correctly.
Correct Answer: A
Explanation/Reference:
The table is overwritten by a nightly job, so the most direct way to diagnose the broken charts is to review that nightly job in the BigQuery interface and look for errors. Snapshot Debugger inspects the state of a running application's code and is not relevant to a failed BigQuery table refresh.
QUESTION: 24
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You
need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a
third-party monitoring solution. What should you do?
Correct Answer: B
Explanation/Reference:
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns
DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed. This makes a DaemonSet a perfect fit for deploying an ongoing background task that needs to run on all (or certain) nodes and does not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd. You could have one DaemonSet for each type of daemon run on all of your nodes, or run multiple DaemonSets for a single type of daemon with different configurations for different hardware types.
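A minimal sketch of the DaemonSet approach (the names and container image are illustrative placeholders):

```shell
# Deploy a monitoring agent so exactly one pod runs on every node,
# including nodes the cluster autoscaler adds later.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:1.0
EOF
```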
QUESTION: 25
You have successfully created a development environment in a project for an application. This application
uses Compute Engine and Cloud SQL. Now, you need to create a production environment for this
application. The security team has forbidden the existence of network routes between these 2 environments,
and asks you to follow Google-recommended practices. What should you do?
Option A :
Create a new project, enable the Compute Engine and Cloud SQL APIs in that project, and replicate the
setup you have created in the development environment.
Option B :
Create a new production subnet in the existing VPC and a new production Cloud SQL instance in your
existing project, and deploy your application using those resources
Option C :
Create a new project, modify your existing VPC to be a Shared VPC, share that VPC with your new project,
and replicate the setup you have in the development environment in that new project, in the Shared VPC.
Option D :
Ask the security team to grant you the Project Editor role in an existing production project used by
another division of your company. Once they grant you that role, replicate the setup you have in the
development environment in that project.
Correct Answer: A
Explanation/Reference:
This aligns with Google's recommended practices. By creating a new project, we achieve complete isolation between the development and production environments, and also isolate this production application from the production applications of other divisions.
QUESTION: 26
Your company is running a critical workload on a single Compute Engine VM instance. Your company's
disaster recovery policies require you to backup the entire instance's disk data every day. The backups must
be retained for 7 days. You must configure a backup solution that complies with your company's security
policies and requires minimal setup and configuration. What should you do?
Configure Cloud Scheduler to trigger a Cloud Function each day that creates a new machine image and
deletes machine images that are older than 7 days.
Option D :
Configure a bash script using gsutil to run daily through a cron job. Copy the disk's files to a Cloud
Storage bucket with archive storage class and an object lifecycle rule to delete the objects after 7 days.
Correct Answer: B
QUESTION: 27
Your company uses a large number of Google Cloud services centralized in a single project. All teams have
specific projects for testing and development. The DevOps team needs access to all of the production
services in order to perform their job. You want to prevent Google Cloud product changes from broadening
their permissions in the future. You want to follow Google-recommended practices. What should you do?
Option A : Grant all members of the DevOps team the role of Project Editor on the organization level.
Option B : Grant all members of the DevOps team the role of Project Editor on the production project.
Option C :
Create a custom role that combines the required permissions. Grant the DevOps team the custom role on
the production project.
Option D :
Create a custom role that combines the required permissions. Grant the DevOps team the custom role on
the organization level.
Correct Answer: C
Explanation/Reference:
Understanding IAM custom roles. Key point: custom roles enable you to enforce the principle of least privilege, ensuring that the user and service accounts in your organization have only the permissions essential to performing their intended functions. Custom roles are user-defined and allow you to bundle one or more supported permissions to meet your specific needs. Custom roles are not maintained by Google; when new permissions, features, or services are added to Google Cloud, your custom roles are not updated automatically, which prevents product changes from broadening the team's permissions. When you create a custom role, you must choose an organization or project to create it in. You can then grant the custom role on that organization or project, as well as on any resources within that organization or project.
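A sketch of Option C (the role ID, project, group, and permission list are illustrative placeholders):

```shell
# Create a custom role from an explicit permission list; because the list
# is fixed, future changes to predefined roles cannot broaden it.
gcloud iam roles create devOpsProdAccess \
    --project=prod-project \
    --title="DevOps Production Access" \
    --permissions=compute.instances.get,compute.instances.list,logging.logEntries.list

# Grant the custom role to the DevOps team on the production project only.
gcloud projects add-iam-policy-binding prod-project \
    --member="group:devops-team@example.com" \
    --role="projects/prod-project/roles/devOpsProdAccess"
```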
QUESTION: 28
You have a large 5-TB AVRO file stored in a Cloud Storage bucket. Your analysts are proficient only in SQL
and need access to the data stored in this file. You want to find a cost-effective way to complete their
request as soon as possible. What should you do?
Option A : Load data in Cloud Datastore and run a SQL query against it.
Option B : Create a BigQuery table and load data in BigQuery. Run a SQL query on this table and drop this
table after you complete your request.
Option C :
Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on these
external tables to complete your request
Option D :
Create a Hadoop cluster and copy the AVRO file to HDFS by compressing it. Load the file in a Hive table
and provide access to your analysts so that they can run SQL queries.
Correct Answer: C
Explanation/Reference:
https://cloud.google.com/bigquery/external-data-sources An external data source is a data source that you can query directly from BigQuery, even though the data is not stored in BigQuery storage. BigQuery supports the following external data sources: Amazon S3, Azure Storage, Cloud Bigtable, Cloud Spanner, Cloud SQL, Cloud Storage, and Drive.
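A sketch of Option C using the bq tool (the dataset, table, and bucket names are illustrative placeholders):

```shell
# Create a permanent external table over the Avro file; queries run in
# BigQuery without loading the 5 TB of data into BigQuery storage.
bq mk \
    --external_table_definition=AVRO=gs://my-bucket/data.avro \
    analytics.events_external

# Analysts can now query it with standard SQL.
bq query --use_legacy_sql=false \
    'SELECT COUNT(*) FROM analytics.events_external'
```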
QUESTION: 29
You are building an application that will run in your data center. The application will use Google Cloud
Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML.
You need to enable authentication to the APIs from your on-premises environment. What should you do?
Set up direct interconnect between your data center and Google Cloud Platform to enable authentication
for your on-premises applications.
Option D :
Go to the IAM & admin console, grant a user account permissions similar to the service account
permissions, and use this user account for authentication from your data center.
Correct Answer: B
Explanation/Reference:
such as on other platforms or on-premises, you must first establish the identity of the service account. Public/private key pairs
provide a secure way of accomplishing this goal. You can create a service account key using the Cloud Console, the gcloud
tool, the serviceAccounts. keys.create() method, or one of the client libraries. Ref: https://cloud.google.com/iam/docs/creating-
managing-service-account-keys
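A sketch of the key-based flow (the service account email, project, and key path are illustrative placeholders):

```shell
# Create a key for the service account that has access to AutoML.
gcloud iam service-accounts keys create ~/automl-key.json \
    --iam-account=automl-access@my-project.iam.gserviceaccount.com

# On the on-premises host, point Google client libraries at the key file.
export GOOGLE_APPLICATION_CREDENTIALS=~/automl-key.json
```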
QUESTION: 30
You have a Compute Engine instance hosting a production application. You want to receive an email if the
instance consumes more than 90% of its CPU resources for more than 15 minutes. You want to use Google
services. What should you do?
1. Create a Stackdriver Workspace, and associate your Google Cloud Platform (GCP) project with it.
2. Create an Alerting Policy in Stackdriver that uses the threshold as a trigger condition.
3. Configure your email address in the notification channel.
Option C :
1. Create a Stackdriver Workspace, and associate your GCP project with it.
2. Write a script that monitors the CPU usage and sends it as a custom metric to Stackdriver.
3. Create an uptime check for the instance in Stackdriver.
Option D :
1. In Stackdriver Logging, create a logs-based metric to extract the CPU usage by using this regular expression: CPU Usage: ([0-9]{1,3})%
2. In Stackdriver Monitoring, create an Alerting Policy based on this metric.
3. Configure your email address in the notification channel.
Correct Answer: B
Explanation/Reference:
Specifying conditions for alerting policies This page describes how to specify conditions for alerting policies. The conditions for
an alerting policy define what is monitored and when to trigger an alert. For example, suppose you want to define an alerting
policy that emails you if the CPU utilization of a Compute Engine VM instance is above 80% for more than 3 minutes. You use
the conditions dialog to specify that you want to monitor the CPU utilization of a Compute Engine VM instance, and that you
want an alerting policy to trigger when that utilization is above 80% for 3 minutes.
https://cloud.google.com/monitoring/alerts/ui-conditions-ga
https://cloud.google.com/monitoring/alerts/using-alerting-ui
https://cloud.google.com/monitoring/support/notification-options
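Translated into a Cloud Monitoring alerting-policy condition, the 90%-for-15-minutes trigger from the question looks roughly like this. This is a hedged sketch of the policy JSON: the field names follow the Monitoring v3 API, but the display names, project ID, and channel ID are placeholders.

```json
{
  "displayName": "High CPU on production instance",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "CPU utilization > 90% for 15 minutes",
      "conditionThreshold": {
        "filter": "metric.type=\"compute.googleapis.com/instance/cpu/utilization\" AND resource.type=\"gce_instance\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0.9,
        "duration": "900s"
      }
    }
  ],
  "notificationChannels": [
    "projects/my-project/notificationChannels/CHANNEL_ID"
  ]
}
```

The 15-minute window appears as `"duration": "900s"`, and the email address itself lives on the referenced notification channel, not in the policy.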
QUESTION: 31
You have a number of compute instances belonging to an unmanaged instance group. You need to SSH into
one of the Compute Engine instances to run an ad hoc script. You have already authenticated gcloud, but
you do not yet have an SSH key deployed. In the fewest steps possible, what is the easiest way to SSH to the
instance?
Option A : Run gcloud compute instances list to get the IP address of the instance, then use the ssh
command.
Option B : Use the gcloud compute ssh command.
Option C : Create a key with the ssh-keygen command. Then use the gcloud compute ssh command.
Option D : Create a key with the ssh-keygen command. Upload the key to the instance. Run gcloud
compute instances list to get the IP address of the instance, then use the ssh command.
Correct Answer: B
Explanation/Reference:
gcloud compute ssh ensures that the user's public SSH key is present in the project's metadata. If the user does not have a
public SSH key, one is generated using ssh-keygen and added to the project's metadata. This is similar to the other option,
where we copy the key explicitly to the project's metadata, but here it is done automatically for us. There are also security
benefits with this approach: when we use gcloud compute ssh to connect to Linux instances, we add a layer of security by
storing host keys as guest attributes. Storing SSH host keys as guest attributes improves the security of your connections by
helping to protect against vulnerabilities such as man-in-the-middle (MITM) attacks. On the initial boot of a VM instance, if
guest attributes are enabled, Compute Engine stores the generated host keys as guest attributes. Compute Engine then uses
the host keys stored during the initial boot to verify all subsequent connections to the VM instance.
Ref: https://cloud.google.com/compute/docs/instances/connecting-to-instance
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/ssh
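The contrast between the options is easiest to see in the commands themselves (the instance name, zone, key path, and user below are placeholders):

```shell
# Option B: a single step. gcloud generates a key if none exists,
# propagates it via project metadata, and opens the session.
gcloud compute ssh my-instance --zone=us-central1-a

# Options C/D spelled out manually, for comparison:
# ssh-keygen -t rsa -f ~/.ssh/gcp-key        # generate a key yourself
# gcloud compute instances list              # look up the external IP
# ssh -i ~/.ssh/gcp-key user@EXTERNAL_IP     # connect directly
```

The single gcloud command subsumes every manual step, which is why it satisfies the "fewest steps possible" constraint.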
QUESTION: 32
You are developing a new application and are looking for a Jenkins installation to build and deploy your
source code. You want to automate the installation as quickly and easily as possible. What should you do?
Create an instance template with the Jenkins executable. Create a managed instance group with this
template.
Correct Answer: A
Explanation/Reference:
Installing Jenkins: In this section, you use Cloud Marketplace to provision a Jenkins instance. You customize this instance to
use the agent image you created in the previous section. Go to the Cloud Marketplace solution for Jenkins. Click Launch on
Compute Engine. Change the Machine Type field to 4 vCPUs 15 GB Memory, n1-standard-4. Click Deploy and wait for your
Jenkins instance to finish being provisioned.
engine#installing_jenkins