Associate Cloud Engineer
https://www.2passeasy.com/dumps/Associate-Cloud-Engineer/
NEW QUESTION 1
Your company has a single sign-on (SSO) identity provider that supports Security Assertion Markup Language (SAML) integration with service providers. Your
company has users in Cloud Identity. You would like users to authenticate using your company’s SSO provider. What should you do?
A. In Cloud Identity, set up SSO with Google as an identity provider to access custom SAML apps.
B. In Cloud Identity, set up SSO with a third-party identity provider with Google as a service provider.
C. Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Mobile & Desktop Apps.
D. Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Web Server Applications.
Answer: B
Explanation:
https://support.google.com/cloudidentity/answer/6262987?hl=en&ref_topic=7558767
NEW QUESTION 2
You are developing a new application and are looking for a Jenkins installation to build and deploy your source code. You want to automate the installation as
quickly and easily as possible. What should you do?
Answer: A
Explanation:
Installing Jenkins
In this section, you use Cloud Marketplace to provision a Jenkins instance. You customize this instance to use the agent image you created in the previous
section.
Go to the Cloud Marketplace solution for Jenkins. Click Launch on Compute Engine.
Change the Machine Type field to 4 vCPUs 15 GB Memory, n1-standard-4.
Machine type selection for Jenkins deployment.
Click Deploy and wait for your Jenkins instance to finish being provisioned. When it is finished, you will see: Jenkins has been deployed.
https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine#installing_jenkins
NEW QUESTION 3
Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on Compute Engine. Each member
of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and
must be able to determine who accessed a given instance. What should you do?
Answer: C
Explanation:
https://cloud.google.com/compute/docs/instances/managing-instance-access
NEW QUESTION 4
Your organization has a dedicated person who creates and manages all service accounts for Google Cloud projects. You need to assign this person the minimum
role for projects. What should you do?
Answer: D
NEW QUESTION 5
You want to add a new auditor to a Google Cloud Platform project. The auditor should be allowed to read, but not modify, all project items.
How should you configure the auditor's permissions?
Answer: C
NEW QUESTION 6
Your company's security vulnerability management policy requires a member of the security team to have visibility into vulnerabilities and other OS metadata for a specific Compute Engine instance. This Compute Engine instance hosts a critical application in your Google Cloud project. You need to implement your company's security vulnerability management policy. What should you do?
A. • Ensure that the Ops Agent is installed on the Compute Engine instance.• Create a custom metric in the Cloud Monitoring dashboard.• Provide the security team member with access to this dashboard.
B. • Ensure that the Ops Agent is installed on the Compute Engine instance.• Provide the security team member the roles/osconfig.inventoryViewer role.
C. • Ensure that the OS Config agent is installed on the Compute Engine instance.• Provide the security team member the roles/osconfig.vulnerabilityReportViewer role.
D. • Ensure that the OS Config agent is installed on the Compute Engine instance.• Create a log sink to a BigQuery dataset.• Provide the security team member with access to this dataset.
Answer: C
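As a sketch of how the security team member could then view the findings (instance name and zone are placeholders, assuming the OS Config API is enabled on the project):
gcloud compute os-config vulnerability-reports describe my-critical-vm --location=us-central1-a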
NEW QUESTION 7
You have developed an application that consists of multiple microservices, with each microservice packaged in its own Docker container image. You want to
deploy the entire application on Google Kubernetes Engine so
that each microservice can be scaled individually. What should you do?
Answer: A
NEW QUESTION 8
You are managing a Data Warehouse on BigQuery. An external auditor will review your company's processes, and multiple external consultants will need view
access to the data. You need to provide them with view access while following Google-recommended practices. What should you do?
Answer: D
NEW QUESTION 9
You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the
content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You
want to use the most secure method that requires the fewest steps. What should you do?
A. Create a signed URL with a four-hour expiration and share the URL with the company.
B. Set object access to ‘public’ and use object lifecycle management to remove the object after four hours.
C. Configure the storage bucket as a static website and furnish the object’s URL to the company. Delete the object from the storage bucket after four hours.
D. Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.
Answer: A
Explanation:
Signed URLs are used to give time-limited resource access to anyone in possession of the URL, regardless of whether they have a Google account.
https://cloud.google.com/storage/docs/access-control/signed-urls
NEW QUESTION 10
You have files in a Cloud Storage bucket that you need to share with your suppliers. You want to restrict the time that the files are available to your suppliers to 1
hour. You want to follow Google recommended practices. What should you do?
A. Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -m 1h gs:///*.
B. Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -d 1h gs:///**.
C. Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account.
Answer: B
Explanation:
This command correctly specifies the duration that the signed url should be valid for by using the -d flag. The default is 1 hour so omitting the -d flag would have
also resulted in the same outcome. Times may be specified with no suffix (default hours), or with s = seconds, m = minutes, h = hours, d = days. The max duration
allowed is 7d.Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
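A minimal sketch of the winning command, assuming a service account key file named key.json and a hypothetical bucket name:
gsutil signurl -d 1h key.json gs://supplier-files/**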
NEW QUESTION 10
You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow
Google-recommended practices. What should you do?
Answer: C
Explanation:
https://cloud.google.com/iam/docs/understanding-service-accounts#using_service_accounts_with_compute_eng
https://cloud.google.com/storage/docs/access-control/iam-roles
NEW QUESTION 11
You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in
the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?
A. Configure username and password by using the gcloud config set proxy/username and gcloud config set proxy/password commands.
B. Encode the username and password in SHA-256 encoding, and save it to a text file. Use the filename as a value in the gcloud config set core/custom_ca_certs_file command.
C. Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configuration file.
D. Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.
Answer: D
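For illustration, the properties can be supplied per session so they never land in gcloud's configuration or logs (values are placeholders):
export CLOUDSDK_PROXY_USERNAME=proxyuser
export CLOUDSDK_PROXY_PASSWORD=proxypass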
NEW QUESTION 13
You’ve deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below:
You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices. What should
you do?
A. Store the database password inside the Docker image of the container, not in the YAML file.
B. Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret.
C. Store the database password inside a ConfigMap object. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap.
D. Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.
Answer: B
Explanation:
https://cloud.google.com/config-connector/docs/how-to/secrets#gcloud
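A minimal sketch of the Secret-based approach (secret and key names are hypothetical):
kubectl create secret generic myapp1-secrets --from-literal=db-password='<password>'
Then, in the Deployment YAML, the environment variable references the Secret instead of a literal value:
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: myapp1-secrets
      key: db-password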
NEW QUESTION 15
Your team is running an on-premises ecommerce application. The application contains a complex set of microservices written in Python, and each microservice is
running on Docker containers. Configurations are injected by using environment variables. You need to deploy your current application to a serverless Google Cloud solution. What should you do?
A. Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to Cloud Run. Update the configurations and the required endpoints.
B. Use your existing continuous integration and delivery (CI/CD) pipeline. Use the generated Docker images and deploy them to Cloud Functions. Use the same configuration as on-premises.
C. Use the existing codebase and deploy each service as a separate Cloud Function. Update the configurations and the required endpoints.
D. Use your existing codebase and deploy each service as a separate Cloud Run service. Use the same configurations as on-premises.
Answer: A
NEW QUESTION 20
You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service
account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment. What should you do?
Answer: B
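The usual pattern for authenticating on-premises workloads is to create a service account key and activate it for API calls (service account email and file path are placeholders):
gcloud iam service-accounts keys create key.json --iam-account=automl-sa@PROJECT_ID.iam.gserviceaccount.com
gcloud auth activate-service-account --key-file=key.json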
NEW QUESTION 24
You need to create a copy of a custom Compute Engine virtual machine (VM) to facilitate an expected increase in application traffic due to a business acquisition.
What should you do?
Answer: D
Explanation:
A custom image belongs only to your project. To create an instance with a custom image, you must first have a custom image.
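A sketch of creating the copy via a custom image (names and zone are placeholders):
gcloud compute images create my-app-image --source-disk=my-vm --source-disk-zone=us-central1-a
gcloud compute instances create my-vm-copy --image=my-app-image --zone=us-central1-a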
NEW QUESTION 29
Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google’s recommended practices. Which
storage option should you use?
A. Multi-Regional Storage
B. Regional Storage
C. Nearline Storage
D. Coldline Storage
Answer: D
NEW QUESTION 33
You are in charge of provisioning access for all Google Cloud users in your organization. Your company recently acquired a startup company that has their own
Google Cloud organization. You need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the startup company's
organization as in your own organization. What should you do?
A. In the Google Cloud console for your organization, select Create role from selection, and choose destination as the startup company's organization
B. In the Google Cloud console for the startup company, select Create role from selection and choose source as the startup company's Google Cloud organization.
C. Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud Organization as the destination.
D. Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup company's organization as the destination.
Answer: C
Explanation:
The gcloud iam roles copy command copies custom IAM roles from one project or organization to another. By specifying the startup company's Organization ID as the destination, the custom roles that define your SREs' project permissions are replicated into the acquired organization in a single step, rather than copied project by project.
Ref: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy
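A sketch of the copy operation (role IDs and Organization IDs are placeholders):
gcloud iam roles copy --source=sreCustomRole --source-organization=SOURCE_ORG_ID --destination=sreCustomRole --dest-organization=DEST_ORG_ID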
NEW QUESTION 34
You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?
A. After the VM has been created, use your Google Account credentials to log in into the VM.
B. After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
C. When creating the VM, add metadata to the instance using ‘windows-password’ as the key and a password as the value.
D. After the VM has been created, download the JSON private key for the default Compute Engine service account. Use the credentials in the JSON file to log in to the VM.
Answer: B
Explanation:
You can generate Windows passwords using either the Google Cloud Console or the gcloud command-line tool. This option uses the right syntax to reset the
windows password.
gcloud compute reset-windows-password windows-instance
Ref: https://cloud.google.com/compute/docs/instances/windows/creating-passwords-for-windows-instances#gc
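A fuller invocation for reference (zone and user are placeholders):
gcloud compute reset-windows-password windows-instance --zone=us-central1-a --user=admin_user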
NEW QUESTION 38
Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to maintain the number of running
instances specified by the template to be able to process expected application traffic. What should you do?
A. Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.
B. Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.
C. Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.
D. Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.
Answer: A
Explanation:
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-migs
https://cloud.google.com/compute/docs/instance-templates#how_to_update_instance_templates
NEW QUESTION 39
You have deployed an application on a Compute Engine instance. An external consultant needs to access the Linux-based instance. The consultant is connected
to your corporate network through a VPN connection, but the consultant has no Google account. What should you do?
A. Instruct the external consultant to use the gcloud compute ssh command line tool by using Identity-Aware Proxy to access the instance.
B. Instruct the external consultant to use the gcloud compute ssh command line tool by using the public IP address of the instance to access it.
C. Instruct the external consultant to generate an SSH key pair, and request the public key from the consultant.Add the public key to the instance yourself, and
have the consultant access the instance through SSH with their private key.
D. Instruct the external consultant to generate an SSH key pair, and request the private key from the consultant.Add the private key to the instance yourself, and
have the consultant access the instance through SSH with their public key.
Answer: C
Explanation:
The best option is to instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Then, add the public key to the
instance yourself, and have the consultant access the instance through SSH with their private key. This way, you can grant the consultant access to the instance
without requiring a Google account or exposing the instance’s public IP address. This option also follows the best practice of using user-managed SSH keys
instead of service account keys for SSH access1.
Option A is not feasible because the external consultant does not have a Google account, and therefore cannot use Identity-Aware Proxy (IAP) to access the
instance. IAP requires the user to authenticate with a Google account and have the appropriate IAM permissions to access the instance2. Option B is not secure
because it exposes the instance’s public IP address, which can increase the risk of unauthorized access or attacks. Option D is not correct because it reverses the
roles of the public and private keys. The public key should be added to the instance, and the private key should be kept by the consultant. Sharing the private key
with anyone else can compromise the security of the SSH connection3.
References:
1: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
2: https://cloud.google.com/iap/docs/using-tcp-forwarding
3: https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances
NEW QUESTION 44
The core business of your company is to rent out construction equipment at a large scale. All the equipment that is being rented out has been equipped with
multiple sensors that send event information every few seconds. These signals can vary from engine status, distance traveled, fuel level, and more. Customers are
billed based on the consumption monitored by these sensors. You expect high throughput – up to thousands of events per hour per device – and need to retrieve
consistent data based on the time of the event. Storing and retrieving individual signals should be atomic. What should you do?
A. Create a file in Cloud Storage per device and append new data to that file.
B. Create a file in Cloud Filestore per device and append new data to that file.
C. Ingest the data into Datastore. Store data in an entity group based on the device.
D. Ingest the data into Cloud Bigtable. Create a row key based on the event timestamp.
Answer: D
Explanation:
Keyword need to look for
- "High Throughput",
- "Consistent",
- "Property based data insert/fetch like ngine status, distance traveled, fuel level, and more." which can be designed in column,
- "Large Scale Customer Base + Each Customer has multiple sensor which send event in seconds" This will go for pera bytes situation,
- Export data based on the time of the event.
- Atomic
o BigTable will fit all requirement. o DataStore is not fully Atomic
o CloudStorage is not a option where we can export data based on time of event. We need another solution to do that
o FireStore can be used with MobileSDK.
NEW QUESTION 46
You used the gcloud container clusters command to create two Google Kubernetes Engine (GKE) clusters: prod-cluster and dev-cluster.
• prod-cluster is a standard cluster.
• dev-cluster is an Autopilot cluster.
When you run the kubectl get nodes command, you only see the nodes from prod-cluster. Which commands should you run to check the node status for dev-cluster?
A.
B.
C.
D.
Answer: C
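The likely intent, sketched with a placeholder region: fetch credentials for the Autopilot cluster so kubectl points at it, then list its nodes:
gcloud container clusters get-credentials dev-cluster --region=us-central1
kubectl get nodes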
NEW QUESTION 51
You have an application running in Google Kubernetes Engine (GKE) with cluster autoscaling enabled. The application exposes a TCP endpoint. There are
several replicas of this application. You have a Compute Engine instance in the same region, but in another Virtual Private Cloud (VPC), called gce-network, that
has no overlapping IP ranges with the first VPC. This instance needs to connect to the application on GKE. You want to minimize effort. What should you do?
A. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Set the service's externalTrafficPolicy to Cluster.3. Configure
the Compute Engine instance to use the address of the load balancer that has been created.
B. 1. In GKE, create a Service of type NodePort that uses the application's Pods as backend.2. Create a Compute Engine instance called proxy with 2 network
interfaces, one in each VPC.3. Use iptables on this instance to forward traffic from gce-network to the GKE nodes.4. Configure the Compute Engine instance to
use the address of proxy in gce-network as endpoint.
C. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Add an annotation to this service: cloud.google.com/load-balancer-type: Internal.3. Peer the two VPCs together.4. Configure the Compute Engine instance to use the address of the load balancer that has been created.
D. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.2. Add a Cloud Armor Security Policy to the load balancer that whitelists the internal IPs of the MIG's instances.3. Configure the Compute Engine instance to use the address of the load balancer that has been created.
Answer: C
Explanation:
Option C peers the two VPCs (the statement makes sure that this option is feasible, since it clearly specifies that there is no overlap between the IP ranges of the two VPCs), deploys the load balancer as internal via the annotation, and configures the endpoint so that the Compute Engine instance can access the application internally, that is, without needing a public IP at any time and therefore without leaving the Google network. The traffic never crosses the public internet.
https://medium.com/pablo-perez/k8s-externaltrafficpolicy-local-or-cluster-40b259a19404
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
Clients in a VPC network connected to the LoadBalancer network using VPC Network Peering can also access the Service.
https://cloud.google.com/kubernetes-engine/docs/how-to/service-parameters
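A minimal sketch of the internal LoadBalancer Service (service and app names are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: myapp-ilb
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 8080
    protocol: TCP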
NEW QUESTION 52
You have been asked to set up Object Lifecycle Management for objects stored in storage buckets. The objects are written once and accessed frequently for 30
days. After 30 days, the objects are not read again unless there is a special need. The object should be kept for three years, and you need to minimize cost. What
should you do?
A. Set up a policy that uses Nearline storage for 30 days and then moves to Archive storage for three years.
B. Set up a policy that uses Standard storage for 30 days and then moves to Archive storage for three years.
C. Set up a policy that uses Nearline storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.
D. Set up a policy that uses Standard storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.
Answer: B
Explanation:
The key to understand the requirement is : "The objects are written once and accessed frequently for 30 days" Standard Storage
Standard Storage is best for data that is frequently accessed ("hot" data) and/or stored for only brief periods of time.
Archive Storage
Archive Storage is the lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery. Unlike the "coldest" storage services
offered by other Cloud providers, your data is available within milliseconds, not hours or days. Archive Storage is the best choice for data that you plan to access
less than once a year.
https://cloud.google.com/storage/docs/storage-classes#standard
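A sketch of a matching lifecycle configuration, applied with gsutil (bucket name is a placeholder; the delete rule at 1095 days assumes the objects may be removed once the three-year retention ends):
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"}, "condition": {"age": 30}},
    {"action": {"type": "Delete"}, "condition": {"age": 1095}}
  ]
}
gsutil lifecycle set lifecycle.json gs://my-backup-bucket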
NEW QUESTION 55
Your company runs one batch process in an on-premises server that takes around 30 hours to complete. The task runs monthly, can be performed offline, and
must be restarted if interrupted. You want to migrate this workload to the cloud while minimizing cost. What should you do?
Answer: D
Explanation:
Install the workload on a Compute Engine VM, and start and stop the instance as needed. As per the question, the job runs for 30 hours, can be performed offline, and must be restarted if interrupted. Preemptible VMs are cheaper, but they are not available beyond 24 hours, and if the process gets interrupted, the batch process must start over.
NEW QUESTION 56
Your company uses a large number of Google Cloud services centralized in a single project. All teams have specific projects for testing and development. The
DevOps team needs access to all of the production services in order to perform their job. You want to prevent Google Cloud product changes from broadening
their permissions in the future. You want to follow Google-recommended practices. What should you do?
A. Grant all members of the DevOps team the role of Project Editor on the organization level.
B. Grant all members of the DevOps team the role of Project Editor on the production project.
C. Create a custom role that combines the required permissions. Grant the DevOps team the custom role on the production project.
D. Create a custom role that combines the required permissions. Grant the DevOps team the custom role on the organization level.
Answer: C
Explanation:
Understanding IAM custom roles
Key Point: Custom roles enable you to enforce the principle of least privilege, ensuring that the user and service accounts in your organization have only the
permissions essential to performing their intended functions.
Basic concepts
Custom roles are user-defined, and allow you to bundle one or more supported permissions to meet your specific needs. Custom roles are not maintained by
Google; when new permissions, features, or services are added to Google Cloud, your custom roles will not be updated automatically.
When you create a custom role, you must choose an organization or project to create it in. You can then grant the custom role on the organization or project, as
well as any resources within that organization or project.
https://cloud.google.com/iam/docs/understanding-custom-roles#basic_concepts
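A sketch of creating and granting such a role (role ID, permissions, project, and group are placeholders):
gcloud iam roles create devOpsProdRole --project=prod-project --title="DevOps Prod" --permissions=compute.instances.get,compute.instances.list
gcloud projects add-iam-policy-binding prod-project --member=group:devops@example.com --role=projects/prod-project/roles/devOpsProdRole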
NEW QUESTION 61
You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few
requests per day. You want to minimize costs. What should you do?
Answer: A
Explanation:
Cloud Run takes any container images and pairs great with the container ecosystem: Cloud Build, Artifact Registry, Docker. ... No infrastructure to manage: once
deployed, Cloud Run manages your services so you can sleep well. Fast autoscaling. Cloud Run automatically scales up or down from zero to N depending on
traffic.
https://cloud.google.com/run
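A minimal deploy sketch (service name, image, and region are placeholders); Cloud Run scales to zero between requests, so a rarely used endpoint costs almost nothing:
gcloud run deploy my-service --image=gcr.io/PROJECT_ID/my-app --region=us-central1 --allow-unauthenticated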
NEW QUESTION 62
For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already
installed the Stackdriver Logging agent on all the instances. You want to minimize cost. What should you do?
A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances.2. Update your instances’ metadata to add
the following value: logs-destination:bq://platform-logs.
B. 1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink.2.Create a Cloud Function that is triggered by messages in the
logs topic.3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.
C. 1. In Stackdriver Logging, create a filter to view only Compute Engine logs.2. Click Create Export.3.Choose BigQuery as Sink Service, and the platform-logs
dataset as Sink Destination.
D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset.2. Configure this Cloud Function to create a BigQuery Job that
executes this query:INSERT INTOdataset.platform-logs (timestamp, log)SELECT timestamp, log FROM compute.logsWHERE timestamp>
DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)3. Use Cloud Scheduler to trigger this Cloud Function once a day.
Answer: C
Explanation:
* 1. In Stackdriver Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs
dataset as Sink Destination.
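A sketch of the same export from the command line (project and dataset names are placeholders; the sink's writer identity must then be granted BigQuery Data Editor on the dataset):
gcloud logging sinks create compute-logs-sink bigquery.googleapis.com/projects/PROJECT_ID/datasets/platform_logs --log-filter='resource.type="gce_instance"'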
NEW QUESTION 63
You have two subnets (subnet-a and subnet-b) in the default VPC. Your database servers are running in subnet-a. Your application servers and web servers are
running in subnet-b. You want to configure a firewall rule that only allows database traffic from the application servers to the database servers. What should you
do?
A. • Create service accounts sa-app and sa-db.• Associate the service account sa-app with the application servers and the service account sa-db with the database servers.• Create an ingress firewall rule to allow network traffic from source service account sa-app to target service account sa-db.
B. • Create network tags app-server and db-server.• Add the app-server tag to the application servers and the db-server tag to the database servers.• Create an egress firewall rule to allow network traffic from source network tag app-server to target network tag db-server.
C. • Create a service account sa-app and a network tag db-server.• Associate the service account sa-app with the application servers and the network tag db-server with the database servers.• Create an ingress firewall rule to allow network traffic from source VPC IP addresses and target the subnet-a IP addresses.
D. • Create a network tag app-server and service account sa-db.• Add the tag to the application servers and associate the service account with the database servers.• Create an egress firewall rule to allow network traffic from source network tag app-server to target service account sa-db.
Answer: A
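A sketch of the firewall rule from the winning option (project, network, and port are placeholders):
gcloud compute firewall-rules create allow-app-to-db --network=default --direction=INGRESS --action=ALLOW --rules=tcp:3306 --source-service-accounts=sa-app@PROJECT_ID.iam.gserviceaccount.com --target-service-accounts=sa-db@PROJECT_ID.iam.gserviceaccount.com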
NEW QUESTION 65
You are given a project with a single virtual private cloud (VPC) and a single subnetwork in the us-central1 region. There is a Compute Engine instance hosting an
application in this subnetwork. You need to deploy a new instance in the same project in the europe-west1 region. This new instance needs access to the
application. You want to follow Google-recommended practices. What should you do?
A. 1. Create a subnetwork in the same VPC, in europe-west1.2. Create the new instance in the new subnetwork and use the first instance's private address as the
endpoint.
B. 1. Create a VPC and a subnetwork in europe-west1.2. Expose the application with an internal load balancer.3. Create the new instance in the new subnetwork
and use the load balancer's address as the endpoint.
C. 1. Create a subnetwork in the same VPC, in europe-west1.2. Use Cloud VPN to connect the two subnetworks.3. Create the new instance in the new
subnetwork and use the first instance's private address as the endpoint.
D. 1. Create a VPC and a subnetwork in europe-west1.2. Peer the 2 VPCs.3. Create the new instance in the new subnetwork and use the first instance's private
address as the endpoint.
Answer: C
Explanation:
Given that the new instance wants to access the application on the existing Compute Engine instance, these applications seem to be related, so they should be within the same VPC. It is possible to have them in different VPCs and peer the VPCs, but this is a lot of additional work and we can simplify by choosing the option below (which is the answer):
* 1. Create a subnetwork in the same VPC, in europe-west1.
* 2. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.
We can create another subnetwork in the same VPC, located in europe-west1, and then spin up a new instance in this subnetwork. We also have to set up a firewall rule to allow communication between the two subnetworks. All instances in the two subnetworks of the same VPC can communicate through their internal IP addresses.
Ref: https://cloud.google.com/vpc
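A sketch of creating the additional subnet (VPC name and IP range are placeholders):
gcloud compute networks subnets create europe-subnet --network=my-vpc --region=europe-west1 --range=10.2.0.0/20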
NEW QUESTION 66
You are managing several Google Cloud Platform (GCP) projects and need access to all logs for the past 60 days. You want to be able to explore and quickly
analyze the log contents. You want to follow Google-recommended practices to obtain the combined logs for all projects. What should you do?
Answer: B
Explanation:
Navigate to Stackdriver Logging and select resource.labels.project_id=*. is not right.
Log entries are held in Stackdriver Logging for a limited time known as the retention period, which is 30 days (default configuration). After that, the entries are deleted. To keep log entries longer, you need to export them outside of Stackdriver Logging by configuring log sinks.
Ref: https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-cloud-audit-logging
Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days. is not right.
While this works, it makes no sense to use a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery when Google provides a feature (export sinks) that does exactly the same thing and works out of the box.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days. is not right.
You can export logs by creating one or more sinks that include a logs query and an export destination. Supported destinations for exported log entries are Cloud Storage, BigQuery, and Pub/Sub. Sinks are limited to exporting log entries from the exact resource in which the sink was created: a Google Cloud project, organization, folder, or billing account. To export from all projects of an organization, you can create an aggregated sink that can export log entries from all the projects, folders, and billing accounts of a Google Cloud organization.
Ref: https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we would then have the data in Cloud Storage, but querying log information from Cloud Storage is harder than querying information from a BigQuery dataset. For this reason, we should prefer BigQuery over Cloud Storage.
Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days. is the right answer.
Using the same export mechanism (an aggregated sink where needed), we now have the data in a BigQuery dataset. Querying information from a BigQuery dataset is easier and quicker than analyzing contents in a Cloud Storage bucket. As our requirement is to quickly analyze the log contents, we should prefer BigQuery.
Also, you can control storage costs and optimize storage usage by setting the default table expiration for newly created tables in a dataset. If you set the property when the dataset is created, any table created in the dataset is deleted after the expiration period. If you set the property after the dataset is created, only new tables are deleted after the expiration period. For example, if you set the default table expiration to 7 days, older data is automatically deleted after 1 week.
Ref: https://cloud.google.com/bigquery/docs/best-practices-storage
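A sketch of an aggregated sink at the organization level (Organization ID, project, and dataset are placeholders):
gcloud logging sinks create org-logs-sink bigquery.googleapis.com/projects/PROJECT_ID/datasets/all_logs --organization=ORG_ID --include-children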
NEW QUESTION 71
You have successfully created a development environment in a project for an application. This application uses Compute Engine and Cloud SQL. Now, you need
to create a production environment for this application.
The security team has forbidden the existence of network routes between these 2 environments, and asks you to follow Google-recommended practices. What
should you do?
A. Create a new project, enable the Compute Engine and Cloud SQL APIs in that project, and replicate the setup you have created in the development environment.
B. Create a new production subnet in the existing VPC and a new production Cloud SQL instance in your existing project, and deploy your application using those resources.
C. Create a new project, modify your existing VPC to be a Shared VPC, share that VPC with your new project, and replicate the setup you have in the development environment in that new project, in the Shared VPC.
D. Ask the security team to grant you the Project Editor role in an existing production project used by another division of your company. Once they grant you that role, replicate the setup you have in the development environment in that project.
Answer: A
Explanation:
This aligns with Google's recommended practices. By creating a new project, we achieve complete isolation between the development and production environments, as well as isolate this production application from the production applications of other departments.
Ref: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#define-hierarchy
NEW QUESTION 74
You want to find out when users were added to Cloud Spanner Identity Access Management (IAM) roles on your Google Cloud Platform (GCP) project. What
should you do in the GCP Console?
Answer: D
Explanation:
https://cloud.google.com/monitoring/audit-logging
NEW QUESTION 75
You are building a multi-player gaming application that will store game information in a database. As the popularity of the application increases, you are concerned
about delivering consistent performance. You need to ensure an optimal gaming performance for global users, without increasing the management complexity.
What should you do?
A. Use Cloud SQL database with cross-region replication to store game statistics in the EU, US, and APAC regions.
B. Use Cloud Spanner to store user data mapped to the game statistics.
C. Use BigQuery to store game statistics with a Redis on Memorystore instance in the front to provide global consistency.
D. Store game statistics in a Bigtable database partitioned by username.
Answer: B
NEW QUESTION 79
You have an application that uses Cloud Spanner as a backend database. The application has a very predictable traffic pattern. You want to automatically scale up
or down the number of Spanner nodes depending on traffic. What should you do?
A. Create a cron job that runs on a scheduled basis to review Stackdriver monitoring metrics, and then resize the Spanner instance accordingly.
B. Create a Stackdriver alerting policy to send an alert to on-call SRE emails when Cloud Spanner CPU exceeds the threshold. SREs would scale resources up or down accordingly.
C. Create a Stackdriver alerting policy to send an alert to Google Cloud Support email when Cloud Spanner CPU exceeds your threshold. Google support would scale resources up or down accordingly.
D. Create a Stackdriver alerting policy to send an alert to a webhook when Cloud Spanner CPU is over or under your threshold. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.
Answer: D
Explanation:
CPU utilization is a recommended proxy for traffic when it comes to Cloud Spanner. See "Alerts for high CPU utilization": Google's documentation specifies recommended maximum CPU usage for both single-region and multi-region instances. These numbers are to ensure that your instance has enough compute capacity to continue to serve your traffic in the event of the loss of an entire zone (for single-region instances) or an entire region (for multi-region instances). https://cloud.google.com/spanner/docs/cpu-utilization
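For reference, the resize call such a Cloud Function would issue (instance name and node count are placeholders):
gcloud spanner instances update my-spanner-instance --nodes=5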
NEW QUESTION 82
You are deploying a production application on Compute Engine. You want to prevent anyone from accidentally destroying the instance by clicking the wrong
button. What should you do?
Answer: D
Explanation:
Preventing Accidental VM Deletion This document describes how to protect specific VM instances from deletion by setting the deletionProtection property on an
Instance resource. To learn more about VM instances, read the Instances documentation. As part of your workload, there might be certain VM instances that are
critical to running your application or services, such as an instance running a SQL server, a server used as a license manager, and so on. These VM instances
might need to stay running indefinitely so you need a way to protect these VMs from being deleted. By setting the deletionProtection flag, a VM instance can be
protected from accidental deletion. If a user attempts to delete a VM instance for which you have set the deletionProtection flag, the request fails. Only a user that
has been granted a role with compute.instances.create permission can reset the flag to allow the resource to be deleted.
https://cloud.google.com/compute/docs/instances/preventing-accidental-vm-deletion
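A sketch of setting the flag on an existing VM (instance name and zone are placeholders):
gcloud compute instances update my-prod-instance --zone=us-central1-a --deletion-protection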
NEW QUESTION 86
You created an instance of SQL Server 2017 on Compute Engine to test features in the new version. You want to connect to this instance using the fewest number
of steps. What should you do?
Answer: D
Explanation:
https://cloud.google.com/compute/docs/instances/connecting-to-windows#remote-desktop-connection-app
https://cloud.google.com/compute/docs/instances/windows/generating-credentials
https://cloud.google.com/compute/docs/instances/connecting-to-windows#before-you-begin
NEW QUESTION 87
You need to deploy an application in Google Cloud using serverless technology. You want to test a new version of the application with a small percentage of production traffic. What should you do?
Answer: A
NEW QUESTION 88
Your organization has user identities in Active Directory. Your organization wants to use Active Directory as their source of truth for identities. Your organization
wants to have full control over the Google accounts used by employees for all Google services, including your Google Cloud Platform (GCP) organization. What
should you do?
A. Use Google Cloud Directory Sync (GCDS) to synchronize users into Cloud Identity.
B. Use the Cloud Identity APIs and write a script to synchronize users to Cloud Identity.
C. Export users from Active Directory as a CSV and import them to Cloud Identity via the Admin Console.
D. Ask each employee to create a Google account using self signup. Require that each employee use their company email address and password.
Answer: A
NEW QUESTION 91
You have an application that uses Cloud Spanner as a database backend to keep current state information about users. Cloud Bigtable logs all events triggered by
users. You export Cloud Spanner data to Cloud Storage during daily backups. One of your analysts asks you to join data from Cloud Spanner and Cloud Bigtable
for specific users. You want to complete this ad hoc request as efficiently as possible. What should you do?
A. Create a Dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.
B. Create a Dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.
C. Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.
D. Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.
Answer: D
Explanation:
"The Cloud Spanner to Cloud Storage Text template is a batch pipeline that reads in data from a Cloud
Spanner table, optionally transforms the data via a JavaScript User Defined Function (UDF) that you provide, and writes it to Cloud Storage as CSV text files."
https://cloud.google.com/dataflow/docs/guides/templates/provided-batch#cloudspannertogcstext
"The Dataflow connector for Cloud Spanner lets you read data from and write data to Cloud Spanner in a Dataflow pipeline"
https://cloud.google.com/spanner/docs/dataflow-connector https://cloud.google.com/bigquery/external-data-sources
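A sketch of the ad hoc join once both external tables exist (dataset, table, and column names are hypothetical):
bq query --use_legacy_sql=false 'SELECT s.user_id, b.event FROM mydataset.spanner_export s JOIN mydataset.bigtable_events b ON s.user_id = b.user_id WHERE s.user_id = "user123"'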
NEW QUESTION 95
You have a Linux VM that must connect to Cloud SQL. You created a service account with the appropriate access rights. You want to make sure that the VM uses
this service account instead of the default Compute Engine service account. What should you do?
A. When creating the VM via the web console, specify the service account under the ‘Identity and API Access’ section.
B. Download a JSON private key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account.
C. Download a JSON private key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account.
D. Download a JSON private key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.
Answer: A
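A sketch of the equivalent at VM creation time from the CLI (names are placeholders):
gcloud compute instances create my-vm --service-account=cloudsql-sa@PROJECT_ID.iam.gserviceaccount.com --scopes=cloud-platform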
NEW QUESTION 96
You need to add a group of new users to Cloud Identity. Some of the users already have existing Google accounts. You want to follow one of Google's
recommended practices and avoid conflicting accounts. What should you do?
Answer: A
Explanation:
https://cloud.google.com/architecture/identity/migrating-consumer-accounts
NEW QUESTION 99
Users of your application are complaining of slowness when loading the application. You realize the slowness is because the App Engine deployment serving the
application is deployed in us-central whereas all users of this application are closest to europe-west3. You want to change the region of the App Engine application
to europe-west3 to minimize latency. What’s the best way to change the App Engine region?
Answer: A
Explanation:
App Engine is a regional service, which means the infrastructure that runs your app(s) is located in a specific region and is managed by Google to be redundantly available across all the zones within that region. Once an App Engine deployment is created in a region, it can't be changed. The only way is to create a new project, create an App Engine instance in europe-west3, send all user traffic to this instance, and delete the App Engine instance in us-central.
Ref: https://cloud.google.com/appengine/docs/locations
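For reference, the region is fixed at creation time in the new project (region shown per the question):
gcloud app create --region=europe-west3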
Answer: C
A. Create a health check on port 443 and use that when creating the Managed Instance Group.
B. Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.
C. In the Instance Template, add the label ‘health-check’.
D. In the Instance Template, add a startup script that sends a heartbeat to the metadata server.
Answer: A
Explanation:
https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs#setting_up_an_autoheali
Answer: A
A. Set up a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”
B. Set up a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”
C. Export the billing data to BigQuery. Create a Cloud Function that uses BigQuery to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly.
D. Use the Stackdriver Logging Agent to export the Apache web server logs to Stackdriver Logging. Create a Cloud Function that uses BigQuery to parse the HTTP response log data in Stackdriver for the current month and sends an email if the size of all HTTP responses, multiplied by current GCP egress prices, totals over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly.
Answer: C
Explanation:
https://blog.doit-intl.com/the-truth-behind-google-cloud-egress-traffic-6e8f57b5c2f8
Answer: B
A. Deploy the new version in the same application and use the --migrate option.
B. Deploy the new version in the same application and use the --splits option to give a weight of 99 to the current version and a weight of 1 to the new version.
C. Create a new App Engine application in the same project. Deploy the new version in that application. Use the App Engine library to proxy 1% of the requests to the new version.
D. Create a new App Engine application in the same project. Deploy the new version in that application. Configure your network load balancer to send 1% of the traffic to that new application.
Answer: B
Explanation:
https://cloud.google.com/appengine/docs/standard/python/splitting-traffic#gcloud
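A sketch of the split (service and version IDs are placeholders):
gcloud app services set-traffic default --splits=v1=0.99,v2=0.01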
A. • Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions in one project within the Google Cloud organization.• Copy the role
across all projects created within the organization with the gcloud iam roles copy command.• Assign the role to developers in those projects.
B. • Add all developers to a Google group in Google Groups for Workspace.• Assign the predefined role of Compute Admin to the Google group at the Google
Cloud organization level.
C. • Add all developers to a Google group in Cloud Identity.• Assign predefined roles for Compute Engine, Cloud Functions, and Cloud SQL permissions to the
Google group for each project in the Google Cloud organization.
D. • Add all developers to a Google group in Cloud Identity.• Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions at the
Google Cloud organization level.• Assign the custom role to the Google group.
Answer: D
Explanation:
https://www.cloudskillsboost.google/focuses/1035?parent=catalog#:~:text=custom%20role%20at%20the%20or
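A sketch of the org-level role and group binding (Organization ID, permissions, and group are placeholders):
gcloud iam roles create developerRole --organization=ORG_ID --title="Developer" --permissions=compute.instances.list,cloudfunctions.functions.list,cloudsql.instances.list
gcloud organizations add-iam-policy-binding ORG_ID --member=group:developers@example.com --role=organizations/ORG_ID/roles/developerRole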
A. Review the Compute Engine activity logs. Select and review the Admin Event logs.
B. Review the Compute Engine activity logs. Select and review the System Event logs.
C. Install the Cloud Logging Agent. In Cloud Logging, review the Compute Engine syslog logs.
D. Install the Cloud Logging Agent. In Cloud Logging, review the Compute Engine operation logs.
Answer: A
Answer: D
Explanation:
Cloud Functions is Google Cloud’s event-driven serverless compute platform that lets you run your code locally or in the cloud without having to provision servers.
Cloud Functions scales up or down, so you pay only for compute resources you use. Cloud Functions have excellent integration with Cloud Pub/Sub, lets you
scale down to zero and is recommended by Google as the ideal serverless platform to use when dependent on Cloud Pub/Sub."If you’re building a simple API (a
small set of functions to be accessed via HTTP or Cloud Pub/Sub), we recommend using Cloud Functions."Ref: https://cloud.google.com/serverless-options
A. Use gcloud to create the new project, and then deploy your application to the new project.
B. Use gcloud to create the new project and to copy the deployed application to the new project.
C. Create a Deployment Manager configuration file that copies the current App Engine deployment into a new project.
D. Deploy your application again using gcloud and specify the project parameter with the new project name to create the new project.
Answer: A
Explanation:
You can deploy to a different project by using –project flag.
By default, the service is deployed the current project configured via:
$ gcloud config set core/project PROJECT
To override this value for a single deployment, use the –project flag:
$ gcloud app deploy ~/my_app/app.yaml –project=PROJECT Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy
A. Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the changes and set up reasonable alerts.
B. Install Kibana on a compute instance. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Pub/Sub. Target the Pub/Sub topic to push messages to the Kibana instance. Analyze the logs on Kibana in real time.
C. Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete events.
D. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud Storage. Use BigQuery to periodically analyze log events in the storage bucket.
Answer: A
Explanation:
This answer is the simplest and most effective way to monitor unexpected firewall changes and instance creation in Google Cloud. Cloud Logging filters allow you
to specify the criteria for the log entries that you want to view or export. You can use the Logging query language to write filters based on the LogEntry fields, such
as resource.type, severity, or protoPayload.methodName. For example, you can filter for firewall-related events by using the following query:
resource.type="gce_subnetwork" logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Ffirewall"
You can filter for instance-related events by using the following query:
resource.type="gce_instance" logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Factivity_log"
You can create log-based metrics from these filters to measure the rate or count of log entries that match the filter. Log-based metrics can be used to create charts
and dashboards in Cloud Monitoring, or to set up alerts based on the metric values. For example, you can create an alert policy that triggers when the log-based
metric for firewall changes exceeds a certain threshold in a given time interval. This way, you can get notified of any unexpected or malicious changes to your
firewall rules.
Option B is incorrect because it is unnecessarily complex and costly. Installing Kibana on a compute instance requires additional configuration and maintenance.
Creating a log sink to forward Cloud Audit Logs to Pub/Sub also incurs additional charges for the Pub/Sub service. Analyzing the logs on Kibana in real time may
not be feasible or efficient, as it requires constant monitoring and manual intervention.
Option C is incorrect because Google Cloud firewall rules logging is a different feature from Cloud Audit Logs. Firewall rules logging allows you to audit, verify, and
analyze the effects of your firewall rules by creating connection records for each rule that applies to traffic. However, firewall rules logging does not log the insert,
update, or delete events for the firewall rules themselves. Those events are logged by Cloud Audit Logs, which record the administrative activities in your Google
Cloud project.
Option D is incorrect because it is not a real-time solution. Creating a log sink to forward Cloud Audit Logs to Cloud Storage requires additional storage space and
charges. Using BigQuery to periodically analyze log events in the storage bucket also incurs additional costs for the BigQuery service. Moreover, this option does
not provide any alerting mechanism to notify you of any unexpected or malicious changes to your firewall rules or instances.
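A sketch of creating such a log-based metric from the CLI (metric name and filter are illustrative):
gcloud logging metrics create firewall-changes --description="Firewall rule changes" --log-filter='resource.type="gce_firewall_rule"'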
Answer: B
Explanation:
https://console.cloud.google.com/marketplace/details/gc-launcher-for-mongodb-atlas/mongodb-atlas
A. Create a billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
B. Grant all engineers permission to create their own billing accounts for each new project.
C. Apply for monthly invoiced billing, and have a single invoice for the project paid by the finance team.
D. Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.
Answer: A
Answer: D
A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
Answer: A
Explanation:
https://cloud.google.com/compute/docs/autoscaler#specifications
Autoscaling works independently from autohealing. If you configure autohealing for your group and an instance fails the health check, the autohealer attempts to
recreate the instance. Recreating an instance can cause the number of instances in the group to fall below the autoscaling threshold (minNumReplicas) that you
specify.
Since we need the application running at all times, we need a minimum of 1 instance.
Because only a single instance of the VM should run, we need a maximum of 1 instance.
We want the application running at all times. If the VM crashes due to any underlying hardware failure, we want another instance to be added to MIG so that
application can continue to serve requests. We can achieve this by enabling autoscaling. The only option that satisfies these three is Set autoscaling to On, set the
minimum number of instances to 1, and then set the maximum number of instances to 1.
Ref: https://cloud.google.com/compute/docs/autoscaler
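A minimal sketch of this configuration with gcloud (the group name, zone, and CPU target are assumptions):
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=us-central1-a \
  --min-num-replicas=1 \
  --max-num-replicas=1 \
  --target-cpu-utilization=0.6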
Answer: C
A. Use gsutil to rewrite the storage class for the bucket. Change the default storage class for the bucket.
B. Use gsutil to rewrite the storage class for the bucket. Set up Object Lifecycle Management on the bucket.
C. Create a new bucket and change the default storage class for the bucket. Set up Object Lifecycle Management on the bucket.
D. Create a new bucket and change the default storage class for the bucket. Import the files from the previous bucket into the new bucket.
Answer: B
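Sketching answer B with gsutil (the bucket name, target classes, and age threshold are assumptions):
# Rewrite existing objects in place to the Nearline storage class
gsutil -m rewrite -s nearline -r gs://my-bucket
# Define a lifecycle rule that later transitions objects to Coldline
cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
           "condition": {"age": 365}}]}
EOF
gsutil lifecycle set lifecycle.json gs://my-bucket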
A. 1. Create an IAM entry for each data scientist's user account.2. Assign the BigQuery jobUser role to the group.
B. 1. Create an IAM entry for each data scientist's user account.2. Assign the BigQuery dataViewer user role to the group.
C. 1. Create a dedicated Google group in Cloud Identity.2. Add each data scientist's user account to the group.3. Assign the BigQuery jobUser role to the group.
D. 1. Create a dedicated Google group in Cloud Identity.2. Add each data scientist's user account to the group.3. Assign the BigQuery dataViewer user role to the
group.
Answer: C
Explanation:
BigQuery Data Viewer (roles/bigquery.dataViewer)
When applied to a table or view, this role provides permissions to read data and metadata from the table or view. This role cannot be applied to individual models or routines.
When applied to a dataset, this role provides permissions to read the dataset's metadata, list tables in the dataset, and read data and metadata from the dataset's tables.
When applied at the project or organization level, this role can also enumerate all datasets in the project. Additional roles, however, are necessary to allow the running of jobs.
Lowest-level resources where you can grant this role: Table, View
BigQuery Job User (roles/bigquery.jobUser)
Provides permissions to run jobs, including queries, within the project.
Lowest-level resources where you can grant this role:
Project
The jobUser role is what grants the ability to run jobs: https://cloud.google.com/bigquery/docs/access-control#bigquery.jobUser. Because dataViewer alone cannot run jobs, jobUser is the role the data scientists need in order to run queries.
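A minimal sketch of answer C's final step (the project ID and group address are assumptions):
gcloud projects add-iam-policy-binding my-project \
  --member="group:data-scientists@example.com" \
  --role="roles/bigquery.jobUser"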
A. Use a Deployment Manager script to automate creating storage buckets in an appropriate region
B. Use a local SSD to improve performance of the VM for the targeted workload
C. Use the gsutil command to create a storage bucket in the same region as the VM
D. Use a Persistent Disk SSD in the same zone as the VM to improve performance of the VM
Answer: A
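Option A could be sketched with a minimal Deployment Manager config (the bucket name, deployment name, and region are assumptions):
cat > bucket.yaml <<'EOF'
resources:
- name: my-regional-bucket        # hypothetical bucket name
  type: storage.v1.bucket
  properties:
    location: us-central1         # region chosen to match the VM
EOF
gcloud deployment-manager deployments create bucket-deployment --config=bucket.yaml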
A. Your zonal capacity is limited, causing all preemptible VMs to be shut down to recover capacity. Try deploying your group to another zone.
B. You have hit your instance quota for the region.
C. Your managed instance group's VMs are toggled to only last 1 minute in preemptible settings.
D. Your managed instance group's health check is repeatedly failing, either due to a misconfigured health check or misconfigured firewall rules not allowing the health check to access the instance.
Answer: D
Explanation:
The instances (normal or preemptible) are terminated and relaunched if the health check fails, either because the application is not configured properly or because firewall rules do not allow the health check probes to reach the instances.
GCP provides health check systems that connect to virtual machine (VM) instances on a configurable, periodic basis. Each connection attempt is called a probe.
GCP records the success or failure of each probe.
Health checks and load balancers work together. Based on a configurable number of sequential successful or failed probes, GCP computes an overall health state
for each VM in the load balancer. VMs that respond successfully for the configured number of times are considered healthy. VMs that fail to respond successfully
for a separate number of times are unhealthy.
GCP uses the overall health state of each VM to determine its eligibility for receiving new requests. In addition to being able to configure probe frequency and
health state thresholds, you can configure the criteria that define a successful probe.
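For example, a firewall rule permitting probes from Google's documented health-check source ranges might look like this (the rule name and port are assumptions):
gcloud compute firewall-rules create allow-health-checks \
  --network=default \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --allow=tcp:80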
A. Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet.
B. Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition.
C. With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet.
D. In the cluster’s definition in Deployment Manager, add a metadata that has kube-system as key and the DaemonSet manifest as value.
Answer: A
Explanation:
Adding an API as a type provider
This page describes how to add an API to Google Cloud Deployment Manager as a type provider. To learn more about types and type providers, read the Types
overview documentation.
A type provider exposes all of the resources of a third-party API to Deployment Manager as base types that you can use in your configurations. These types must
be directly served by a RESTful API that supports Create, Read, Update, and Delete (CRUD).
If you want to use an API that is not automatically provided by Google with Deployment Manager, you must add the API as a type provider.
https://cloud.google.com/deployment-manager/docs/configuration/type-providers/creating-type-provider
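A hedged sketch of registering a cluster's API as a type provider (the provider name, API endpoint, and options file are assumptions):
gcloud beta deployment-manager type-providers create my-gke-provider \
  --descriptor-url="https://$CLUSTER_ENDPOINT/openapi/v2" \
  --api-options-file=options.yaml
Once registered, the cluster's resource types (such as a DaemonSet) can be referenced directly in Deployment Manager configurations.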
A. Create an HTTP load balancer with a backend configuration that references an existing instance group. Set the health check to healthy (HTTP).
B. Create an HTTP load balancer with a backend configuration that references an existing instance group. Define a balancing mode and set the maximum RPS to 10.
C. Create a managed instance group. Set the Autohealing health check to healthy (HTTP).
D. Create a managed instance group. Verify that the autoscaling setting is on.
Answer: C
Explanation:
https://cloud.google.com/compute/docs/instance-groups
https://cloud.google.com/load-balancing/docs/network/transition-to-backend-services#console
In order to enable auto-healing, you need to group the instances into a managed instance group.
Managed instance groups (MIGs) maintain the high availability of your applications by proactively keeping your virtual machine (VM) instances available. An autohealing policy on the MIG relies on an application-based health check to verify that an application is responding as expected. If the autohealer determines that an application isn't responding, the managed instance group automatically recreates that instance.
It is important to use separate health checks for load balancing and for auto-healing. Health checks for load balancing can and should be more aggressive
because these health checks determine whether an instance receives user traffic. You want to catch non-responsive instances quickly, so you can redirect traffic if
necessary. In contrast, health checking for auto-healing causes Compute Engine to proactively replace failing instances, so this health check should be more
conservative than a load balancing health check.
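A minimal sketch of answer C (names, zone, and thresholds are assumptions):
# Create an HTTP health check to be used for autohealing
gcloud compute health-checks create http autohealing-check \
  --port=80 --check-interval=30s --unhealthy-threshold=3
# Attach it to the MIG, allowing time for the app to start before probing
gcloud compute instance-groups managed update my-mig \
  --zone=us-central1-a \
  --health-check=autohealing-check \
  --initial-delay=300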
A. * 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud Project for the Marketing department* 2. Link the new project to a Marketing Billing Account
B. * 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud account* 2. Create a new Google Cloud Project for the Marketing department* 3. Set the default key-value project labels to department: marketing for all services in this project
C. * 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account* 2. Create a new Google Cloud Project for the Marketing department* 3. Link the new project to a Marketing Billing Account.
D. * 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account* 2. Create a new Google Cloud Project for the Marketing department* 3. Set the default key-value project labels to department: marketing for all services in this project
Answer: A
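Answer A's second step might be performed like this (the project ID and billing account ID are assumptions):
gcloud billing projects link marketing-project \
  --billing-account=0X0X0X-0X0X0X-0X0X0X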
A. Create a folder to contain all the dev projects. Create an organization policy to limit resources in US locations.
B. Create an organization to contain all the dev projects. Create an Identity and Access Management (IAM) policy to limit the resources in US regions.
C. Create an Identity and Access Management (IAM) policy to restrict the resource locations in the US. Apply the policy to all dev projects.
D. Create an Identity and Access Management (IAM) policy to restrict the resource locations in all dev projects. Apply the policy to all dev roles.
Answer: C
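For reference, resource locations can be constrained with the organization policy constraint constraints/gcp.resourceLocations, as described in option A; a hedged sketch of applying it to a folder (the folder ID is an assumption):
gcloud resource-manager org-policies allow gcp.resourceLocations in:us-locations \
  --folder=123456789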