Associate Cloud Engineer
https://www.2passeasy.com/dumps/Associate-Cloud-Engineer/
NEW QUESTION 1
Your organization uses Active Directory (AD) to manage user identities. Each user uses this identity for federated access to various on-premises systems. Your
security team has adopted a policy that requires users to log into Google Cloud with their AD identity instead of their own login. You want to follow the
Google-recommended practices to implement this policy. What should you do?
A. Sync Identities with Cloud Directory Sync, and then enable SAML for single sign-on
B. Sync Identities in the Google Admin console, and then enable Oauth for single sign-on
C. Sync identities with 3rd party LDAP sync, and then copy passwords to allow simplified login with the same credentials
D. Sync identities with Cloud Directory Sync, and then copy passwords to allow simplified login with the same credentials.
Answer: A
NEW QUESTION 2
Your coworker has helped you set up several configurations for gcloud. You've noticed that you're running commands against the wrong project. Being new to the
company, you haven't yet memorized any of the projects. With the fewest steps possible, what's the fastest way to switch to the correct configuration?
Answer: C
Explanation:
gcloud config configurations list can be used to check the existing configurations, and gcloud config configurations activate switches to the desired one.
gcloud config configurations list lists existing named configurations.
gcloud config configurations activate activates an existing named configuration.
By contrast, gcloud auth login obtains access credentials for your user account via a web-based authorization flow. When this command completes successfully, it sets the active account in the
current configuration to the account specified. If no configuration exists, it creates a configuration named default.
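A minimal sketch of the switching workflow (the configuration name prod-config is a placeholder):
gcloud config configurations list
gcloud config configurations activate prod-config
gcloud config list   # confirm the active project before running further commands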
NEW QUESTION 3
Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the
solution. What should you do?
Answer: B
NEW QUESTION 4
You are developing a new application and are looking for a Jenkins installation to build and deploy your source code. You want to automate the installation as
quickly and easily as possible. What should you do?
Answer: A
Explanation:
Installing Jenkins
In this section, you use Cloud Marketplace to provision a Jenkins instance. You customize this instance to use the agent image you created in the previous
section.
Go to the Cloud Marketplace solution for Jenkins. Click Launch on Compute Engine.
Change the Machine Type field to 4 vCPUs 15 GB Memory, n1-standard-4.
Machine type selection for Jenkins deployment.
Click Deploy and wait for your Jenkins instance to finish being provisioned. When it is finished, you will see: Jenkins has been deployed.
https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine#installing_jenkins
NEW QUESTION 5
You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new
project, using the fewest possible steps. What should you do?
A. Use gcloud iam roles copy and specify the production project as the destination project.
B. Use gcloud iam roles copy and specify your organization as the destination organization.
C. In the Google Cloud Platform Console, use the ‘create role from role’ functionality.
D. In the Google Cloud Platform Console, use the ‘create role’ functionality and select all applicable permissions.
Answer: A
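A hedged example of option A (the custom role ID and project IDs are placeholders):
gcloud iam roles copy \
--source "projects/dev-project/roles/customAppRole" \
--destination customAppRole \
--dest-project prod-project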
NEW QUESTION 6
Your team is using Linux instances on Google Cloud. You need to ensure that your team logs in to these instances in the most secure and cost efficient way. What
should you do?
A. Attach a public IP to the instances and allow incoming connections from the internet on port 22 for SSH.
B. Use a third party tool to provide remote access to the instances.
C. Use the gcloud compute ssh command with the --tunnel-through-iap flag.
D. Allow ingress traffic from the IP range 35.235.240.0/20 on port 22.
E. Create a bastion host with public internet access.
F. Create the SSH tunnel to the instance through the bastion host.
Answer: D
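A sketch of IAP-based SSH, assuming a firewall rule that permits Google's IAP range (instance, zone, and rule names are placeholders):
gcloud compute firewall-rules create allow-ssh-from-iap \
--direction=INGRESS --action=ALLOW --rules=tcp:22 \
--source-ranges=35.235.240.0/20
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap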
NEW QUESTION 7
Your company runs its Linux workloads on Compute Engine instances. Your company will be working with a new operations partner that does not use Google
Accounts. You need to grant access to the instances to your operations partner so they can maintain the installed tooling. What should you do?
A. Enable Cloud IAP for the Compute Engine instances, and add the operations partner as a Cloud IAP Tunnel User.
B. Tag all the instances with the same network tag.
C. Create a firewall rule in the VPC to grant TCP access on port 22 for traffic from the operations partner to instances with the network tag.
D. Set up Cloud VPN between your Google Cloud VPC and the internal network of the operations partner.
E. Ask the operations partner to generate SSH key pairs, and add the public keys to the VM instances.
Answer: D
Explanation:
IAP controls access to your App Engine apps and Compute Engine VMs running on Google Cloud. It leverages user identity and the context of a request to
determine if a user should be allowed access. IAP is a building block toward BeyondCorp, an enterprise security model that enables employees to work from
untrusted networks without using a VPN.
By default, IAP uses Google identities and IAM. By leveraging Identity Platform instead, you can authenticate users with a wide range of external identity providers,
such as:
Email/password
OAuth (Google, Facebook, Twitter, GitHub, Microsoft, etc.) SAML
OIDC
Phone number Custom Anonymous
This is useful if your application is already using an external authentication system, and migrating your users to Google accounts is impractical.
https://cloud.google.com/iap/docs/using-tcp-forwarding#grant-permission
NEW QUESTION 8
You have an application on a general-purpose Compute Engine instance that is experiencing excessive disk read throttling on its Zonal SSD Persistent Disk. The
application primarily reads large files from disk. The disk size is currently 350 GB. You want to provide the maximum amount of throughput while minimizing costs.
What should you do?
Answer: C
Explanation:
Standard persistent disks are efficient and economical for handling sequential read/write operations, but they aren't optimized to handle high rates of random
input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD persistent disks. SSD persistent disks are designed for single-
digit millisecond latencies. Observed latency is application specific.
NEW QUESTION 9
Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on Compute Engine. Each member
of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and
must be able to determine who accessed a given instance. What should you do?
Answer: C
Explanation:
https://cloud.google.com/compute/docs/instances/managing-instance-access
NEW QUESTION 10
You need to set a budget alert for use of Compute Engine services on one of the three Google Cloud Platform projects that you manage. All three projects are
linked to a single billing account. What should you do?
Answer: A
Explanation:
https://cloud.google.com/iam/docs/understanding-roles#billing-roles
NEW QUESTION 10
Your company wants to standardize the creation and management of multiple Google Cloud resources using Infrastructure as Code. You want to minimize the
amount of repetitive code needed to manage the environment. What should you do?
A. Create a bash script that contains all requirement steps as gcloud commands
B. Develop templates for the environment using Cloud Deployment Manager
C. Use curl in a terminal to send a REST request to the relevant Google API for each individual resource.
D. Use the Cloud Console interface to provision and manage all related resources
Answer: B
Explanation:
You can use Google Cloud Deployment Manager to create a set of Google Cloud resources and manage them as a unit, called a deployment. For example, if your
team's development environment needs two virtual machines (VMs) and a BigQuery database, you can define these resources in a configuration file, and use
Deployment Manager to create, change, or delete these resources. You can make the configuration file part of your team's code repository, so that anyone can
create the same environment with consistent results. https://cloud.google.com/deployment-manager/docs/quickstart
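A minimal Deployment Manager sketch (the project ID, deployment name, and VM properties are illustrative placeholders):
cat > vm.yaml <<'EOF'
resources:
- name: example-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/machineTypes/e2-medium
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/my-project/global/networks/default
EOF
gcloud deployment-manager deployments create example-deployment --config vm.yaml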
NEW QUESTION 13
You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are
the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly. How should you upload the file?
Answer: B
Explanation:
https://cloud.google.com/storage/docs/parallel-composite-uploads https://cloud.google.com/storage/docs/uploads-downloads#parallel-composite-uploads
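A hedged example of a parallel composite upload with gsutil (bucket and file names are placeholders):
gsutil -o GSUtil:parallel_composite_upload_threshold=150M \
cp large-file.dat gs://my-nearline-bucket/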
NEW QUESTION 18
You are managing a Data Warehouse on BigQuery. An external auditor will review your company's processes, and multiple external consultants will need view
access to the data. You need to provide them with view access while following Google-recommended practices. What should you do?
Answer: D
NEW QUESTION 23
You have been asked to migrate a Docker application from your data center to the cloud. Your solution architect has suggested uploading docker images to GCR in one
project and running an application in a GKE cluster in a separate project. You want to store images in the project img-278322 and run the application in the project
prod-278986. You want to tag the image as acme_track_n_trace:v1. You want to follow Google-recommended practices. What should you do?
Answer: B
Explanation:
Running gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1 is the right answer.
This command correctly tags the image as acme_track_n_trace:v1 and uploads the image to the img-278322 project.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit
NEW QUESTION 27
A company wants to build an application that stores images in a Cloud Storage bucket and wants to generate thumbnails as well as resize the images. They want
to use a google managed service that can scale up and scale down to zero automatically with minimal effort. You have been asked to recommend a service.
Which GCP service would you suggest?
Answer: C
Explanation:
Cloud Functions is Google Cloud’s event-driven serverless compute platform. It automatically scales based on the load and requires no additional configuration.
You pay only for the resources used.
Ref: https://cloud.google.com/functions
While all other options, i.e. Google Compute Engine, Google Kubernetes Engine, and Google App Engine, support autoscaling, it needs to be configured explicitly based
on the load and is not as trivial as the scale-up and scale-down offered by Cloud Functions.
NEW QUESTION 31
You have a web application deployed as a managed instance group. You have a new version of the application to gradually deploy. Your web application is
currently receiving live web traffic. You want to ensure that the available capacity does not decrease during the deployment. What should you do?
Answer: B
Explanation:
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#max_
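A sketch of a rolling update that adds capacity before removing old instances (the MIG, template, and zone names are placeholders):
gcloud compute instance-groups managed rolling-action start-update my-mig \
--version=template=my-new-template \
--max-surge=3 --max-unavailable=0 \
--zone=us-central1-a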
NEW QUESTION 36
You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in
the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?
A. Configure username and password by using gcloud config set proxy/username and gcloud config set proxy/password commands.
B. Encode username and password in sha256 encoding, and save it to a text file.
C. Use filename as a value in the gcloud configure set core/custom_ca_certs_file command.
D. Provide values for CLOUDSDK_USERNAME and CLOUDSDK_PASSWORD in the gcloud CLI tool configure file.
E. Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.
Answer: D
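A minimal sketch of setting the proxy credentials through environment variables so they never land in the gcloud configuration or logs (the values are placeholders):
export CLOUDSDK_PROXY_USERNAME=proxy-user
export CLOUDSDK_PROXY_PASSWORD=proxy-pass
gcloud compute instances list   # any subsequent gcloud command picks up these variables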
NEW QUESTION 38
A colleague handed over a Google Cloud Platform project for you to maintain. As part of a security checkup, you want to review who has been granted the Project
Owner role. What should you do?
A. In the console, validate which SSH keys have been stored as project-wide keys.
B. Navigate to Identity-Aware Proxy and check the permissions for these resources.
C. Enable Audit Logs on the IAM & admin page for all resources, and validate the results.
D. Use the command gcloud projects get-iam-policy to view the current role assignments.
Answer: D
Explanation:
A simple approach would be to use the command flags available when listing the IAM policy for a given project. For instance, the following command:
gcloud projects get-iam-policy $PROJECT_ID --flatten="bindings[].members" --format="table(bindings.members)" --filter="bindings.role:roles/owner"
outputs all the users and service accounts associated with the role roles/owner in the project in question.
https://groups.google.com/g/google-cloud-dev/c/Z6sZs7TvygQ?pli=1
NEW QUESTION 40
You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day.
At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the
problem. What should you do?
A. Use the BigQuery interface to review the nightly Job and look for any errors
B. Review the Error Reporting page in the Cloud Console to find any errors.
C. In Cloud Logging create a filter for your Data Studio report
D. Use the open source CLI tool,
E. Snapshot Debugger, to find out why the data was not refreshed correctly.
Answer: D
Explanation:
Cloud Debugger helps inspect the state of an application, at any code location, without stopping or slowing down the running app.
https://cloud.google.com/debugger/docs
NEW QUESTION 43
Your projects incurred more costs than you expected last month. Your research reveals that a development
GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What
should you do?
A. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource.
B. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE Cluster Operations resource.
C. 1. Go to the GKE console, and delete existing clusters.2. Recreate a new cluster.3. Clear the option to enable legacy Stackdriver Logging.
D. 1. Go to the GKE console, and delete existing clusters.2. Recreate a new cluster.3. Clear the option to enable legacy Stackdriver Monitoring.
Answer: A
Explanation:
https://cloud.google.com/logging/docs/api/v2/resource-list
GKE Containers have more log labels than GKE Cluster Operations.
GKE Container:
cluster_name: An immutable name for the cluster the container is running in.
namespace_id: Immutable ID of the cluster namespace the container is running in.
instance_id: Immutable ID of the GCE instance the container is running in.
pod_id: Immutable ID of the pod the container is running in.
container_name: Immutable name of the container.
zone: The GCE zone in which the instance is running.
vs. GKE Cluster Operations:
project_id: The identifier of the GCP project associated with this resource, such as "my-project".
cluster_name: The name of the GKE Cluster.
location: The location in which the GKE Cluster is running.
NEW QUESTION 46
You are using Google Kubernetes Engine with autoscaling enabled to host a new application. You want to expose this new application to the public, using HTTPS
on a public IP address. What should you do?
A. Create a Kubernetes Service of type NodePort for your application, and a Kubernetes Ingress to expose this Service via a Cloud Load Balancer.
B. Create a Kubernetes Service of type ClusterIP for your application.
C. Configure the public DNS name of your application using the IP of this Service.
D. Create a Kubernetes Service of type NodePort to expose the application on port 443 of each node of the Kubernetes cluster.
E. Configure the public DNS name of your application with the IP of every node of the cluster to achieve load-balancing.
F. Create a HAProxy pod in the cluster to load-balance the traffic to all the pods of the application. Forward the public traffic to HAProxy with an iptables rule.
G. Configure the DNS name of your application using the public IP of the node HAProxy is running on.
Answer: A
NEW QUESTION 51
You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to
ensure that the clusters can grow in nodes when needed. What should you do?
A. Create a new subnet in the same region as the subnet being used.
B. Add an alias IP range to the subnet used by the GKE clusters.
C. Create a new VPC, and set up VPC peering with the existing VPC.
D. Expand the CIDR range of the relevant subnet for the cluster.
Answer: D
Explanation:
gcloud compute networks subnets expand-ip-range NAME - expand the IP range of a Compute Engine subnetwork
https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range
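A hedged example of widening the subnet so the clusters can add nodes (the subnet name, region, and new prefix length are placeholders):
gcloud compute networks subnets expand-ip-range my-subnet \
--region=us-central1 --prefix-length=20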
NEW QUESTION 56
You have been asked to create robust Virtual Private Network (VPN) connectivity between a new Virtual Private Cloud (VPC) and a remote site. Key requirements
include dynamic routing, a shared address space of 10.19.0.1/22, and no overprovisioning of tunnels during a failover event. You want to follow
Google-recommended practices to set up a high availability Cloud VPN. What should you do?
A. Use a custom mode VPC network, configure static routes, and use active/passive routing
B. Use an automatic mode VPC network, configure static routes, and use active/active routing
C. Use a custom mode VPC network, use Cloud Router border gateway protocol (BGP) routes, and use active/passive routing
D. Use an automatic mode VPC network, use Cloud Router border gateway protocol (BGP) routes and configure policy-based routing
Answer: C
Explanation:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/best-practices
NEW QUESTION 57
Your auditor wants to view your organization's use of data in Google Cloud. The auditor is most interested in auditing who accessed data in Cloud Storage
buckets. You need to help the auditor access the data they need. What should you do?
A. Assign the appropriate permissions, and then use Cloud Monitoring to review metrics
B. Use the export logs API to provide the Admin Activity Audit Logs in the format they want
C. Turn on Data Access Logs for the buckets they want to audit, and Then build a query in the log viewer that filters on Cloud Storage
D. Assign the appropriate permissions, and then create a Data Studio report on Admin Activity Audit Logs
Answer: C
Explanation:
Types of audit logs: Cloud Audit Logs provides the following audit logs for each Cloud project, folder, and organization: Admin Activity audit logs, Data Access audit logs, System Event audit logs, and Policy Denied audit logs.
Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
https://cloud.google.com/logging/docs/audit#types
https://cloud.google.com/logging/docs/audit#data-access
Cloud Storage: When Cloud Storage usage logs are enabled, Cloud Storage writes usage data to the Cloud Storage bucket, which generates Data Access audit logs for the bucket. The generated Data Access audit log has its caller identity redacted.
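Once Data Access logs are enabled on the buckets, the auditor (given appropriate log-viewing permissions) can filter them with a query along these lines (the filter is a sketch; the full log name also includes the project ID):
gcloud logging read \
'logName:"cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket"' \
--limit=10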
NEW QUESTION 62
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will
run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?
Answer: B
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns
In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you
add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed, so this is a perfect fit for our monitoring pod.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples
of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd. For example, you could have
DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use
different configurations for different hardware types and resource needs.
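A minimal DaemonSet sketch for such an agent (the manifest name and the third-party agent image are placeholders):
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metrics-agent
spec:
  selector:
    matchLabels:
      app: metrics-agent
  template:
    metadata:
      labels:
        app: metrics-agent
    spec:
      containers:
      - name: agent
        image: example.com/metrics-agent:latest   # placeholder third-party monitoring agent
EOF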
NEW QUESTION 64
You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service
account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment. What should you do?
Answer: B
NEW QUESTION 67
Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google’s recommended practices. Which
storage option should you use?
A. Multi-Regional Storage
B. Regional Storage
C. Nearline Storage
D. Coldline Storage
Answer: D
NEW QUESTION 71
You are in charge of provisioning access for all Google Cloud users in your organization. Your company recently acquired a startup company that has their own
Google Cloud organization. You need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the startup company's
organization as in your own organization. What should you do?
A. In the Google Cloud console for your organization, select Create role from selection, and choose destination as the startup company's organization
B. In the Google Cloud console for the startup company, select Create role from selection and choose source as the startup company's Google Cloud organization.
C. Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud Organization as the destination.
D. Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup company's organization as the destination.
Answer: C
Explanation:
https://cloud.google.com/architecture/best-practices-vpc-design#shared-service Cloud VPN is another alternative. Because Cloud VPN establishes reachability
through managed IPsec tunnels, it doesn't have the aggregate limits of VPC Network Peering. Cloud VPN uses a VPN Gateway for connectivity and doesn't
consider the aggregate resource use of the IPsec peer. The drawbacks of Cloud VPN include increased costs (VPN tunnels and traffic egress), management
overhead required to maintain tunnels, and the performance overhead of IPsec.
NEW QUESTION 72
You have a single binary application that you want to run on Google Cloud Platform. You decided to automatically scale the application based on underlying
infrastructure CPU usage. Your organizational policies require you to use virtual machines directly. You need to ensure that the application scaling is operationally
efficient and completed as quickly as possible. What should you do?
A. Create a Google Kubernetes Engine cluster, and use horizontal pod autoscaling to scale the application.
B. Create an instance template, and use the template in a managed instance group with autoscaling configured.
C. Create an instance template, and use the template in a managed instance group that scales up and down based on the time of day.
D. Use a set of third-party tools to build automation around scaling the application up and down, based on Stackdriver CPU usage monitoring.
Answer: B
Explanation:
Managed instance groups offer autoscaling capabilities that let you automatically add or delete instances from a managed instance group based on increases or
decreases in load (CPU Utilization in this case). Autoscaling helps your apps gracefully handle increases in traffic and reduce costs when the need for resources is
lower. You define the autoscaling policy and the autoscaler performs automatic scaling based on the measured load (CPU Utilization in this case). Autoscaling
works by adding more instances to your instance group when there is more load (upscaling), and deleting instances when the need for instances is lowered
(downscaling). Ref: https://cloud.google.com/compute/docs/autoscaler
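A hedged sketch of configuring CPU-based autoscaling on a managed instance group (names and thresholds are placeholders):
gcloud compute instance-groups managed set-autoscaling my-mig \
--zone=us-central1-a \
--min-num-replicas=2 --max-num-replicas=10 \
--target-cpu-utilization=0.6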
NEW QUESTION 76
Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to maintain the number of running
instances specified by the template to be able to process expected application traffic. What should you do?
A. Create an instance template that contains valid syntax which will be used by the instance group.
B. Delete any persistent disks with the same name as instance names.
C. Create an instance template that contains valid syntax that will be used by the instance group.
D. Verify that the instance name and persistent disk name values are not the same in the template.
E. Verify that the instance template being used by the instance group contains valid syntax.
F. Delete any persistent disks with the same name as instance names.
G. Set the disks.autoDelete property to true in the instance template.
H. Delete the current instance template and replace it with a new instance template.
I. Verify that the instance name and persistent disk name values are not the same in the template.
J. Set the disks.autoDelete property to true in the instance template.
Answer: A
Explanation:
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-migs
https://cloud.google.com/compute/docs/instance-templates#how_to_update_instance_templates
NEW QUESTION 80
You are building an application that processes data files uploaded from thousands of suppliers. Your primary goals for the application are data security and the
expiration of aged data. You need to design the application to:
•Restrict access so that suppliers can access only their own data.
•Give suppliers write access to data only for 30 minutes.
•Delete data that is over 45 days old.
You have a very short development cycle, and you need to make sure that the application requires minimal maintenance. Which two strategies should you use?
(Choose two.)
Answer: AB
Explanation:
(A) Object Lifecycle Management Delete
The Delete action deletes an object when the object meets all conditions specified in the lifecycle rule.
Exception: In buckets with Object Versioning enabled, deleting the live version of an object causes it to become a noncurrent version, while deleting a noncurrent
version deletes that version permanently.
https://cloud.google.com/storage/docs/lifecycle#delete
(B) Signed URLs
This page provides an overview of signed URLs, which you use to give time-limited resource access to anyone in possession of the URL, regardless of whether
they have a Google account
https://cloud.google.com/storage/docs/access-control/signed-urls
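A hedged example of generating a 30-minute signed URL for a supplier's upload (the service account key path, bucket, and object name are placeholders):
gsutil signurl -m PUT -d 30m /path/to/service-account-key.json \
gs://supplier-bucket/supplier-123/upload.csv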
NEW QUESTION 81
You are hosting an application on bare-metal servers in your own data center. The application needs access to Cloud Storage. However, security policies prevent
the servers hosting the application from having public IP addresses or access to the internet. You want to follow Google-recommended practices to provide the
application with access to Cloud Storage. What should you do?
A. 1. Use nslookup to get the IP address for storage.googleapis.com.2. Negotiate with the security team to be able to give a public IP address to the servers.3.
Only allow egress traffic from those servers to the IP addresses for storage.googleapis.com.
B. 1. Using Cloud VPN, create a VPN tunnel to a Virtual Private Cloud (VPC) in Google Cloud Platform (GCP).2. In this VPC, create a Compute Engine instance
and install the Squid proxy server on this instance.3. Configure your servers to use that instance as a proxy to access Cloud Storage.
C. 1. Use Migrate for Compute Engine (formerly known as Velostrata) to migrate those servers to Compute Engine.2. Create an internal load balancer (ILB) that
uses storage.googleapis.com as backend.3. Configure your new instances to use this ILB as proxy.
D. 1. Using Cloud VPN or Interconnect, create a tunnel to a VPC in GCP.2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30.
Announce that network to your on-premises network through the VPN tunnel.3. In your on-premises network, configure your DNS server to
resolve *.googleapis.com as a CNAME to restricted.googleapis.com.
Answer: D
Explanation:
Our requirement is to follow Google recommended practices to achieve the end result. Configuring Private Google Access for On-Premises Hosts is best achieved
by VPN/Interconnect + Advertise Routes + Use restricted Google IP Range.
Using Cloud VPN or Interconnect, create a tunnel to a VPC in GCP
Using Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network through the VPN tunnel.
In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com is the right answer right, and it
is what Google recommends.
Ref: https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid
You must configure routes so that Google API traffic is forwarded through your Cloud VPN or Cloud Interconnect connection, firewall rules on your on-premises
firewall to allow the outgoing traffic, and DNS so that traffic to Google APIs resolves to the IP range you've added to your routes.
You can use Cloud Router Custom Route Advertisement to announce the Restricted Google APIs IP addresses through Cloud Router to your on-premises
network. The Restricted Google APIs IP range is 199.36.153.4/30. While this is technically a public IP range, Google does not announce it publicly. This IP range
is only accessible to hosts that can reach your Google Cloud projects through internal IP ranges, such as through a Cloud VPN or Cloud Interconnect connection.
Without having a public IP address or access to the internet, the only way you could connect to cloud storage is if you have an internal route to it.
So Negotiate with the security team to be able to give public IP addresses to the servers is not right.
Following Google recommended practices is synonymous with using Google's services (not quite, but it is at least for the exam!).
So In this VPC, create a Compute Engine instance and install the Squid proxy server on this instance is not right.
Migrating the VM to Compute Engine is a bit drastic when Google says it is perfectly fine to have Hybrid Connectivity architectures
https://cloud.google.com/hybrid-connectivity.
So, Use Migrate for Compute Engine (formerly known as Velostrata) to migrate these servers to Compute Engine is not right.
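A hedged sketch of the custom route advertisement for the restricted range (the router name and region are placeholders):
gcloud compute routers update my-router --region=us-central1 \
--advertisement-mode=CUSTOM \
--set-advertisement-groups=ALL_SUBNETS \
--set-advertisement-ranges=199.36.153.4/30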
NEW QUESTION 83
Your customer has implemented a solution that uses Cloud Spanner and notices some read latency-related performance issues on one table. This table is
accessed only by their users using a primary key. The table schema is shown below.
A. Option A
B. Option B
C. Option C
D. Option D
Answer: C
Explanation:
As mentioned in Schema and data model, you should be careful when choosing a primary key to not accidentally create hotspots in your database. One cause of
hotspots is having a column whose value monotonically increases as the first key part, because this results in all inserts occurring at the end of your key space.
This pattern is undesirable because Cloud Spanner divides data among servers by key ranges, which means all your inserts will be directed at a single server that
will end up doing all the work. https://cloud.google.com/spanner/docs/schema-design#primary-key-prevent-hotspots
NEW QUESTION 86
You need to create a new billing account and then link it with an existing Google Cloud Platform project. What should you do?
A. Verify that you are Project Billing Manager for the GCP project.
B. Update the existing project to link it to the existing billing account.
C. Verify that you are Project Billing Manager for the GCP project.
D. Create a new billing account and link the new billing account to the existing project.
E. Verify that you are Billing Administrator for the billing account.
F. Create a new project and link the new project to the existing billing account.
G. Verify that you are Billing Administrator for the billing account.
H. Update the existing project to link it to the existing billing account.
Answer: B
Explanation:
Billing Administrators cannot create a new billing account, and the project is presumably already created. Project Billing Manager allows you to link the created
billing account to the project. The question is vague on how the billing account gets created, but by process of elimination this is the best fit.
NEW QUESTION 89
You have deployed multiple Linux instances on Compute Engine. You plan on adding more instances in the coming weeks. You want to be able to access all of
these instances through your SSH client over the Internet without having to configure specific access on the existing and new instances. You do not want the
Compute Engine instances to have a public IP. What should you do?
Answer: B
Explanation:
https://cloud.google.com/iap/docs/using-tcp-forwarding
NEW QUESTION 90
You need to enable traffic between multiple groups of Compute Engine instances that are currently running two different GCP projects. Each group of Compute
Engine instances is running in its own VPC. What should you do?
Answer: B
Explanation:
Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate
with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one
or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use
subnets in the Shared VPC network
https://cloud.google.com/vpc/docs/shared-vpc
"For example, an existing instance in a service project cannot be reconfigured to use a Shared VPC network, but a new instance can be created to use available
subnets in a Shared VPC network."
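If the Shared VPC route is chosen, a hedged sketch of the setup commands (host and service project IDs are placeholders):
gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-id \
--host-project=host-project-id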
NEW QUESTION 92
Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the
application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a
Google Kubernetes Engine cluster while optimizing for cost. What should you do?
Answer: C
NEW QUESTION 96
You are configuring Cloud DNS. You want to create DNS records to point home.mydomain.com, mydomain.com, and www.mydomain.com to the IP address of
your Google Cloud load balancer. What should you do?
A. Create one CNAME record to point mydomain.com to the load balancer, and create two A records to point WWW and HOME to mydomain.com respectively.
B. Create one CNAME record to point mydomain.com to the load balancer, and create two AAAA records to point WWW and HOME to mydomain.com
respectively.
C. Create one A record to point mydomain.com to the load balancer, and create two CNAME records to point WWW and HOME to mydomain.com respectively.
D. Create one A record to point mydomain.com to the load balancer, and create two NS records to point WWW and HOME to mydomain.com respectively.
Answer: C
NEW QUESTION 97
Your company has a 3-tier solution running on Compute Engine. The configuration of the current infrastructure is shown below.
Each tier has a service account that is associated with all instances within it. You need to enable communication on TCP port 8080 between tiers as follows:
• Instances in tier #1 must communicate with tier #2.
• Instances in tier #2 must communicate with tier #3.
What should you do?
A. 1. Create an ingress firewall rule with the following settings:• Targets: all instances• Source filter: IP ranges (with the range set to 10.0.2.0/24)• Protocols: allow
all2. Create an ingress firewall rule with the following settings:• Targets: all instances• Source filter: IP ranges (with the range set to 10.0.1.0/24)•Protocols: allow
all
B. 1. Create an ingress firewall rule with the following settings:• Targets: all instances with tier #2 service account• Source filter: all instances with tier #1 service
account• Protocols: allow TCP:80802. Create an ingress firewall rule with the following settings:• Targets: all instances with tier #3 service account• Source filter:
all instances with tier #2 service account• Protocols: allow TCP: 8080
C. 1. Create an ingress firewall rule with the following settings:• Targets: all instances with tier #2 service account• Source filter: all instances with tier #1 service
account• Protocols: allow all2. Create an ingress firewall rule with the following settings:• Targets: all instances with tier #3 service account• Source filter: all
instances with tier #2 service account• Protocols: allow all
D. 1. Create an egress firewall rule with the following settings:• Targets: all instances• Source filter: IP ranges (with the range set to 10.0.2.0/24)• Protocols: allow
TCP: 80802. Create an egress firewall rule with the following settings:• Targets: all instances• Source filter: IP ranges (with the range set to 10.0.1.0/24)•
Protocols: allow TCP: 8080
Answer: B
Explanation:
1. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #2 service account • Source filter: all instances with tier #1 service account • Protocols: allow TCP:8080
2. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #3 service account • Source filter: all instances with tier #2 service account • Protocols: allow TCP:8080
A. Fill in local SSD.
Answer: A
Explanation:
https://cloud.google.com/compute/docs/disks/local-ssd
A. * 1. Create a Cloud Monitoring Dashboard. * 2. Collect metrics and publish them into the Pub/Sub topics. * 3. Add CPU and network charts for each of the three projects.
B. * 1. Create a Cloud Monitoring Dashboard. * 2. Select the CPU and Network metrics from the three projects. * 3. Add CPU and network charts for each of the three projects.
C. * 1. Create a Service Account and apply roles/viewer on the three projects. * 2. Collect metrics and publish them to the Cloud Monitoring API. * 3. Add CPU and network charts for each of the three projects.
D. * 1. Create a fourth Google Cloud project. * 2. Create a Cloud Workspace from the fourth project and add the other three projects.
Answer: B
Answer: C
Answer: A
A. Use the gcloud CLI services enable cloudresourcemanager.googleapis.com command to enable all resources.
B. Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com
command to enable the Cloud Storage APIs.
C. Open the Google Cloud console and enable all Google Cloud APIs from the API dashboard.
D. Open the Google Cloud console and run gcloud init --project <project-id> in a Cloud Shell.
Answer: B
A. * Create service accounts sa-app and sa-db.• Associate service account: sa-app with the application servers and the service account sa-db with the database
servers.• Create an ingress firewall rule to allow network traffic from source service account sa-app to target service account sa-db.
B. • Create network tags app-server and db-server. • Add the app-server tag to the application servers and the db-server tag to the database servers. • Create an
egress firewall rule to allow network traffic from source network tag app-server to target network tag db-server.
C. * Create a service account sa-app and a network tag db-server. * Associate the service account sa-app with the application servers and the network tag db-
server with the database servers. • Create an ingress firewall rule to allow network traffic from source VPC IP addresses and target the subnet-a IP addresses.
D. • Create a network tag app-server and service account sa-db. • Add the tag to the application servers and associate the service account with the database
servers. • Create an egress firewall rule to allow network traffic from source network tag app-server to target service account sa-db.
Answer: C
A. Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis.
B. Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis.
C. Export your transactions to a local file, and perform analysis with a desktop tool.
D. Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.
Answer: D
Explanation:
"...we recommend that you enable Cloud Billing data export to BigQuery at the same time that you create a Cloud Billing account. "
https://cloud.google.com/billing/docs/how-to/export-data-bigquery
https://medium.com/google-cloud/analyzing-google-cloud-billing-data-with-big-query-30bae1c2aae4
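A sketch of a time-window query over the exported billing data (the dataset and table name below are placeholders; the real export table name includes your billing account ID):
bq query --use_legacy_sql=false '
SELECT service.description AS service, SUM(cost) AS total_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
WHERE usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY service
ORDER BY total_cost DESC'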
A. Create a Data Studio dashboard that uses the related BigQuery tables as a source and give the BI team view access to the Data Studio dashboard.
B. Create a Service Account for the BI team and distribute a new private key to each member of the BI team.
C. Use Cloud Scheduler to schedule a batch Dataflow job to copy the data from BigQuery to the BI team's internal data warehouse.
D. Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team.
Answer: D
Explanation:
When applied to a dataset, this role provides the ability to read the dataset's metadata and list tables in the dataset. When applied to a project, this role also
provides the ability to run jobs, including queries, within the project. A member with this role can enumerate their own jobs, cancel their own jobs, and enumerate
datasets within a project. Additionally, allows the creation of new datasets within the project; the creator is granted the BigQuery Data Owner role
(roles/bigquery.dataOwner) on these new datasets.
https://cloud.google.com/bigquery/docs/access-control
Answer: C
Explanation:
Keywords: financial data (large data) used globally; data stored and queried using a relational structure (SQL); clients should get exact identical copies (strong consistency); multiple regions; low latency to end users. Select the storage option that minimizes latency.
Answer: B
D. Write a Cloud Dataflow job that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find
employee_ssn column.
Answer: A
Explanation:
https://cloud.google.com/bigquery/docs/quickstarts/quickstart-web-ui?authuser=4
A. Update your web application to use the protocol HTTP/2 instead of HTTP/1.1
B. Set the concurrency number to 1 for your Cloud Run service.
C. Set the maximum number of instances for your Cloud Run service to 100.
D. Set the minimum number of instances for your Cloud Run service to 3.
Answer: D
Answer: B
Explanation:
Navigate to Stackdriver Logging and select resource.labels.project_id=*. is not right.
Log entries are held in Stackdriver Logging for a limited time known as the retention period which is 30 days (default configuration). After that, the entries are
deleted. To keep log entries longer, you need to export them outside of Stackdriver Logging by configuring log sinks.
Ref: https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-cloud-audit-logging
Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days. is not right.
While this works, it makes no sense to use a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery when Google provides a feature (export sinks) that does exactly the same thing and works out of the box. Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days. is not right.
You can export logs by creating one or more sinks that include a logs query and an export destination. Supported destinations for exported log entries are Cloud
Storage, BigQuery, and
Pub/Sub. Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was created: a Google Cloud project, organization, folder, or billing account. If
you need to export from all projects of an organization, you can create an aggregated sink that can export log entries from all the projects, folders, and
billing accounts of a Google Cloud organization. Ref: https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in Cloud Storage, but querying logs information from Cloud Storage is harder than Querying information from BigQuery dataset.
For this reason, we should prefer Big Query over Cloud Storage.
Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days. is the right answer.
Either way, we now have the data in a BigQuery Dataset. Querying information from a Big Query dataset is easier and quicker than analyzing contents in Cloud
Storage bucket. As our requirement is to Quickly analyze the log contents, we should prefer Big Query over Cloud Storage.
Also, You can control storage costs and optimize storage usage by setting the default table expiration for newly created tables in a dataset. If you set the property
when the dataset is created, any table created in the dataset is deleted after the expiration period. If you set the property after the dataset is created, only new
tables are deleted after the expiration period. For example, if you set the default table expiration to 7 days, older data is automatically deleted after 1 week. Ref:
https://cloud.google.com/bigquery/docs/best-practices-storage
A. Meet with the cloud enablement team to discuss load balancer options.
B. Redesign the application to use a distributed user session service that does not rely on WebSockets and HTTP sessions.
C. Review the encryption requirements for WebSocket connections with the security team.
D. Convert the WebSocket code to use HTTP streaming.
Answer: A
Explanation:
Google HTTP(S) Load Balancing has native support for the WebSocket protocol when you use HTTP or HTTPS, not HTTP/2, as the protocol to the backend.
Ref: https://cloud.google.com/load-balancing/docs/https#websocket_proxy_support
We don't need to convert WebSocket code to use HTTP streaming or Redesign the application, as
WebSocket support is offered by Google HTTP(S) Load Balancing. Reviewing the encryption requirements is a good idea but it has nothing to do with
WebSockets.
Answer: D
Explanation:
https://cloud.google.com/monitoring/audit-logging
A. Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts
when running the commands to start the Compute Engine instances.
B. Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances.
C. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances.
D. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.
Answer: A
Explanation:
"Run gcloud configurations list to start the Compute Engine instances". How the heck are you expecting to "start" GCE instances doing "configuration list".
Each gcloud configuration has a 1-to-1 relationship with the region (if a region is defined). Since we have two different regions, we would need to create two
separate configurations using gcloud config configurations create. Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/create
Secondly, you can activate each configuration independently by running gcloud config configurations activate [NAME]. Ref:
https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate
Finally, while each configuration is active, you can run the gcloud compute instances start [NAME] command to start the instance in the configuration's
region. https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
A. Create a cron job that runs on a scheduled basis to review stackdriver monitoring metrics, and then resize the Spanner instance accordingly.
B. Create a Stackdriver alerting policy to send an alert to oncall SRE emails when Cloud Spanner CPU exceeds the threshold.
C. SREs would scale resources up or down accordingly.
D. Create a Stackdriver alerting policy to send an alert to Google Cloud Support email when Cloud Spanner CPU exceeds your threshold.
E. Google support would scale resources up or down accordingly.
F. Create a Stackdriver alerting policy to send an alert to a webhook when Cloud Spanner CPU is over or under your threshold.
G. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.
Answer: D
Explanation:
CPU utilization is a recommended proxy for traffic when it comes to Cloud Spanner. See "Alerts for high CPU utilization": Google's documentation specifies
recommended maximum CPU usage for both single-region and multi-region instances. These numbers are to ensure that your instance
has enough compute capacity to continue to serve your traffic in the event of the loss of an entire zone (for single-region instances) or an entire region (for multi-
region instances). https://cloud.google.com/spanner/docs/cpu-utilization
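A minimal sketch of the resizing step the Cloud Function would perform (the instance name and node count are placeholders):
gcloud spanner instances update my-instance --nodes=5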
Answer: A
A. Create a dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.
B. Create a dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.
C. Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.
D. Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable.
E. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.
Answer: D
Explanation:
"The Cloud Spanner to Cloud Storage Text template is a batch pipeline that reads in data from a Cloud
Spanner table, optionally transforms the data via a JavaScript User Defined Function (UDF) that you provide, and writes it to Cloud Storage as CSV text files."
https://cloud.google.com/dataflow/docs/guides/templates/provided-batch#cloudspannertogcstext
"The Dataflow connector for Cloud Spanner lets you read data from and write data to Cloud Spanner in a Dataflow pipeline"
https://cloud.google.com/spanner/docs/dataflow-connector https://cloud.google.com/bigquery/external-data-sources
Answer: A
Explanation:
Ref: https://cloud.google.com/compute/docs/machine-types#n1_machine_type
A. Deployment Manager
B. Cloud Composer
C. Managed Instance Group
D. Unmanaged Instance Group
Answer: A
Explanation:
https://cloud.google.com/deployment-manager/docs/configuration/create-basic-configuration
A. Load data in Cloud Datastore and run a SQL query against it.
B. Create a BigQuery table and load data in BigQuery.
C. Run a SQL query on this table and drop this table after you complete your request.
D. Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on these external tables to complete your request.
E. Create a Hadoop cluster and copy the AVRO file to HDFS by compressing it.
F. Load the file in a hive table and provide access to your analysts so that they can run SQL queries.
Answer: C
Explanation:
https://cloud.google.com/bigquery/external-data-sources
An external data source is a data source that you can query directly from BigQuery, even though the data is not stored in BigQuery storage.
BigQuery supports the following external data sources: Amazon S3, Azure Storage, Cloud Bigtable, Cloud Spanner, Cloud SQL, Cloud Storage, and Drive.
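A hedged sketch of querying an Avro file in Cloud Storage through a BigQuery external table (bucket, dataset, and table names are placeholders):
bq mkdef --source_format=AVRO "gs://my-bucket/data/*.avro" > table_def.json
bq mk --external_table_definition=table_def.json mydataset.avro_external
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM mydataset.avro_external'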
Answer:
A. Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.
B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPU
D. Dedicate this cluster to your ML team.
E. Add a new, GPU-enabled node pool to the GKE cluster.
F. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
Answer: D
Explanation:
This is the most optimal solution. Rather than recreating all nodes, you create a new node pool with GPUs enabled. You then modify the Pod specification to target
particular GPU types by adding a node selector to your workload's Pod specification. You still have a single cluster, so you pay the Kubernetes cluster management fee
for just one cluster, thus minimizing the cost.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus Ref: https://cloud.google.com/kubern
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-pod
spec:
  containers:
  - name: my-gpu-container
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    command: ["/bin/bash"]
    resources:
      limits:
        nvidia.com/gpu: 2
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-k80 # or nvidia-tesla-p100 or nvidia-tesla-p4 or nvidia-tesla-v100 or nvidia-tesla-t4
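A hedged sketch of creating the GPU-enabled node pool itself (cluster, pool, zone, and accelerator values are placeholders):
gcloud container node-pools create gpu-pool \
--cluster=my-cluster --zone=us-central1-a \
--accelerator=type=nvidia-tesla-p100,count=1 \
--num-nodes=1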
A. The pending Pod's resource requests are too large to fit on a single node of the cluster.
B. Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.
C. The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.
D. The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods' status.
E. It is currently being rescheduled on a new node.
Answer: B
Explanation:
The pending Pod's resource requests are too large to fit on a single node of the cluster. Too many Pods are already running in the cluster, and there are not
enough resources left to schedule the pending Pod. is the right answer.
When you have a deployment with some pods in running and other pods in the pending state, more often than not it is a problem with resources on the nodes.
Here's a sample output of this use case. We see that the problem is with insufficient CPU on the Kubernetes nodes, so we have to either enable autoscaling or
manually scale up the nodes.
Answer: C
A. Select Cloud SQL (MySQL). Verify that the enable binary logging option is selected.
B. Select Cloud SQL (MySQL). Select the create failover replicas option.
C. Select Cloud Spanner.
D. Set up your instance with 2 nodes.
E. Select Cloud Spanner.
F. Set up your instance as multi-regional.
Answer: A
A. Use Binary Authorization and whitelist only the container images used by your customers’ Pods.
B. Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’ Pods.
C. Create a GKE node pool with a sandbox type configured to gVisor. Add the parameter runtimeClassName: gvisor to the specification of your customers’ Pods.
D. Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.
Answer: C
A. Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Schedule the script with cron.
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-json-file.
C. Write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-xml-file.
D. Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Repeat this process every morning.
Answer: B
Explanation:
You write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-xml-file. is not right.
gsutil lifecycle set enables you to set the lifecycle configuration on one or more buckets based on the configuration file provided. However, XML is not a valid
supported type for the configuration file.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Repeat this process every morning. is not right.
This manual approach is error-prone, time-consuming and expensive. GCP Cloud Storage provides lifecycle management rules that let you achieve this with
minimal effort.
Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Schedule the script with cron. is not right.
This manual approach is error-prone, time-consuming and expensive. GCP Cloud Storage provides lifecycle management rules that let you achieve this with
minimal effort.
Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-json-file. is the right answer.
You can assign a lifecycle management configuration to a bucket. The configuration contains a set of rules which apply to current and future objects in the bucket.
When an object meets the criteria of one of the rules, Cloud Storage automatically performs a specified action on the object. One of the supported actions is to
Delete objects. You can set up a lifecycle management rule to delete objects older than 90 days. gsutil lifecycle set enables you to set the lifecycle configuration on the
bucket based on the configuration file. JSON is the only supported type for the configuration file. The config-json-file specified on the command line should be a
path to a local file containing the lifecycle configuration JSON document.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle Ref: https://cloud.google.com/storage/docs/lifecycle
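As a minimal sketch of this approach (the local file name is arbitrary; the bucket name is taken from the question):
# Create the lifecycle configuration that deletes objects older than 90 days:
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 90}
    }
  ]
}
EOF
# Apply it to the bucket:
gsutil lifecycle set lifecycle.json gs://myapp-gcp-ace-logs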
A. Configure Billing Data Export to BigQuery and visualize the data in Data Studio.
B. Visit the Cost Table page to get a CSV export and visualize it using Data Studio.
C. Fill all resources in the Pricing Calculator to get an estimate of the monthly cost.
D. Use the Reports view in the Cloud Billing Console to view the desired cost information.
Answer: A
Explanation:
https://cloud.google.com/billing/docs/how-to/export-data-bigquery "Cloud Billing export to BigQuery enables you to export detailed Google Cloud billing data (such
as usage, cost estimates, and pricing data) automatically throughout the day to a BigQuery dataset that you specify."
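Once the export is enabled, the exported data can be queried per service or per project with the bq tool; the project, dataset, and export table names below are placeholders that depend on your billing account:
# Summarize last month's cost per service from the billing export table (standard SQL):
bq query --use_legacy_sql=false \
'SELECT service.description AS service, SUM(cost) AS total_cost
 FROM `my-project.billing_dataset.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
 WHERE invoice.month = "202401"
 GROUP BY service
 ORDER BY total_cost DESC'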
Answer: A
Explanation:
https://cloud.google.com/spanner/docs/getting-started/set-up
Answer: B
Answer: D
Answer: D
Explanation:
Google Cloud Coldline is a cold-tier storage class for archival data that is accessed infrequently. Unlike many other cold storage options, Coldline has no delay prior to data access, which makes it a leading option among competitors.
The current description of the Coldline storage class is as follows:
Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard
Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable
trade-offs for lowered at-rest storage costs.
Coldline Storage is ideal for data you plan to read or modify at most once a quarter. Note, however, that for data being kept entirely for backup or archiving
purposes, Archive Storage is more cost-effective, as it offers the lowest storage costs.
https://cloud.google.com/storage/docs/storage-classes#coldline
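For illustration, a bucket that defaults to Coldline can be created with gsutil; the bucket name and location below are placeholders:
# Create a bucket whose default storage class is Coldline:
gsutil mb -c coldline -l us-central1 gs://my-archive-bucket-example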
Answer: D
Explanation:
Cloud Functions is Google Cloud’s event-driven serverless compute platform that lets you run your code locally or in the cloud without having to provision servers. Cloud Functions scales up or down, so you pay only for the compute resources you use. Cloud Functions has excellent integration with Cloud Pub/Sub, lets you scale down to zero, and is recommended by Google as the ideal serverless platform to use when dependent on Cloud Pub/Sub: "If you’re building a simple API (a small set of functions to be accessed via HTTP or Cloud Pub/Sub), we recommend using Cloud Functions."
Ref: https://cloud.google.com/serverless-options
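As a minimal sketch of the Pub/Sub integration (function, topic, entry point, and region names are placeholders; run from the directory containing your function source):
# Deploy a Cloud Function that is triggered by messages published to a Pub/Sub topic:
gcloud functions deploy process-events \
    --runtime=python310 \
    --trigger-topic=my-events-topic \
    --entry-point=handle_message \
    --region=us-central1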
Answer: B
Explanation:
https://console.cloud.google.com/marketplace/details/gc-launcher-for-mongodb-atlas/mongodb-atlas
Answer: A
Explanation:
https://cloud.google.com/resource-manager/docs/project-migration#oauth_consent_screen https://cloud.google.com/resource-manager/docs/project-migration
A. Create a Billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
B. Grant all engineers permission to create their own billing accounts for each new project.
C. Apply for monthly invoiced billing, and have a single invoice for the project paid by the finance team.
D. Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.
Answer: A
A. * 1. Create a configuration for each project you need to manage.* 2. Activate the appropriate configuration when you work with each of your assigned GCP
projects.
B. * 1. Create a configuration for each project you need to manage.* 2. Use gcloud init to update the configuration values when you need to work with a non-default
project
C. * 1. Use the default configuration for one project you need to manage.* 2. Activate the appropriate configuration when you work with each of your assigned GCP
projects.
D. * 1. Use the default configuration for one project you need to manage.* 2. Use gcloud init to update the configuration values when you need to work with a non-
default project.
Answer: A
Explanation:
https://cloud.google.com/sdk/gcloud https://cloud.google.com/sdk/docs/configurations#multiple_configurations
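A minimal sketch of this workflow with hypothetical configuration and project names:
# Create and populate a named configuration per project (a new configuration becomes active when created):
gcloud config configurations create dev-config
gcloud config set project my-dev-project
gcloud config configurations create prod-config
gcloud config set project my-prod-project
# List the configurations and switch to the one you need:
gcloud config configurations list
gcloud config configurations activate dev-config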
A. Create a Service Account in your own project, and grant this Service Account access to BigQuery in your project
B. Create a Service Account in your own project, and ask the partner to grant this Service Account access to BigQuery in their project
C. Ask the partner to create a Service Account in their project, and have them give the Service Account access to BigQuery in their project
D. Ask the partner to create a Service Account in their project, and grant their Service Account access to the BigQuery dataset in your project
Answer: D
Explanation:
https://gtseres.medium.com/using-service-accounts-across-projects-in-gcp-cf9473fef8f0#:~:text=Go%20to%20t
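A rough sketch of this setup with placeholder project and service-account names; granting access at the dataset level instead of the project level would scope the permission more narrowly:
# Partner side (their project): create the service account.
gcloud iam service-accounts create bq-reader \
    --project=partner-project \
    --display-name="BigQuery reader for shared dataset"
# Your side (your project): grant that service account read access to BigQuery data.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:bq-reader@partner-project.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"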
You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of
the VM should run per GCP project. How should you configure the instance group?
A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1.
C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.
Answer: A
Explanation:
https://cloud.google.com/compute/docs/autoscaler#specifications
Autoscaling works independently from autohealing. If you configure autohealing for your group and an instance fails the health check, the autohealer attempts to
recreate the instance. Recreating an instance can cause the number of instances in the group to fall below the autoscaling threshold (minNumReplicas) that you
specify.
Since we need the application running at all times, we need a minimum of 1 instance.
Since only a single instance of the VM should run, we need a maximum of 1 instance.
We want the application running at all times. If the VM crashes due to any underlying hardware failure, we want another instance to be added to the MIG so that the application can continue to serve requests. We can achieve this by enabling autoscaling. The only option that satisfies these requirements is: set autoscaling to On, set the minimum number of instances to 1, and set the maximum number of instances to 1.
Ref: https://cloud.google.com/compute/docs/autoscaler
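A minimal sketch of such a configuration with placeholder names (template, group, and zone):
# Create the managed instance group from an existing instance template:
gcloud compute instance-groups managed create my-app-mig \
    --template=my-app-template \
    --size=1 \
    --zone=us-central1-a
# Turn autoscaling on with both bounds set to a single instance:
gcloud compute instance-groups managed set-autoscaling my-app-mig \
    --min-num-replicas=1 \
    --max-num-replicas=1 \
    --zone=us-central1-a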
A. Open the Google Cloud console, and run a query to determine which resources this service account can access.
B. Open the Google Cloud console, and run a query of the audit logs to find permission denied errors for this service account.
C. Open the Google Cloud console, and check the organization policies.
D. Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from
the folder or organization levels.
Answer: D
Explanation:
This answer is the most effective way to validate whether the service account used by the CI/CD server has the appropriate roles in the specific project. By
checking the IAM roles assigned to the service account, you can see which permissions the service account has and which resources it can access. You can also
check if the service account inherits any roles from the folder or organization levels, which may affect its access to the project. You can use the Google Cloud
console, the gcloud command-line tool, or the IAM API to view the IAM roles of a service account.
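For example, the roles granted to the service account at the project level can be listed with gcloud; the project and service-account names below are placeholders:
# Show every role bound to the CI/CD service account in the project:
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:cicd-sa@my-project.iam.gserviceaccount.com" \
    --format="table(bindings.role)"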
Answer: C
A. Use gsutil to rewrite the storage class for the bucket. Change the default storage class for the bucket.
B. Use gsutil to rewrite the storage class for the bucket. Set up Object Lifecycle Management on the bucket.
C. Create a new bucket and change the default storage class for the bucket. Set up Object Lifecycle Management on the bucket.
D. Create a new bucket and change the default storage class for the bucket. Import the files from the previous bucket into the new bucket.
Answer: B
A. Create a single budget for all projects and configure budget alerts on this budget.
B. Create a separate billing account per sandbox project and enable BigQuery billing export. Create a Data Studio dashboard to plot the spending per billing account.
C. Create a budget per project and configure budget alerts on all of these budgets.
D. Create a single billing account for all sandbox projects and enable BigQuery billing export. Create a Data Studio dashboard to plot the spending per project.
Answer: C
Explanation:
Set budgets and budget alerts. Avoid surprises on your bill by creating Cloud Billing budgets to monitor all of your Google Cloud charges in one place. A budget enables you to track your actual Google Cloud spend against your planned spend. After you've set a budget amount, you set budget alert threshold rules that are used to trigger email notifications. Budget alert emails help you stay informed about how your spend is tracking against your budget.
To set the budget scope, set the budget Scope and then click Next. In the Projects field, select one or more projects that you want to apply the budget alert to. To apply the budget alert to all the projects in the Cloud Billing account, choose Select all.
https://cloud.google.com/billing/docs/how-to/budgets#budget-scop
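If you prefer the command line, a budget scoped to a single sandbox project can be sketched as follows. The billing account ID and project are placeholders, this assumes a gcloud version that includes the billing budgets command group, and the project filter may require the project number rather than the project ID:
# Create a $500 budget for one sandbox project with an alert at 90% of the budget:
gcloud billing budgets create \
    --billing-account=000000-AAAAAA-BBBBBB \
    --display-name="sandbox-project budget" \
    --budget-amount=500USD \
    --filter-projects=projects/sandbox-project-id \
    --threshold-rule=percent=0.9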
A. Ask the auditor for their Google account, and give them the Viewer role on the project.
B. Ask the auditor for their Google account, and give them the Security Reviewer role on the project.
C. Create a temporary account for the auditor in Cloud Identity, and give that account the Viewer role on the project.
D. Create a temporary account for the auditor in Cloud Identity, and give that account the Security Reviewer role on the project.
Answer: C
Explanation:
Using primitive roles: primitive roles grant broad access to a project. Avoid using primitive roles except when absolutely necessary. These roles are very powerful, and include a large number of permissions across all Google Cloud services. For more details on when you should use primitive roles, see the Identity and Access Management FAQ. IAM predefined roles are much more granular, and allow you to carefully manage the set of permissions that your users have access to. See Understanding Roles for a list of roles that can be granted at the project level. Creating custom roles can further increase the control you have over user permissions.
https://cloud.google.com/resource-manager/docs/access-control-proj#using_primitive_roles
https://cloud.google.com/iam/docs/understanding-custom-roles
Answer: A
Answer: D
Answer: C
Explanation:
We need to apply GCP best practices. roles/browser (Browser): read access to browse the hierarchy for a project, including the folder, organization, and IAM policy. This role doesn't include permission to view resources in the project. https://cloud.google.com/iam/docs/understanding-roles
A. * 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud Project for the Marketing department.* 2. Link the new project to a Marketing Billing Account.
B. * 1. Verify that you are assigned the Billing Administrator IAM role for your organization's Google Cloud account.* 2. Create a new Google Cloud Project for the Marketing department.* 3. Set the default key-value project labels to department: marketing for all services in this project.
C. * 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account.* 2. Create a new Google Cloud Project for the Marketing department.* 3. Link the new project to a Marketing Billing Account.
D. * 1. Verify that you are assigned the Organization Administrator IAM role for your organization's Google Cloud account.* 2. Create a new Google Cloud Project for the Marketing department.* 3. Set the default key-value project labels to department: marketing for all services in this project.
Answer: A
Answer: D
Explanation:
The reason is that when you do a health check, you want the VM to be up and working. Doing the first check after the initial setup time of 3 minutes (180 s < 200 s) is reasonable.
The reason why our autoscaling is adding more instances than needed is that it checks 30 seconds after launching the instance and, at this point, the instance isn't up and isn't ready to serve traffic. So our autoscaling policy starts another instance, again checks after 30 seconds, and the cycle repeats until it reaches the maximum number of instances or the instances launched earlier become healthy and start processing traffic, which happens after 180 seconds (3 minutes). This can be easily rectified by adjusting the initial delay to be higher than the time it takes for the instance to become available for processing traffic. Setting this to 200 ensures that the autoscaler waits until the instance is up (around the 180-second mark) and then starts forwarding traffic to this instance. Even after a cool-down period, if the CPU utilization is still high, the autoscaler can again scale up, but this scale-up is genuine and is based on the actual load.
Initial Delay Seconds: this setting delays autohealing from potentially prematurely recreating the instance if the instance is in the process of starting up. The initial delay timer starts when the currentAction of the instance is VERIFYING.
Ref: https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs
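A minimal sketch of configuring autohealing with a 200-second initial delay; the group, health check, and zone names are placeholders:
# Attach a health check to the MIG and wait 200 seconds before the first autohealing check:
gcloud compute instance-groups managed update my-app-mig \
    --health-check=my-http-health-check \
    --initial-delay=200 \
    --zone=us-central1-a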
A. Create a folder to contain all the dev projects. Create an organization policy to limit resources in US locations.
B. Create an organization to contain all the dev projects. Create an Identity and Access Management (IAM) policy to limit the resources in US regions.
C. Create an Identity and Access Management (IAM) policy to restrict the resources locations in the US. Apply the policy to all dev projects.
D. Create an Identity and Access Management (IAM) policy to restrict the resources locations in all dev projects. Apply the policy to all dev roles.
Answer: C
Visit Our Site to Purchase the Full Set of Actual Associate-Cloud-Engineer Exam Questions With Answers.
We Also Provide Practice Exam Software That Simulates Real Exam Environment And Has Many Self-Assessment Features. Order the
Associate-Cloud-Engineer Product From:
https://www.2passeasy.com/dumps/Associate-Cloud-Engineer/
* Associate-Cloud-Engineer Most Realistic Questions that Guarantee you a Pass on Your First Try
* Associate-Cloud-Engineer Practice Test Questions in Multiple Choice Formats and Updates for 1 Year