ACE Exam 2

The document outlines best practices for managing sensitive customer information, enabling the Compute API, viewing logs in App Engine, and setting up a multi-node database on GKE. It emphasizes security, proper logging structures, and efficient project management in Google Cloud. It also covers the use of Persistent Disks with Compute Engine and Kubernetes Engine, and the sequence of operations during instance startup.


You are designing the object security structure for sensitive customer information.

Which of the following should you be sure to include in your planning?


A. Do not grant any bucket-level permissions, so that new objects are secure by default.
B. Put each customer’s objects in a separate bucket, to limit attack surface area.
C. None of the other options is appropriate.
D. Give write access and read access to different people, to ensure separation of duties.
E. Assign all employees to a single full-access group, to keep security simple.

EXPLANATION:
Separation of duties is not about who can read and who can write data. Security should be
simple, yes, but it needs to actually be _secure_, first. You should generally not design your
system to need so many buckets, and you _can_ properly secure the data with object-level
ACLs. It can be a good strategy to not allow any bucket-level access and force access to be
granted explicitly at the object level.

RESOURCES
● OWASP Security by Design Principles

You need to start a set of virtual machines to run year-end processing in a new GCP
project. How can you enable the Compute API in the fewest number of steps?
A. Open Cloud Shell, run `gcloud services enable compute`
B. Navigate to the Compute section of the console.
C. Open Cloud Shell, configure authentication, run `gcloud services enable
compute.googleapis.com`
D. Do nothing. It is enabled by default.
E. Open Cloud Shell, configure authentication, select the “defaults” project, run `gcloud
enable compute service`

EXPLANATION:
There is no such thing as a “defaults” project. Each API must be enabled before it can be used.
Some APIs are enabled by default, but GCE is not. Navigating to the Compute Engine section of the
console automatically enables the GCE API. You do not have to configure authentication to be
able to use Cloud Shell, but regardless, using Cloud Shell would take more steps than simply
navigating to the GCE console.
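
For reference, enabling the API from the command line is also possible; a minimal sketch:

# Enable the Compute Engine API for the active project (Cloud Shell is already authenticated)
gcloud services enable compute.googleapis.com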

You need to view both request and application logs for your Python-based App Engine
app. Which of the following options would be best?
A. None of the other options is appropriate.
B. Use the built-in support to view request logs in the App Engine console and install the
Stackdriver agent to get app logs to Stackdriver.
C. Install the Stackdriver agent to get request logs to Stackdriver; use the Stackdriver
Logging API to send app logs directly to Stackdriver.
D. Use the built-in support to get both request and app logs to Stackdriver.

EXPLANATION:
Google App Engine natively connects to Stackdriver and sends both request logs and any
application logs you give it (via the GAE SDK).
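
Those logs can also be pulled straight from the command line; a small sketch, assuming the default service:

# Stream both request and application logs for the 'default' service
gcloud app logs tail --service=default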

RESOURCES
● Logging in App Engine
● Logging from Python in App Engine

You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil to
retrieve a large amount of data from Cloud Storage. Of the following steps, which is the
first one to happen?
A. Data retrieval from GCS completes
B. The metadata service returns information about this instance to the first requestor
C. The instance startup script begins
D. Stackdriver Logging shows the first log lines from the startup script

EXPLANATION:
Immediately when the VM is powered on and the OS starts booting up, the instance is
considered to be Running. That's when gcloud completes, if it was run without `--async`. Then
the metadata service will provide the startup script to the OS boot process. The gsutil command
will also need to get metadata--like the service account token--but since it is synchronous by
default and will take some time to transfer the volume of data to the instance, the Stackdriver
agent should have a chance to push logs and show the startup script progress. When the
transfer is done, the startup script will complete and more logs will eventually be pushed to
Stackdriver Logging.

RESOURCES
● Checking Instance Status
● gcloud compute instances create
● Storing and Retrieving Instance Metadata
● Startup Scripts in GCE

You are planning to run a multi-node database on GKE. Which of the following things do
you need to consider?
A. You should use cross-region container replication
B. GKE handles disk replication across pods
C. You should use a StatefulSet object
D. You should use PodReplicationState objects
E. At least one DB pod must always be running for data to stay persisted
EXPLANATION:
There is no such thing as a PodReplicationState object in Kubernetes. Data will be persisted in
Persistent Volumes even if all DB pods have failed or been shut down. Kubernetes StatefulSet
objects exist to manage applications that _do_ want to preserve state--unlike normal
applications, which should be stateless.

RESOURCES
● Deploying a Stateful Application on GKE
● Why StatefulSets and not just persistent volumes?

You go to the Activity Log to look at the “Create VM” event for a GCE instance you just
created. You set the Resource Type to “GCE VM Instance”. Which of the following will
display the “Create VM” event you wish to see?
A. Set the “Activity Types” dropdown to “Development”
B. Set the “Activity Types” dropdown to “Data Access”
C. Set the “Activity Types” dropdown to “Monitoring”
D. Set the “Activity Types” dropdown to “Configuration”

EXPLANATION:
You must become very familiar with the Activity Log. In this case, “Create VM” is considered to
be a “Configuration” activity.

RESOURCES
● Activity Log in GCP Console
● Audit Logging in GCP
● Audit Logging in GCE

You navigate to the Activity Log for a project containing a GKE cluster you created. If
you filter the Resource Type to “GCE VM Instance”, which of the following will you see?
A. You will not see any lines because the instances are owned by GKE.
B. You will see lines of the form “DEFAULT_GCE_SERVICE_ACCOUNT created
GKE_NODE_INSTANCE_NAME”
C. None of the other options is correct.
D. You will see lines of the form “YOUR_EMAIL created GKE_NODE_INSTANCE_NAME”

EXPLANATION:
Log lines for GKE node creation will show up in the activity log. But the creation is not attached
to your account--you only created the GKE cluster. Neither is it the GCE default service account
that creates such instances--that account is meant to be used by applications running _on_
GCE instances, not GKE management like this. Instead, log lines will use the passive voice
“GKE_NODE_INSTANCE_NAME was created” to indicate that this was an automatic action
taken by GCP because you had previously configured/requested it do that.

RESOURCES
● Activity Log in GCP Console
● Audit Logging in GCP
● Audit Logging in GCE

You already installed and configured `gcloud` for use on your work computer (not Cloud
Shell). What do you need to do so you can also use `gsutil` and `bq`?
A. Run `gcloud config export gsutil` and `gcloud config export bq`.
B. Run `gsutil config import gcloud` and `bq config import gcloud`.
C. Run `gcloud config export storage` and `gcloud config export query`.
D. Nothing
E. Configure those tools independently.

EXPLANATION:
These tools all share their configuration, which is managed by gcloud.
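
A quick way to convince yourself of this (the bucket and dataset names below are placeholders):

# All three tools read the same active configuration and credentials
gcloud config list
gsutil ls gs://my-bucket
bq ls my_dataset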

RESOURCES
● Obtain credentials and create configuration file with gsutil
● Initializing the Cloud SDK
● Using the bq Command-Line Tool

You are designing the logging structure for a non-containerized Java application that will
run on GAE. Which of the following options is recommended and will use the least
number of steps to enable your developers to later access and search logs?
A. Have the developers write log lines to stdout and stderr
B. Have the developers write log lines to a file named stackdriver.log, install and run the
Stackdriver agent beside the application
C. Have the developers write log lines to a file named stackdriver.log
D. Have the developers write logs using the App Engine Java SDK
E. Have the developers write log lines to a file named application.log, install the Stackdriver
agent on the VMs, configure the Stackdriver agent to monitor and push application.log
F. Have the developers write log lines to stdout and stderr, install and run the Stackdriver
agent beside the application

EXPLANATION:
In App Engine Standard, you should log using the App Engine SDK and the connection to
Stackdriver (i.e. agent installation and configuration) is handled automatically for you.
RESOURCES
● Logging in App Engine

Which of the following are Google-recommended practices for creating new projects?
(Choose 3)
A. Create a project for each environment for your system--such as Dev, QA, and Prod.
B. Create a new project each time you deploy your system.
C. Create a project for each user of your system.
D. Create separate projects for systems owned by different departments in your
organization.
E. Because quotas are shared across all projects, it doesn't matter how many you make.
F. New projects should only be created when your organization can handle at least one
hour of downtime.
G. Add more systems into a project until you hit a quota limit, then make a new one.
H. Use projects to limit blast radius.

EXPLANATION:
Creating new projects does not involve any downtime. Projects can be shared between all
persons working with them; they do not have to be individual, and usually aren't. The system(s)
in one project normally get deployed multiple times and serve many users. It's a good idea to
use projects to separate different systems and environments from each other, partly for
organization and partly to prevent them from interacting badly with each other.

Which of the following roles has the highest level of access?


A. Project Owner
B. Organization Auditor
C. Controller
D. Organization Superuser
E. Project Editor
F. Compute Administrator

EXPLANATION:
There are no such roles as Organization Superuser, Organization Auditor, nor Controller. The
Project Owner has all of the capabilities of the other two (Project Editor and Compute
Administrator), and more. (There is, however, a “Super Admin” role for an organization that can
control everything.)
RESOURCES
● IAM Overview
● Understanding Roles
● Best Practices for Organizations - Domain Admin Roles

You are planning a log analysis system to be deployed on GCP. Which of the following
would be the best way to ingest the logs?
A. BigTable
B. Stackdriver Logging
C. Cloud Pub/Sub
D. Cloud Storage
E. Activity Log

EXPLANATION:
Stackdriver Logging is perfect for accepting many logs, and is a better choice than Cloud
Pub/Sub for the initial ingestion. It can then send logs to Cloud Storage for archiving and/or
send them to Cloud Pub/Sub for streaming to something like Cloud Dataflow.
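
If you later need to route the ingested logs onward, a log sink does it; the sink and bucket names below are invented for illustration:

# Export the project's logs to a GCS bucket for archiving
gcloud logging sinks create my-archive-sink storage.googleapis.com/my-log-archive-bucket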

RESOURCES
● (Stackdriver) Cloud Logging
● Sample Architecture for Log Processing

You currently have 850TB of Closed-Circuit Television (CCTV) capture data and are
adding new data at a rate of 80TB/month. The rate of data captured and needing to be
stored is expected to grow to 200TB/month within one year because new locations are
being added, each with 4-10 cameras. Which of the following storage options best suits
this purpose without encountering storage or throughput limits?
A. One Cloud Storage bucket per month, for all locations
B. One Cloud Storage bucket for all objects
C. One Cloud Storage bucket per year, per location
D. One Cloud Storage bucket per week
E. One Cloud Storage bucket per CCTV camera

EXPLANATION:
This question might make you think you need to do some math to calculate rates and compare
to limits, but you don’t. You don’t need to split your data up to avoid bucket-level limits. It is
generally easiest (and best) to manage all your data in a single bucket, using things like
folders to organize it. In fact, if you separate data into many buckets, you are more likely
to encounter limits around bucket creation and deletion.
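
For example, a single bucket can hold the whole archive, with object-name prefixes acting as folders; the bucket and path here are invented:

# One bucket, organized by location and month via object-name prefixes
gsutil cp recording.mp4 gs://cctv-archive/location-0042/2024-06/recording.mp4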

RESOURCES
● GCS Quotas

You are responsible for securely managing employee access to Google Cloud. Which of
the following are Google-recommended practices for this? (Choose 2)
A. Set up all employee accounts to use the corporate security office phone number for
account rescue.
B. Use Cloud Identity or GSuite to manage Google accounts for employees.
C. Enforce MFA on employee accounts.
D. Have each employee set up a GMail account using two-factor authentication.
E. Use Google Cloud Directory Sync to push Google account changes to corporate head
office via LDAP.

EXPLANATION:
MFA stands for Multi-Factor Authentication, and it is a best practice to use this to secure
accounts. Cloud Identity and GSuite are the two ways to centrally manage Google accounts.
Google Cloud Directory Sync (GCDS) does use LDAP to connect to your organization’s
directory server, but it only pulls data to synchronize and never pushes changes.

RESOURCES
● Best Practices for Enterprise Organizations (Must Read in Full!)

You already have a GCP project but want another one for a new developer who has
started working for your company. How can you create a new project?
A. Turn on Gold level support on an existing project, phone support to create a new project.
B. In the GCP mobile app, navigate to the support section and press “Create new project”.
C. Configure GCS for your local machine using QUIK bindings and press its “New Project”
button.
D. Enable Silver support on your billing account, email support to create a new project.
E. You cannot create a new project.
F. In the console, press on the current project name, then press on “Create New”.

EXPLANATION:
You can create new projects, up to your quota. Support does not create projects for you; that's
something you do, yourself. “QUIK bindings” are just something made up.
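
For reference, the CLI equivalent of the console flow in option F is a single command; the project ID is a placeholder:

# Create a new project (subject to your project quota)
gcloud projects create my-new-dev-project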

You are designing the logging structure for a containerized Java application that will run
on GAE Flex. Which of the following options is recommended and will use the least
number of steps to enable your developers to later access and search logs?
A. Have the developers write log lines to a file named stackdriver.log, install and run the
Stackdriver agent beside the application
B. Have the developers write log lines to a file named stackdriver.log
C. Have the developers write log lines to stdout and stderr
D. Have the developers write log lines to a file named application.log, install the Stackdriver
agent on the VMs, configure the Stackdriver agent to monitor and push application.log
E. Have the developers write log lines to stdout and stderr, install and run the Stackdriver
agent beside the application
F. Have the developers write logs using the App Engine Java SDK
EXPLANATION:
In App Engine Flex the connection to Stackdriver (i.e. agent installation and configuration) is
handled automatically for you. In GAE Flex, you _could_ write logs using the App Engine SDK--
and that would work--but it’s best practice for containers to log to stdout and stderr, instead:
“Containers offer an easy and standardized way to handle logs because you can write them to
stdout and stderr. Docker captures these log lines and allows you to access them by using the
docker logs command. As an application developer, you don't need to implement advanced
logging mechanisms. Use the native logging mechanisms instead.”

RESOURCES
● Logging in App Engine
● Best Practices for Operating Containers

You are planning to use Persistent Disks in your system. In the context of what other
GCP service(s) will you be using these Persistent Disks? (Choose 2)
A. BigTable
B. Cloud Storage
C. You can only use Persistent Disks with one of the other listed options
D. Compute Engine
E. Kubernetes Engine

EXPLANATION:
Persistent Disks attach to GCE instances, but they can also be used through GKE. Cloud
Storage and BigTable are completely separate types of storage.
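
As a concrete illustration, creating a disk and attaching it to an instance takes two commands; the disk name, VM name, size, and zone are assumptions:

# Create a 200 GB persistent disk, then attach it to an existing VM
gcloud compute disks create mydata-disk --size=200GB --zone=us-central1-a
gcloud compute instances attach-disk myvm --disk=mydata-disk --zone=us-central1-a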

RESOURCES
● Persistent Disks

You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil to
retrieve a large amount of data from Cloud Storage. Of the following steps, which is the
last one to happen?
A. Space is reserved on a host machine
B. The gcloud command to start the instance completes
C. The instance startup script completes
D. The instance goes into the Running state

EXPLANATION:
After a request to create a new instance has been accepted and while space is being found on
some host machine, that instance starts in the Provisioning state. After space has been found
and reserved on a host machine, the instance state goes to Staging while the host prepares to
run it and sorts out things like the network adapter that will be used. Immediately when the VM
is powered on and the OS starts booting up, the instance is considered to be Running. That's
when gcloud completes, if it was run without `--async`.
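
You can watch these state transitions yourself; a small sketch, with the instance name and zone as placeholders:

# Print the current lifecycle state (PROVISIONING -> STAGING -> RUNNING)
gcloud compute instances describe myvm --zone=us-central1-a --format='value(status)'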

RESOURCES
● Checking Instance Status
● gcloud compute instances create

You have previously installed the Google Cloud SDK on your work laptop and configured
it. You now run the command `gcloud compute instances create newvm` but it does not
prompt you to specify a zone. Which of the following could explain this? (Choose 2)
A. The project configured for gcloud is located in a particular zone.
B. Your gcloud configuration includes a value for compute/region
C. Your gcloud configuration includes a value for compute/zone
D. In Cloud Shell, you previously set a zone as the default one GCE should use.
E. Only one of the other options is correct.

EXPLANATION:
Projects are global and are not “located” in any region or zone. The gcloud family of tools
save their default zone information locally where they’re installed, and these are separate from
console settings. The gcloud tool _can_ pull the values set in the console if you rerun `gcloud
init`, but gcloud does not push its configuration to the place the console uses.
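
Setting and checking that local default takes one command each; the zone value is only an example:

# Store a default zone in the active gcloud configuration
gcloud config set compute/zone us-central1-a
# Show what gcloud will use for compute/zone
gcloud config get-value compute/zone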

RESOURCES
● Setting a Default Region and Zone for GCE

What will happen if a running GKE Deployment encounters a fatal error?


A. GKE Deployments are configuration information and do not directly encounter fatal
errors.
B. None of the other options is correct.
C. GKE will automatically restart that deployment on an available node.
D. You can tell GKE to restart the deployment in an available pod.
E. GKE will automatically restart the deployment in an available pod.
F. GKE will automatically restart that deployment on an available GCE host.

EXPLANATION:
GKE Deployments are a declaration of what you want. Functionally, a Deployment uses
ReplicaSets to make sure that the right configuration and number of pods are deployed to the
cluster.
RESOURCES
● GKE Deployments

What will happen if a running GKE pod encounters a fatal error?


A. GKE pods are tiered and cannot encounter fatal errors.
B. If it is a part of a node, GKE will automatically restart that pod on an available GCE host.
C. If it is a part of a deployment, GKE will automatically restart that pod on an available
node.
D. You can tell GKE to restart the pod in an available deployment.
E. If it is a part of a host, GKE will automatically restart the pod in an available deployment.

EXPLANATION:
GKE tries to ensure that the number of pods you’ve specified in your deployment are always
running, so it will restart one if it fails. All the other options are using terms in ways that don’t
make sense (such as “an available deployment”). From the documentation: `Pods do not 'heal'
or repair themselves. For example, if a Pod is scheduled on a node which later fails, the Pod is
deleted. Similarly, if a Pod is evicted from a node for any reason, the Pod does not replace
itself.`

RESOURCES
● GKE Pods

You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil to
retrieve a large amount of data from Cloud Storage. Of the following steps, which is the
last one to happen?
A. The metadata service returns information about this instance to the first requestor
B. The instance startup script begins
C. Stackdriver Logging shows the first log lines from the startup script
D. Data retrieval from GCS completes

EXPLANATION:
Immediately when the VM is powered on and the OS starts booting up, the instance is
considered to be Running. That's when gcloud completes, if it was run without `--async`. Then
the metadata service will provide the startup script to the OS boot process. The gsutil command
will also need to get metadata--like the service account token--but since it is synchronous by
default and will take some time to transfer the volume of data to the instance, the Stackdriver
agent should have a chance to push logs and show the startup script progress. When the
transfer is done, the startup script will complete and more logs will eventually be pushed to
Stackdriver Logging.
RESOURCES
● Checking Instance Status
● gcloud compute instances create
● Storing and Retrieving Instance Metadata
● Startup Scripts in GCE

How should you enable a GCE instance to read files from a bucket in the same project?
(Choose 2)
A. Do not change the default service account setup and attachment
B. Grant bucket read access to the default compute service account
C. Log into Cloud Shell and run `gcloud services enable storage.googleapis.com`
D. When launching the instance, remove the default service account so it falls back to
project-level access
E. Only one of the other options is correct
F. Log onto the instance and run `gcloud services enable storage.googleapis.com`

EXPLANATION:
By default, both the default service account and the default scopes can read from GCS buckets
in the same project, so you should just leave those alone and it will work.
RESOURCES
● Granting Roles to Service Accounts

You have two web applications that you want to deploy in GCP--one written in Ruby and
the other written in Rust. Which of the following GCP services would be capable of
handling these apps?
A. Web Engine Ex
B. App Engine Flexible
C. Cloud Dataflow
D. Web Engine
E. App Engine Standard

EXPLANATION:
There is no GCP service called Web Engine or Web Engine Ex. App Engine Standard supports
apps written in Java, Python, and Go. Cloud Dataflow and Cloud Dataproc are services for
processing large volumes of data, not for hosting web apps. Ruby and Rust applications could
both be run in containers on App Engine Flexible.

RESOURCES
● App Engine FAQ
● Cloud Dataflow
● Cloud Dataproc

A co-worker tried to access the `myfile` file that you have stored in the `mybucket` GCS
bucket, but they were denied access. Which of the following represents the best way to
allow them to view it?
A. In the GCP console, go to the Activity screen, find the “File Access Denied” line, and
press the “Add Exception” button.
B. In the GCP console, go to the “IAM & Admin” section, switch to the “Roles” tab, and add
the co-worker under “Editor”.
C. SSH to a GCE instance and type `gcloud storage allow-access coworker@email.domain
gs://mybucket/myfile`
D. In Cloud Shell, type `gsutil acl ch -u coworker@email.domain:r gs://mybucket/myfile`

EXPLANATION:
There is no “Add Exception” button on the Activity screen. Neither will `gcloud storage allow-access` work. You could add the co-worker as a project editor, but that is way more privilege
than they need to view one file.

You run the command `kubectl describe pod mypodname` in Cloud Shell. What should
you expect to see?
A. An authentication failure
B. An “unknown command” error
C. An authorization failure
D. A configuration error
E. Information about the named pod

EXPLANATION:
This is a valid command and Cloud Shell will automatically configure kubectl with the required
authentication information to allow you to interact with the GKE cluster through it.
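
Behind the scenes, Cloud Shell (or you, on another machine) wires kubectl to the cluster roughly like this; the cluster name and zone are assumptions:

# Fetch credentials and write a kubeconfig entry for kubectl
gcloud container clusters get-credentials mycluster --zone=us-central1-a
kubectl describe pod mypodname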

RESOURCES
● Cheat Sheet for kubectl
● Viewing Pods and Nodes in Kubernetes

You need to store some recently recorded customer focus sessions into a new GCP
project. How can you enable the GCS API in the fewest number of steps?
A. Navigate to the Storage section of the console.
B. Open Cloud Shell, configure authentication, select the “defaults” project, run `gcloud
enable storage service`
C. Do nothing. It is enabled by default.
D. Open Cloud Shell, configure authentication, run `gcloud services enable
storage.googleapis.com`
E. Open Cloud Shell, run `gcloud services enable storage`

EXPLANATION:
There is no such thing as a “defaults” project. Each API must be enabled before it can be used.
Some APIs are enabled by default, and that includes GCS. You do not have to configure
authentication to be able to use Cloud Shell, but regardless, using Cloud Shell would take more
steps than doing nothing. :-)

You need to store thousands of 2TB objects for one month and it is very unlikely that you
will need to retrieve any of them. Which of the following options would be the most cost-
effective?
A. Nearline Cloud Storage bucket
B. Coldline Cloud Storage bucket
C. Regional Cloud Storage bucket
D. Bigtable
E. Multi-Regional Cloud Storage bucket

EXPLANATION:
Bigtable is not made for storing large objects. Coldline’s minimum storage duration of 90 days
makes it more expensive than Nearline. Multi-Regional and Regional are both more expensive
than Nearline.
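
Creating the bucket with the right storage class up front is a one-liner; the bucket name and location are assumptions:

# Nearline-class bucket for rarely-read objects kept about a month
gsutil mb -c nearline -l us-central1 gs://my-recordings-archive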

RESOURCES
● GCS Storage Classes
● GCS Pricing
● Bigtable

You need to very quickly set up Wordpress on GCP. Which of the following are the
fastest options to get up and running? (Choose 2)
A. Cloud Functions
B. Cloud Press
C. Only one of the other options would work
D. Cloud Launcher
E. GCP Marketplace
F. Compute Engine

EXPLANATION:
There is no such GCP service as “Cloud Press”. Wordpress is not designed to run on Google
Cloud Functions. The Cloud Launcher was renamed to be the GCP Marketplace--so these refer
to the same thing--and this is a quick way to deploy all sorts of different things in GCP.
RESOURCES
● Cloud Launcher becomes GCP Marketplace
● GCP Marketplace Overview
● Wordpress on GCP
● Wordpress on GCP Marketplace
● GCP Marketplace Docs

You are planning out your organization’s usage of GCP. Which of the following is a
Google-recommended practice?
A. GCS ACLs should always be set by a Service Account.
B. None of the other options is correct.
C. Auditor access should be granted through a Service Account.
D. GCS ACLs should always be set to a Service Account.
E. The project owner should generally be a Service Account.

EXPLANATION:
Service accounts are meant to be used by programs and they are one--but not the only!--way to
manage access to resources.

RESOURCES
● Understanding Service Accounts

You are working together with a contractor from the Acme company and you need to
allow App Engine running in one of Acme’s GCP projects to write to a Cloud Pub/Sub
topic you own. Which of the following pieces of information are enough to let you enable
that access? (Choose 2)
A. The email address of the Acme contractor
B. The Acme GCP project’s name
C. The Acme GCP project’s project number
D. The Acme GCP project’s project ID
E. The email address of the Acme project service account

EXPLANATION:
You need to grant access to the service account being used by Acme’s App Engine app, not the
contractor, so you don’t care about the contractor’s email address. If you are given the service
account email address, you’re done; that’s enough. If you need to use the pattern to construct
the email address, you’ll need to know the Project ID (not its number, unlike for GCE!) to
construct the email address used by the default App Engine service account:
`PROJECT_ID@appspot.gserviceaccount.com` .
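
Granting that access is then a single IAM binding; the topic name is invented here, and PROJECT_ID stands in for Acme's project ID:

# Let Acme's default App Engine service account publish to your topic
gcloud pubsub topics add-iam-policy-binding mytopic \
  --member=serviceAccount:PROJECT_ID@appspot.gserviceaccount.com \
  --role=roles/pubsub.publisher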

RESOURCES
● Service Accounts
● Understanding Service Accounts
● Granting Roles to Service Accounts

In Cloud Shell, you run the command `gcloud compute instances list`, and the response
that you see is `HTTPError 403: Access Not Configured.`. What is a likely explanation for
this error message?
A. The GCE API has not yet been enabled for this account.
B. This Cloud shell instance does not have read access to any of the currently running
instances.
C. The startup script for this Cloud Shell instance has not yet finished running.
D. The GCE API has not yet been enabled for this Cloud Shell instance.
E. The GCE API has not yet been enabled for this project.
F. Your user account does not have read access to any of the currently running instances.

EXPLANATION:
APIs must be enabled at the project level, and 403 can indicate that that has not yet been done.

You have a GKE cluster that has fluctuating load over the course of each day and you
would like to reduce costs. What should you do?
A. In the GKE console, edit the cluster and enable cluster autoscaling.
B. In the GCE console, add the nodes to an unmanaged instance group.
C. In the GCE console, add the nodes to a managed instance group.
D. Run `gcloud container clusters resize mycluster --size=auto` .
E. Write a script to recreate the cluster as demand changes.

EXPLANATION:
Clusters are editable, not immutable, and should not be recreated because of changes in
demand. You cannot manage GKE nodes with your own instance groups--and you can’t migrate
nodes into a managed instance group, anyway. You cannot enable cluster autoscaling with the
`resize` command, but you can turn that option on in the console or using the command `gcloud
container clusters update CLUSTER_NAME --enable-autoscaling`.
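
The full form of that command also takes node-count bounds; the cluster name and limits below are examples only:

# Turn on the cluster autoscaler with explicit size bounds
gcloud container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=6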

RESOURCES
● GKE Cluster Architecture
● Resizing a GKE Cluster
● Resizing a GKE Cluster gcloud Reference

You need to store some structured data and query and continually update it with SQL
from your web app backend. The data volume and query load are reasonably consistent
and you would like to reduce ongoing maintenance and management costs. Which
option would best serve these requirements?
A. MySQL on GCE
B. BigQuery
C. Cloud Bigtable
D. Cloud SQL
E. None of the other options is appropriate
F. Cloud Storage

EXPLANATION:
Cloud Storage is for unstructured data and does not support SQL. BigQuery is made for mostly-
static analytics situations--not continually updated data as indicated in the scenario--and a web
app backend may need lower latency than BigQuery offers. Bigtable is made for low-latency
analytics situations. Managing your own MySQL installation on GCE would be a lot more work
than using Cloud SQL. Cloud SQL is a good fit for the described situation.
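
For a sense of scale, standing up a managed MySQL instance is roughly one command; every value shown is an assumption:

# Managed MySQL; backups, patching, and failover are handled for you
gcloud sql instances create myapp-db --database-version=MYSQL_8_0 \
  --tier=db-n1-standard-1 --region=us-central1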

RESOURCES
● A GCP Flowchart A Day (Very Valuable!)

You are planning to use BigTable for your system on GCP. Which of the following
statements is true about using the pricing calculator for this situation?
A. You need to estimate query volume for the BigTable autoscaling estimation.
B. You need to enter the number of BigTable nodes you’ll provision.
C. You need to estimate how much GCS data will be backing the BigTable.
D. None of the other options is correct.

EXPLANATION:
BigTable is priced by provisioned nodes. BigTable does not autoscale. BigTable does not store
its data in GCS.

RESOURCES
● GCP Pricing Calculator
● Bigtable
● Bigtable Instances, Clusters, and Nodes

You have a GKE cluster that currently has six nodes but has lots of idle capacity. What
should you do?
A. Run `gcloud container clusters resize mycluster --size=5` .
B. Nothing. GKE is always fully managed and will scale down by default.
C. In the GCE console, delete one of the nodes.
D. Clusters are immutable so simply create a new cluster for the smaller workload.
E. In the GCE console, terminate one of the nodes.

EXPLANATION:
Clusters are editable, not immutable, and should not be recreated because of changes in
demand. Cluster autoscaling is an optional setting. You do not manage nodes via GCE,
directly--you always manage them through GKE, even though you can see them via GCE.

RESOURCES
● GKE Cluster Architecture
● Resizing a GKE Cluster
● Resizing a GKE Cluster gcloud Reference

You are planning to use GPUs for your system on GCP. Which of the following
statements is true about using the pricing calculator for this situation?
A. None of the other options is correct.
B. GPUs are always entered on the GPU tab.
C. GPUs can be entered on both the GCE and GKE tabs.
D. GPUs are always entered on the GCE tab.
E. GPUs can be entered on any of the GCE, GKE, and GAE tabs.

EXPLANATION:
The pricing calculator does not have a GPU tab. App Engine doesn’t support GPUs. GPUs can
be entered on the GKE tab. GPUs can be entered on the GCE tab.

RESOURCES
● GCP Pricing Calculator

You have a GKE cluster that currently has six nodes but will soon run out of capacity.
What should you do?
A. Nothing. GKE is always fully managed and will scale up by default.
B. Run `gcloud compute instances create anyname --gke`
C. In the GKE console, edit the cluster and specify the new desired size.
D. Run `gcloud compute instances create gke-7`
E. Clusters are immutable so simply create a new cluster for the larger workload.

EXPLANATION:
Clusters are editable, not immutable, and should not be recreated because of changes in
demand. Cluster autoscaling is an optional setting. You do not manage nodes via GCE,
directly--you always manage them through GKE, even though you can see them via GCE.
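
Resizing from the command line is equally valid; in recent gcloud releases the flag is `--num-nodes` (older releases used `--size`), and the names here are placeholders:

# Grow the default node pool to nine nodes
gcloud container clusters resize mycluster --num-nodes=9 --zone=us-central1-a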

RESOURCES
● GKE Cluster Architecture
● Resizing a GKE Cluster

You are planning to run a single-node database on GKE. Which of the following things do
you need to consider?
A. You should use a DaemonSet object
B. GKE handles disk replication across pods
C. The data will likely be corrupted when a deployment changes or a pod fails
D. You should use DataSet and DataSetReplication objects
E. You should use PersistentVolume and PersistentVolumeClaim objects

EXPLANATION:
Databases are all about preserving information--about _keeping and not losing_ data--so we
need to make sure that GKE knows that we care about the data we store and need to keep it
around. To do this, we need Persistent Volumes and Persistent Volume Claims. GKE does not
replicate disks across pods; it ensures that the data for a pod persists and is still available to it
when it recovers from a failure.
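
Here is a minimal sketch of claiming durable storage for a single DB pod; the claim name and size are assumptions (GKE's default StorageClass dynamically provisions a Persistent Disk to back the claim):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data          # assumed claim name; reference it from the pod spec
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi      # assumed size
EOF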

RESOURCES
● Deploying a Stateful Application on GKE
● Why StatefulSets and not just persistent volumes?

You are designing the logging structure for a containerized Java application that will run
on GKE. Which of the following options is recommended and will use the least number of
steps to enable your developers to later access and search logs?
A. Have the developers write log lines to a file named stackdriver.log, install and run the
Stackdriver agent beside the application
B. Have the developers write log lines to a file named stackdriver.log
C. Have the developers write log lines to a file named application.log, install the Stackdriver
agent on the VMs, configure the Stackdriver agent to monitor and push application.log
D. Have the developers write log lines to stdout and stderr
E. Have the developers write logs using the App Engine Java SDK
F. Have the developers write log lines to stdout and stderr, install and run the Stackdriver
agent beside the application

EXPLANATION:
The App Engine SDKs only work for apps running on App Engine. Stackdriver does not
automatically send files named stackdriver.log . “Stackdriver Logging is enabled by default when
you create a new cluster using the gcloud command-line tool or Google Cloud Platform
Console.” Logging to stdout and stderr on GKE _is_ the recommended way to log: “Containers
offer an easy and standardized way to handle logs because you can write them to stdout and
stderr. Docker captures these log lines and allows you to access them by using the docker logs
command. As an application developer, you don't need to implement advanced logging
mechanisms. Use the native logging mechanisms instead.”

Google has just released a new XYZ service and you would like to try it out in your pre-
existing skunkworks project. How can you enable the XYZ API in the fewest number of
steps?
A. Open Cloud Shell, configure authentication, select the “defaults” project, run `gcloud
enable xyz service`
B. Open Cloud Shell, configure authentication, run `gcloud services enable
xyz.googleapis.com`
C. Open Cloud Shell, run `gcloud services enable xyz.googleapis.com`
D. Do nothing. It is enabled by default.
E. Since you have Gold-level support on this project, phone support to enable XYZ
F. Open Cloud Shell, run `gcloud services enable xyz`
G. Since you have Silver-level support on your linked billing account, email support to
enable XYZ

EXPLANATION:
Google does not generally enable new services by default for existing projects. Cloud Shell
does not require you to configure authentication. GCP Support does not get involved with things
like enabling APIs for you; that's something you simply do for yourself. The API URL in the
gcloud command to enable it includes `googleapis.com`.

Who can change the billing account linked to a project? (Choose 2)


A. Any project editor
B. Any project auditor
C. Any user of the project
D. The project owner
E. Any project billing administrator
F. Only Google Support

EXPLANATION:
Google Support does not generally get involved in changing project billing accounts. Auditors
cannot (should not be able to) make changes. Project editors and users do not have authority to
make billing changes.

You are designing the object security structure for sensitive customer information.
Which of the following should you be sure to include in your planning?
A. Ensure there is a honeypot, to support penetration testing.
B. Assign only limited access, to achieve least privilege.
C. Hash and salt all data, to limit the blast radius of any potential breach.
D. None of the other options is appropriate.
E. Randomize object names, to support security through obscurity.
F. Use both ACLs and roles, to achieve defense in depth.

EXPLANATION:
Least privilege is a paramount concern for data security, and you definitely do want to restrict
access as much as possible to support this. Hashing and salting _passwords_ is important, but
if you hash information you need to view (not just compare), then hashing will make it unusable.
ACLs and roles can both be used, but they will not create multiple layers of security that an
attacker would need to go through: any allow in either of them will suffice to view the data.
Security through obscurity is not an effective
strategy for securing data (or anything, really); you must assume that every attacker knows what
you know and still ensure data safety. Penetration testing can be used as a part of your overall
security strategy, but it doesn’t require a honeypot and is not your primary consideration.

RESOURCES
● OWASP Security by Design Principles

You are currently creating instances with `gcloud compute instances create myvm --machine-type=n1-highmem-8`. This is good, but you would just like a bit more RAM. Which of the following replacements would be the most cost effective?
A. `gcloud compute instances create myvm --custom-cpu=1 --custom-memory=10`
B. `gcloud compute instances create myvm --custom-cpu=10 --custom-memory=60`
C. `gcloud compute instances create myvm --custom-cpu=8 --custom-memory=60`
D. `gcloud compute instances create myvm --machine-type=n1-highcpu-16`
E. `gcloud compute instances create myvm --machine-type=n1-highmem-16`
F. `gcloud compute instances create myvm --custom-cpu=2 --custom-memory=10`
G. `gcloud compute instances create myvm --machine-type=n1-highmem-10`

EXPLANATION:
For reference, the n1-highmem-8 has 8 CPUs and 52 GB of memory, but you do NOT need to
remember this. Just remember that predefined machine types are named by their CPU counts
and those are always powers of two--so `n1-highmem-10` is invalid. Custom machine types let
you tweak the predefined types, but you can’t add more RAM per CPU than you get with the
`highmem` machine types unless you use “Extended Memory”. But since the 8-CPU custom
type option does not include `--custom-extensions`, it doesn’t get Extended Memory and the
command won’t work. Since you’ll need to add more CPUs, you could go to `n1-highmem-16`--
but a custom machine type with only 10 CPUs will be less expensive than that.
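
As a sketch of the winning option, note that current gcloud releases want explicit units on the memory flag; the instance name is a placeholder:

# Custom type: 10 vCPUs and 60 GB RAM (within the non-extended per-CPU memory limit)
gcloud compute instances create myvm --custom-cpu=10 --custom-memory=60GB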

RESOURCES
● Custom Machine Types
● Extended Memory
● Machine Types
● Resource-Based Pricing

You run the command `kubectl deploy-pod mypodname` in Cloud Shell. What should you
expect to see?
A. Status about the newly-deployed pod
B. An “unknown command” error
C. An authorization failure
D. An authentication failure
E. A configuration error

EXPLANATION:
This is not a valid command and kubectl will complain that it is unknown.

RESOURCES
● Cheat Sheet for kubectl
● Viewing Pods and Nodes in Kubernetes

You have a StatefulSet and a DaemonSet deployed in your GKE cluster which currently
has seven nodes. What will happen if you scale the cluster down to six nodes?
A. You will be unable to access the data from one StatefulSet pod.
B. Clients connecting to any Services will experience a momentary service interruption.
C. The size of any deployments will be decreased by one.
D. The number of pods for the DaemonSet will shrink.
E. All pods that were running on the terminated node will be restarted on other nodes.

EXPLANATION:
A DaemonSet runs one pod per GKE node.

RESOURCES
● GKE DaemonSets

How many projects can you create?


A. A maximum of one per five minutes
B. There are no limits on creating new projects
C. As many as Google Support will make for you
D. It doesn't matter, as you should really only need one
E. A maximum of five per month
F. As many as allowed by your quota
G. A maximum of five per second
EXPLANATION:
You do have a quota for the total number of projects you can have at once.

Which of the following is NOT a part of having a Java program running on a GCE
instance access the Cloud Tasks API in a Google-recommended way?
A. The access scopes should include access to the Cloud Tasks API
B. The Cloud Tasks API should be enabled
C. The service account should have access to the Cloud Tasks API
D. The GCE instance should be using a service account
E. The program should pass “Metadata-Flavor: Google” to the SDK
F. The program should use the Google SDK

EXPLANATION:
Java programs can use the SDK to access GCP services, and the SDK will take care of the
details of retrieving the access token from the metadata service and communicating with the
service. As such, your program need not concern itself with the “Metadata-Flavor: Google”
header; the SDK will handle that.
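
The header in option E is real, but it belongs to the metadata service, which the SDK queries for you. By way of illustration only, this is roughly the request the SDK makes under the hood:

# Manual token fetch from the metadata server (the SDK normally does this for you)
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"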

RESOURCES
● Compute Engine Service Accounts
● Using OAuth 2.0 for Server to Server Applications

When comparing `n1-standard-8`, `n1-highcpu-8`, and `n1-highmem-16`, which of the
following statements are true? (Choose 3)
A. The `n1-highcpu-8` and `n1-highmem-16` cost about the same amount.
B. The `n1-standard-8` has the least RAM
C. The `n1-highcpu-8` costs more than the `n1-highmem-16`.
D. The `n1-highcpu-8` costs less than the `n1-highmem-16`.
E. The `n1-highmem-16` has the most RAM
F. The `n1-highmem-16` has the least CPUs
G. The `n1-highmem-16` has the most CPUs

EXPLANATION:
The number at the end of the machine type indicates how many CPUs it has, and the type tells
you where in the range of allowable RAM that machine falls--from minimum (highcpu) to
balanced (standard) to maximum (highmem). The cost of each machine type is determined by
how much CPU and RAM it uses. Understanding that is enough to correctly answer this
question.

RESOURCES
● Machine Types
● Resource-Based Pricing

You are designing the logging structure for a non-containerized Java application that will
run on GCE. Which of the following options is recommended and will use the least
number of steps to enable your developers to later access and search logs?
A. Have the developers write log lines to a file named stackdriver.log
B. Have the developers write logs using the App Engine Java SDK
C. Have the developers write log lines to a file named stackdriver.log, install and run the
Stackdriver agent beside the application
D. Have the developers write log lines to a file named application.log, install the Stackdriver
agent on the VMs, configure the Stackdriver agent to monitor and push application.log
E. Have the developers write log lines to stdout and stderr
F. Have the developers write log lines to stdout and stderr, install and run the Stackdriver
agent beside the application

EXPLANATION:
The App Engine SDKs only work for apps running on App Engine. Stackdriver does not
automatically send files named stackdriver.log . Stackdriver is not installed by default on GCE.
Logging to stdout and stderr on GCE is not the recommended way to get logs to Stackdriver;
configuring a custom log file location is.

RESOURCES
● How To Do Logging on GCE
● Logging Agent Configuration (Just Skim)
