CIS Google Kubernetes Engine (GKE) Benchmark v1.5.0

Table of Contents

Terms of Use
Table of Contents
Overview
   Intended Audience
   Consensus Guidance
   Typographical Conventions
Recommendation Definitions
   Title
   Assessment Status
      Automated
      Manual
   Profile
   Description
   Rationale Statement
   Impact Statement
   Audit Procedure
   Remediation Procedure
   Default Value
   References
   CIS Critical Security Controls® (CIS Controls®)
   Additional Information
Profile Definitions
Acknowledgements
Recommendations
   1 Control Plane Components
   2 Control Plane Configuration
      2.1 Authentication and Authorization
         2.1.1 Client certificate authentication should not be used for users (Manual)
   3 Worker Nodes
      3.1 Worker Node Configuration Files
         3.1.1 Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Manual)
         3.1.2 Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)
         3.1.3 Ensure that the kubelet configuration file has permissions set to 600 (Manual)
         3.1.4 Ensure that the kubelet configuration file ownership is set to root:root (Manual)
   4 Policies
      4.1 RBAC and Service Accounts
         4.1.1 Ensure that the cluster-admin role is only used where required (Manual)
         4.1.2 Minimize access to secrets (Manual)
         4.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
         4.1.4 Minimize access to create pods (Manual)
         4.1.5 Ensure that default service accounts are not actively used (Manual)
         4.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
         4.1.7 Avoid use of system:masters group (Manual)
         4.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
         4.1.9 Minimize access to create persistent volumes (Manual)
         4.1.10 Minimize access to the proxy sub-resource of nodes (Manual)
         4.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)
         4.1.12 Minimize access to webhook configuration objects (Manual)
         4.1.13 Minimize access to the service account token creation (Manual)
      4.2 Pod Security Standards
         4.2.1 Ensure that the cluster enforces Pod Security Standard Baseline profile or stricter for all namespaces. (Manual)
      4.3 Network Policies and CNI
         4.3.1 Ensure that the CNI in use supports Network Policies (Manual)
         4.3.2 Ensure that all Namespaces have Network Policies defined (Manual)
      4.4 Secrets Management
         4.4.1 Prefer using secrets as files over secrets as environment variables (Manual)
         4.4.2 Consider external secret storage (Manual)
      4.5 Extensible Admission Control
         4.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
      4.6 General Policies
         4.6.1 Create administrative boundaries between resources using namespaces (Manual)
         4.6.2 Ensure that the seccomp profile is set to RuntimeDefault in the pod definitions (Manual)
         4.6.3 Apply Security Context to Pods and Containers (Manual)
         4.6.4 The default namespace should not be used (Manual)
   5 Managed services
      5.1 Image Registry and Image Scanning
         5.1.1 Ensure Image Vulnerability Scanning is enabled (Automated)
         5.1.2 Minimize user access to Container Image repositories (Manual)
         5.1.3 Minimize cluster access to read-only for Container Image repositories (Manual)
         5.1.4 Minimize Container Registries to only those approved (Manual)
      5.2 Identity and Access Management (IAM)
         5.2.1 Ensure GKE clusters are not running using the Compute Engine default service account (Automated)
         5.2.2 Prefer using dedicated GCP Service Accounts and Workload Identity (Manual)
      5.3 Cloud Key Management Service (Cloud KMS)
         5.3.1 Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS (Automated)
      5.4 Node Metadata
         5.4.1 Ensure legacy Compute Engine instance metadata APIs are Disabled (Automated)
         5.4.2 Ensure the GKE Metadata Server is Enabled (Automated)
      5.5 Node Configuration and Maintenance
         5.5.1 Ensure Container-Optimized OS (cos_containerd) is used for GKE node images (Automated)
         5.5.2 Ensure Node Auto-Repair is enabled for GKE nodes (Automated)
         5.5.3 Ensure Node Auto-Upgrade is enabled for GKE nodes (Automated)
         5.5.4 When creating New Clusters - Automate GKE version management using Release Channels (Manual)
         5.5.5 Ensure Shielded GKE Nodes are Enabled (Automated)
         5.5.6 Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled (Automated)
         5.5.7 Ensure Secure Boot for Shielded GKE Nodes is Enabled (Automated)
      5.6 Cluster Networking
         5.6.1 Enable VPC Flow Logs and Intranode Visibility (Automated)
         5.6.2 Ensure use of VPC-native clusters (Automated)
         5.6.3 Ensure Control Plane Authorized Networks is Enabled (Automated)
         5.6.4 Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Automated)
         5.6.5 Ensure clusters are created with Private Nodes (Automated)
         5.6.6 Consider firewalling GKE worker nodes (Manual)
         5.6.7 Ensure Network Policy is Enabled and set as appropriate (Automated)
         5.6.8 Ensure use of Google-managed SSL Certificates (Manual)
      5.7 Logging
         5.7.1 Ensure Logging and Cloud Monitoring is Enabled (Automated)
         5.7.2 Enable Linux auditd logging (Manual)
      5.8 Authentication and Authorization
         5.8.1 Ensure authentication using Client Certificates is Disabled (Automated)
         5.8.2 Manage Kubernetes RBAC users with Google Groups for GKE (Manual)
         5.8.3 Ensure Legacy Authorization (ABAC) is Disabled (Automated)
      5.9 Storage
         5.9.1 Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD) (Manual)
      5.10 Other Cluster Configurations
         5.10.1 Ensure Kubernetes Web UI is Disabled (Automated)
         5.10.2 Ensure that Alpha clusters are not used for production workloads (Automated)
         5.10.3 Consider GKE Sandbox for running untrusted workloads (Manual)
         5.10.4 Ensure use of Binary Authorization (Automated)
         5.10.5 Enable Cloud Security Command Center (Cloud SCC) (Manual)
         5.10.6 Enable Security Posture (Manual)
Appendix: CIS Controls v7 IG 2 Mapped Recommendations
Appendix: CIS Controls v7 IG 3 Mapped Recommendations
Appendix: CIS Controls v7 Unmapped Recommendations
Appendix: CIS Controls v8 IG 1 Mapped Recommendations
Appendix: CIS Controls v8 IG 2 Mapped Recommendations
Appendix: CIS Controls v8 IG 3 Mapped Recommendations
Appendix: CIS Controls v8 Unmapped Recommendations
Appendix: Change History
Overview
All CIS Benchmarks focus on technical configuration settings used to maintain and/or
increase the security of the addressed technology, and they should be used in
conjunction with other essential cyber hygiene tasks like:
• Monitoring the base operating system for vulnerabilities and quickly updating with
the latest security patches
• Monitoring applications and libraries for vulnerabilities and quickly updating with
the latest security patches
In the end, the CIS Benchmarks are designed as a key component of a comprehensive
cybersecurity program.
This document provides prescriptive guidance for running Google Kubernetes Engine
(GKE) versions 1.27.3, 1.27.7, and 1.28.3 following recommended security controls. This
benchmark only includes controls which can be modified by an end user of GKE. For
information on GKE's performance against the Kubernetes CIS benchmarks, for items
which cannot be audited or modified, see the GKE documentation at
https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks.
For the latest GKE hardening guide, see g.co/gke/hardening.
To obtain the latest version of this guide, please visit www.cisecurity.org. If you have
questions, comments, or have identified ways to improve this guide, please write us at
support@cisecurity.org.
Intended Audience
This document is intended for cluster administrators, security specialists, auditors, and
any personnel who plan to develop, deploy, assess, or secure solutions that incorporate
Google Kubernetes Engine (GKE).
Consensus Guidance
This CIS Benchmark was created using a consensus review process comprised of a
global community of subject matter experts. The process combines real world
experience with data-based information to create technology specific guidance to assist
users to secure their environments. Consensus participants provide perspective from a
diverse set of backgrounds including consulting, software development, audit and
compliance, security research, operations, government, and legal.
Each CIS Benchmark undergoes two phases of consensus review. The first phase
occurs during initial Benchmark development. During this phase, subject matter experts
convene to discuss, create, and test working drafts of the Benchmark. This discussion
occurs until consensus has been reached on Benchmark recommendations. The
second phase begins after the Benchmark has been published. During this phase, all
feedback provided by the Internet community is reviewed by the consensus team for
incorporation in the Benchmark. If you are interested in participating in the consensus
process, please visit https://workbench.cisecurity.org/.
Typographical Conventions
The following typographical conventions are used throughout this guide:
Convention                      Meaning
Stylized Monospace font         Used for blocks of code, command, and script examples. Text should be interpreted exactly as presented.
Monospace font                  Used for inline code, commands, or examples. Text should be interpreted exactly as presented.
<italic font in brackets>       Italic texts set in angle brackets denote a variable requiring substitution for a real value.
Italic font                     Used to denote the title of a book, article, or other publication.
Note                            Additional information or caveats.
Recommendation Definitions
The following defines the various components included in a CIS recommendation as
applicable. If any of the components are not applicable it will be noted or the
component will not be included in the recommendation.
Title
Concise description for the recommendation's intended configuration.
Assessment Status
An assessment status is included for every recommendation. The assessment status
indicates whether the given recommendation can be automated or requires manual
steps to implement. Both statuses are equally important and are determined and
supported as defined below:
Automated
Represents recommendations for which assessment of a technical control can be fully
automated and validated to a pass/fail state. Recommendations will include the
necessary information to implement automation.
Manual
Represents recommendations for which assessment of a technical control cannot be
fully automated and requires all or some manual steps to validate that the configured
state is set as expected. The expected state can vary depending on the environment.
Profile
A collection of recommendations for securing a technology or a supporting platform.
Most benchmarks include at least a Level 1 and Level 2 Profile. Level 2 extends Level 1
recommendations and is not a standalone profile. The Profile Definitions section in the
benchmark provides the definitions as they pertain to the recommendations included for
the technology.
Description
Detailed information pertaining to the setting with which the recommendation is
concerned. In some cases, the description will include the recommended value.
Rationale Statement
Detailed reasoning for the recommendation to provide the user a clear and concise
understanding on the importance of the recommendation.
Impact Statement
Any security, functionality, or operational consequences that can result from following
the recommendation.
Audit Procedure
Systematic instructions for determining if the target system complies with the
recommendation.
Remediation Procedure
Systematic instructions for applying recommendations to the target system to bring it
into compliance according to the recommendation.
Default Value
Default value for the given setting in this recommendation, if known. If not known, either
not configured or not defined will be applied.
References
Additional documentation relative to the recommendation.
Additional Information
Supplementary information that does not correspond to any other field but may be
useful to the user.
Profile Definitions
The following configuration profiles are defined by this Benchmark:
• Level 1
• Level 2 (extends Level 1)
Acknowledgements
This Benchmark exemplifies the great things a community of users, vendors, and
subject matter experts can accomplish through consensus collaboration. The CIS
community thanks the entire consensus team with special recognition to the following
individuals who contributed greatly to the creation of this guide:
This benchmark was developed by Rowan Baker, Andrew Martin, and Kevin Ward, with
input from Randall Mowen, Greg Castle, Andrew Kiggins, Iulia Ion, Jordan Liggitt, Maya
Kaczorowski, Mark Wolters, Poonam Lamba, Michele Chubirka, Shannon Kularathana,
Vinayak Goyal.
Special Thanks to the Google team of: Poonam Lamba, Michele Chubirka, Shannon
Kularathana, Vinayak Goyal.
Author/s
Andrew Martin
Rowan Baker
Kevin Ward
Editor/s
Randall Mowen
Poonam Lamba
Michele Chubirka
Shannon Kularathana
Vinayak Goyal
Contributor/s
Rory Mccune
Jordan Liggitt
Liz Rice
Maya Kaczorowski
Mark Wolters
Iulia Ion
Andrew Kiggins
Greg Castle
Mark Larinde
Andrew Thompson
Gareth Boyes
Rachel Rice
Andrew Peabody
Recommendations
1 Control Plane Components
Under the GCP Shared Responsibility Model, Google manages the GKE control plane
components for you. The control plane includes the Kubernetes API server, etcd, and a
number of controllers. Google is responsible for securing the control plane, though you
might be able to configure certain options based on your requirements. Section 2 of this
Benchmark addresses these configurations.
You, as the end user, are responsible for securing your nodes, containers, and Pods, and
that is what this Benchmark specifically addresses.
This document describes how cluster control plane components are secured in Google
Kubernetes Engine.
2 Control Plane Configuration
2.1 Authentication and Authorization
2.1.1 Client certificate authentication should not be used for users
(Manual)
Profile Applicability:
• Level 1
Description:
Kubernetes provides the option to use client certificates for user authentication.
However, as there is no way to revoke these certificates when a user leaves an
organization or loses their credentials, they are not suitable for this purpose.
It is not possible to fully disable client certificate use within a cluster as it is used for
component-to-component authentication.
Rationale:
With any authentication mechanism, the ability to revoke credentials if they are
compromised or no longer required is a key control. Kubernetes client certificate
authentication does not allow for this due to a lack of support for certificate revocation.
See also Recommendation 5.8.2 for GKE specifically.
Impact:
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
Additional Information:
The lack of certificate revocation was flagged as a high-risk issue in the recent
Kubernetes security audit. Without this feature, client certificate authentication is not
suitable for end users.
3 Worker Nodes
This section consists of security recommendations for the components that run on GKE
worker nodes.
3.1 Worker Node Configuration Files
This section covers recommendations for configuration files on the worker nodes.
3.1.1 Ensure that the proxy kubeconfig file permissions are set to
644 or more restrictive (Manual)
Profile Applicability:
• Level 1
Description:
If kube-proxy is running, and if it is configured by a kubeconfig file, ensure that the proxy
kubeconfig file has permissions of 644 or more restrictive.
Rationale:
The kube-proxy kubeconfig file controls various parameters of the kube-proxy service
on the worker node. You should restrict its file permissions to maintain the integrity of
the file. The file should be writable only by the administrators on the system.
Impact:
Overly permissive file access increases the security risk to the platform.
Audit:
stat -c %a /var/lib/kube-proxy/kubeconfig
The output of the above command gives you the kubeconfig file's permissions.
Verify that if a file is specified and it exists, the permissions are 644 or more restrictive.
Remediation:
Run the below command (based on the file location on your system) on each worker
node. For example,
chmod 644 <proxy kubeconfig file>
Default Value:
References:
1. https://kubernetes.io/docs/admin/kube-proxy/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
3.1.2 Ensure that the proxy kubeconfig file ownership is set to
root:root (Manual)
Profile Applicability:
• Level 1
Description:
If kube-proxy is running, ensure that the file ownership of its kubeconfig file is set to
root:root.
Rationale:
The kubeconfig file for kube-proxy controls various parameters for the kube-proxy
service in the worker node. You should set its file ownership to maintain the integrity of
the file. The file should be owned by root:root.
Impact:
Overly permissive file access increases the security risk to the platform.
Audit:
stat -c %U:%G /var/lib/kube-proxy/kubeconfig
The output of the above command gives you the kubeconfig file's ownership. Verify that
the ownership is set to root:root.
Remediation:
Run the below command (based on the file location on your system) on each worker
node. For example,
chown root:root <proxy kubeconfig file>
Default Value:
References:
1. https://kubernetes.io/docs/admin/kube-proxy/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
3.1.3 Ensure that the kubelet configuration file has permissions
set to 600 (Manual)
Profile Applicability:
• Level 1
Description:
Ensure that if the kubelet configuration file exists, it has permissions of 600.
Rationale:
The kubelet reads various parameters, including security settings, from a config file
specified by the --config argument. If this file exists, you should restrict its file
permissions to maintain the integrity of the file. The file should be writable by only the
administrators on the system.
Impact:
Overly permissive file access increases the security risk to the platform.
Audit:
Using Command Line:
stat -c %a /home/kubernetes/kubelet-config.yaml
The output of the above command is the Kubelet config file's permissions. Verify that
the permissions are 600 or more restrictive.
Remediation:
Run the following command (using the kubelet config file location):
chmod 600 <kubelet_config_file>
Default Value:
The default permissions for the kubelet configuration file are 600.
References:
1. https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
3.1.4 Ensure that the kubelet configuration file ownership is set to
root:root (Manual)
Profile Applicability:
• Level 1
Description:
Ensure that if the kubelet configuration file exists, it is owned by root:root.
Rationale:
The kubelet reads various parameters, including security settings, from a config file
specified by the --config argument. If this file is specified you should restrict its file
permissions to maintain the integrity of the file. The file should be owned by root:root.
Impact:
Overly permissive file access increases the security risk to the platform.
Audit:
Using Command Line:
stat -c %U:%G /home/kubernetes/kubelet-config.yaml
The output of the above command is the kubelet config file's ownership. Verify that the
ownership is set to root:root.
Remediation:
Run the following command (using the config file location identified in the Audit step):
chown root:root <kubelet_config_file>
Default Value:
The default file ownership is root:root.
References:
1. https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
4 Policies
This section contains recommendations for various Kubernetes policies which are
important to the security of the environment.
4.1 RBAC and Service Accounts
4.1.1 Ensure that the cluster-admin role is only used where
required (Manual)
Profile Applicability:
• Level 1
Description:
The RBAC role cluster-admin provides wide-ranging powers over the environment and
should be used only where and when needed.
Rationale:
Kubernetes provides a set of default roles where RBAC is used. Some of these roles
such as cluster-admin provide wide-ranging privileges which should only be applied
where absolutely necessary. Roles such as cluster-admin allow super-user access to
perform any action on any resource. When used in a ClusterRoleBinding, it gives full
control over every resource in the cluster and in all namespaces. When used in a
RoleBinding, it gives full control over every resource in the rolebinding's namespace,
including the namespace itself.
Impact:
Care should be taken before removing any clusterrolebindings from the environment
to ensure they were not required for operation of the cluster. Specifically, modifications
should not be made to clusterrolebindings with the system: prefix as they are
required for the operation of system components.
Audit:
Obtain a list of the principals who have access to the cluster-admin role by reviewing
the clusterrolebinding output for each role binding that has access to the cluster-admin role.
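For example, the following sketch lists each clusterrolebinding that references the cluster-admin role together with its subjects (assuming the jq utility is available):
kubectl get clusterrolebindings -o json | jq -r '.items[]
  | select(.roleRef.name == "cluster-admin")
  | .metadata.name + ": " + ([.subjects[]? | .kind + "/" + .name] | join(", "))'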
Remediation:
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and if
they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower-privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
Default Value:
References:
1. https://kubernetes.io/docs/concepts/cluster-administration/
2. https://kubernetes.io/docs/reference/access-authn-authz/rbac/
4.1.2 Minimize access to secrets (Manual)
Profile Applicability:
• Level 1
Description:
The Kubernetes API stores secrets, which may be service account tokens for the
Kubernetes API or credentials used by workloads in the cluster. Access to these secrets
should be restricted to the smallest possible group of users to reduce the risk of
privilege escalation.
Rationale:
Inappropriate access to secrets stored within the Kubernetes cluster can allow for an
attacker to gain additional access to the Kubernetes cluster or external resources
whose credentials are stored as secrets.
Impact:
Care should be taken not to remove access to secrets from system components which
require it for their operation.
Audit:
Review the users who have get, list or watch access to secrets objects in the
Kubernetes API.
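As a starting point, the following sketch lists ClusterRoles that grant these verbs on secrets (assuming jq is available; repeat with kubectl get roles --all-namespaces for namespaced Roles):
kubectl get clusterroles -o json | jq -r '.items[]
  | select(any(.rules[]?;
      any(.resources[]?; . == "secrets" or . == "*")
      and any(.verbs[]?; IN("get","list","watch","*"))))
  | .metadata.name'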
Remediation:
Where possible, remove get, list and watch access to secret objects in the cluster.
Default Value:
CLUSTERROLEBINDING                                      SUBJECT                               TYPE            SA-NAMESPACE
cluster-admin                                           system:masters                        Group
system:controller:clusterrole-aggregation-controller    clusterrole-aggregation-controller    ServiceAccount  kube-system
system:controller:expand-controller                     expand-controller                     ServiceAccount  kube-system
system:controller:generic-garbage-collector             generic-garbage-collector             ServiceAccount  kube-system
system:controller:namespace-controller                  namespace-controller                  ServiceAccount  kube-system
system:controller:persistent-volume-binder              persistent-volume-binder              ServiceAccount  kube-system
system:kube-controller-manager                          system:kube-controller-manager        User
4.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
Profile Applicability:
• Level 1
Description:
Kubernetes Roles and ClusterRoles provide access to resources based on sets of
objects and actions that can be taken on those objects. It is possible to set either of
these to be the wildcard "*", which matches all items.
Use of wildcards is not optimal from a security perspective as it may allow for
inadvertent access to be granted when new resources are added to the Kubernetes API
either as CRDs or in later versions of the product.
Rationale:
The principle of least privilege recommends that users are provided only the access
required for their role and nothing more. The use of wildcard rights grants is likely to
provide excessive rights to the Kubernetes API.
Audit:
Retrieve the roles defined across each namespace in the cluster and review for
wildcards:
kubectl get roles --all-namespaces -o yaml
Retrieve the cluster roles defined in the cluster and review for wildcards:
kubectl get clusterroles -o yaml
Remediation:
Where possible, replace any use of wildcards in clusterroles and roles with specific
objects or actions.
References:
1. https://kubernetes.io/docs/reference/access-authn-authz/rbac/
4.1.4 Minimize access to create pods (Manual)
Profile Applicability:
• Level 1
Description:
The ability to create pods in a namespace can provide a number of opportunities for
privilege escalation, such as assigning privileged service accounts to these pods or
mounting hostPaths with access to sensitive data (unless Pod Security Admission or
another policy mechanism is in place to restrict this access).
As such, access to create new pods should be restricted to the smallest possible group
of users.
Rationale:
The ability to create pods in a cluster opens up possibilities for privilege escalation and
should be restricted, where possible.
Impact:
Care should be taken not to remove access to pods from system components which
require it for their operation.
Audit:
Review the users who have create access to pod objects in the Kubernetes API.
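To spot-check a specific principal (placeholders to be replaced; requires impersonation rights):
kubectl auth can-i create pods --as=<user-or-service-account> -n <namespace>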
Remediation:
Where possible, remove create access to pod objects in the cluster.
Default Value:
CLUSTERROLEBINDING                                      SUBJECT                               TYPE            SA-NAMESPACE
cluster-admin                                           system:masters                        Group
system:controller:clusterrole-aggregation-controller    clusterrole-aggregation-controller    ServiceAccount  kube-system
system:controller:daemon-set-controller                 daemon-set-controller                 ServiceAccount  kube-system
system:controller:job-controller                        job-controller                        ServiceAccount  kube-system
system:controller:persistent-volume-binder              persistent-volume-binder              ServiceAccount  kube-system
system:controller:replicaset-controller                 replicaset-controller                 ServiceAccount  kube-system
system:controller:replication-controller                replication-controller                ServiceAccount  kube-system
system:controller:statefulset-controller                statefulset-controller                ServiceAccount  kube-system
4.1.5 Ensure that default service accounts are not actively used
(Manual)
Profile Applicability:
• Level 1
Description:
The default service account should not be used to ensure that rights granted to
applications can be more easily audited and reviewed.
Rationale:
Kubernetes provides a default service account which is used by cluster workloads
where no specific service account is assigned to the pod.
Where access to the Kubernetes API from a pod is required, a specific service account
should be created for that pod, and rights granted to that service account.
The default service account should be configured such that it does not provide a service
account token and does not have any explicit rights assignments.
Impact:
All workloads which require access to the Kubernetes API will require an explicit service
account to be created.
Audit:
For each namespace in the cluster, review the rights assigned to the default service
account and ensure that it has no roles or cluster roles bound to it apart from the
defaults.
Additionally ensure that the automountServiceAccountToken: false setting is in place
for each default service account.
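For example, the following command prints each namespace's default service account together with its automountServiceAccountToken setting (an empty second column means the field is unset):
kubectl get serviceaccounts --all-namespaces -o jsonpath='{range .items[?(@.metadata.name=="default")]}{.metadata.namespace}{"\t"}{.automountServiceAccountToken}{"\n"}{end}'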
Remediation:
Create explicit service accounts wherever a Kubernetes workload requires specific
access to the Kubernetes API server. Modify the configuration of each default service
account to include this value:
automountServiceAccountToken: false
Default Value:
By default the default service account allows for its service account token to be
mounted in pods in its namespace.
References:
1. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-
account/
4.1.6 Ensure that Service Account Tokens are only mounted
where necessary (Manual)
Profile Applicability:
• Level 1
Description:
Service account tokens should not be mounted in pods except where the workload
running in the pod explicitly needs to communicate with the API server.
Rationale:
Mounting service account tokens inside pods can provide an avenue for privilege
escalation attacks where an attacker is able to compromise a single pod in the cluster.
Avoiding mounting these tokens removes this attack avenue.
Impact:
Pods running without service account tokens will not be able to communicate with the
API server, except where the resource is available to unauthenticated principals.
Audit:
Review pod and service account objects in the cluster and ensure that the option below
is set, unless the resource explicitly requires this access.
automountServiceAccountToken: false
Remediation:
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
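A minimal sketch of a pod definition with token mounting disabled (the name and image are placeholders); the same field can also be set on the ServiceAccount object itself:
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: registry.example.com/app:1.0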
Default Value:
By default, all pods get a service account token mounted in them.
References:
1. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-
account/
4.1.7 Avoid use of system:masters group (Manual)
Profile Applicability:
• Level 1
Description:
The special group system:masters should not be used to grant permissions to any user
or service account, except where strictly necessary (e.g. bootstrapping access prior to
RBAC being fully available).
Rationale:
The system:masters group has unrestricted access to the Kubernetes API hard-coded
into the API server source code. An authenticated user who is a member of this group
cannot have their access reduced, even if all bindings and cluster role bindings which
mention it are removed.
When combined with client certificate authentication, use of this group can allow for
irrevocable cluster-admin level credentials to exist for a cluster.
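To find cluster role bindings that reference the group (a sketch, assuming jq is available; note that certificate-based membership of the group cannot be discovered via the API):
kubectl get clusterrolebindings -o json | jq -r '.items[]
  | select(any(.subjects[]?; .kind == "Group" and .name == "system:masters"))
  | .metadata.name'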
Impact:
Remediation:
Remove the system:masters group from all users in the cluster.
Default Value:
By default some clusters will create a "break glass" client certificate which is a member
of this group. Access to this client certificate should be carefully controlled and it should
not be used for general cluster operations.
References:
1. https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/rbac/escalation_check.go#L38
4.1.8 Limit use of the Bind, Impersonate and Escalate
permissions in the Kubernetes cluster (Manual)
Profile Applicability:
• Level 1
Description:
Cluster roles and roles with the impersonate, bind or escalate permissions should not
be granted unless strictly required. Each of these permissions allows a particular subject
to escalate their privileges beyond those explicitly granted by cluster administrators.
Rationale:
The impersonate privilege allows a subject to impersonate other users gaining their
rights to the cluster. The bind privilege allows the subject to add a binding to a cluster
role or role which escalates their effective permissions in the cluster. The escalate
privilege allows a subject to modify cluster roles to which they are bound, increasing
their rights to that level.
Each of these permissions has the potential to allow for privilege escalation to
cluster-admin level.
Impact:
There are some cases where these permissions are required for cluster service
operation, and care should be taken before removing these permissions from system
service accounts.
Audit:
Review the users who have access to cluster roles or roles which provide the
impersonate, bind or escalate privileges.
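For example, the following sketch lists ClusterRoles containing any of these verbs (assuming jq is available; repeat with kubectl get roles --all-namespaces for namespaced Roles):
kubectl get clusterroles -o json | jq -r '.items[]
  | select(any(.rules[]?.verbs[]?; IN("impersonate","bind","escalate")))
  | .metadata.name'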
Remediation:
Where possible, remove the impersonate, bind and escalate rights from subjects.
Default Value:
References:
1. https://www.impidio.com/blog/kubernetes-rbac-security-pitfalls
2. https://raesene.github.io/blog/2020/12/12/Escalating_Away/
3. https://raesene.github.io/blog/2021/01/16/Getting-Into-A-Bind-with-Kubernetes/
4.1.9 Minimize access to create persistent volumes (Manual)
Profile Applicability:
• Level 1
Description:
The ability to create persistent volumes in a cluster can provide an opportunity for
privilege escalation, via the creation of hostPath volumes. As persistent volumes are not
covered by Pod Security Admission, a user with access to create persistent volumes
may be able to get access to sensitive files from the underlying host even where
restrictive Pod Security Admission policies are in place.
Rationale:
The ability to create persistent volumes in a cluster opens up possibilities for privilege
escalation and should be restricted, where possible.
Audit:
Review the users who have create access to PersistentVolume objects in the
Kubernetes API.
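To spot-check a specific principal (placeholder to be replaced):
kubectl auth can-i create persistentvolumes --as=<user-or-service-account>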
Remediation:
Where possible, remove create access to PersistentVolume objects in the cluster.
References:
1. https://kubernetes.io/docs/concepts/security/rbac-good-practices/#persistent-volume-creation
4.1.10 Minimize access to the proxy sub-resource of nodes
(Manual)
Profile Applicability:
• Level 1
Description:
Users with access to the proxy sub-resource of node objects automatically have
permissions to use the Kubelet API, which may allow for privilege escalation or bypass
of cluster security controls such as audit logging.
The Kubelet provides an API which includes rights to execute commands in any
container running on the node. Access to this API is covered by permissions to the main
Kubernetes API via the node object. The proxy sub-resource specifically allows
wide-ranging access to the Kubelet API.
Direct access to the Kubelet API bypasses controls like audit logging (there is no audit
log of Kubelet API access) and admission control.
Rationale:
The ability to use the proxy sub-resource of node objects opens up possibilities for
privilege escalation and should be restricted, where possible.
Audit:
Review the users who have access to the proxy sub-resource of node objects in the
Kubernetes API.
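To spot-check a specific principal (placeholder to be replaced):
kubectl auth can-i get nodes --subresource=proxy --as=<user-or-service-account>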
Remediation:
Where possible, remove access to the proxy sub-resource of node objects.
References:
1. https://kubernetes.io/docs/concepts/security/rbac-good-practices/#access-to-proxy-subresource-of-nodes
2. https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization
4.1.11 Minimize access to the approval sub-resource of
certificatesigningrequests objects (Manual)
Profile Applicability:
• Level 1
Description:
Users with access to update the approval sub-resource of
certificatesigningrequest objects can approve new client certificates for the
Kubernetes API, effectively allowing them to create new high-privileged user accounts.
This can allow for privilege escalation to full cluster administrator, depending on which
users are configured in the cluster.
Rationale:
The ability to update certificate signing requests should be limited.
Audit:
Review the users who have access to update the approval sub-resource of
certificatesigningrequest objects in the Kubernetes API.
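To spot-check a specific principal (placeholder to be replaced):
kubectl auth can-i update certificatesigningrequests --subresource=approval --as=<user-or-service-account>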
Remediation:
Where possible, remove access to the approval sub-resource of
certificatesigningrequest objects.
References:
1. https://kubernetes.io/docs/concepts/security/rbac-good-practices/#csrs-and-certificate-issuing
4.1.12 Minimize access to webhook configuration objects
(Manual)
Profile Applicability:
• Level 1
Description:
Users with rights to create/modify/delete validatingwebhookconfigurations or
mutatingwebhookconfigurations can control webhooks that can read any object
admitted to the cluster, and in the case of mutating webhooks, also mutate admitted
objects. This could allow for privilege escalation or disruption of the operation of the
cluster.
Rationale:
The ability to manage webhook configuration should be limited.
Audit:
Review the users who have access to validatingwebhookconfigurations or
mutatingwebhookconfigurations objects in the Kubernetes API.
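To spot-check a specific principal (placeholder to be replaced):
kubectl auth can-i update validatingwebhookconfigurations --as=<user-or-service-account>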
Remediation:
Where possible, remove access to the validatingwebhookconfigurations or
mutatingwebhookconfigurations objects.
References:
1. https://kubernetes.io/docs/concepts/security/rbac-good-practices/#control-admission-webhooks
4.1.13 Minimize access to the service account token creation
(Manual)
Profile Applicability:
• Level 1
Description:
Users with rights to create new service account tokens at a cluster level can create
long-lived privileged credentials in the cluster. This could allow for privilege escalation
and persistent access to the cluster, even if the user's account has been revoked.
Rationale:
The ability to create service account tokens should be limited.
Audit:
Review the users who have access to create the token sub-resource of serviceaccount
objects in the Kubernetes API.
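To spot-check a specific principal (placeholders to be replaced):
kubectl auth can-i create serviceaccounts --subresource=token --as=<user-or-service-account> -n <namespace>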
Remediation:
Where possible, remove access to the token sub-resource of serviceaccount objects.
References:
1. https://kubernetes.io/docs/concepts/security/rbac-good-practices/#token-request
4.2 Pod Security Standards
Pod Security Standards (PSS) are recommendations for securing deployed workloads
to reduce the risks of container breakout. There are a number of ways of implementing
PSS, including the built-in Pod Security Admission controller, or external policy control
systems which integrate with Kubernetes via validating and mutating webhooks.
4.2.1 Ensure that the cluster enforces Pod Security Standard
Baseline profile or stricter for all namespaces. (Manual)
Profile Applicability:
• Level 1
Description:
The Pod Security Standard Baseline profile defines a baseline for container security.
You can enforce this by using the built-in Pod Security Admission controller.
Rationale:
Without an active mechanism to enforce the Pod Security Standard Baseline profile, it is
not possible to limit the use of containers with access to underlying cluster nodes, via
mechanisms like privileged containers, or the use of hostPath volume mounts.
Audit:
Run the following command to list the namespaces that don't have the baseline
policy enforced.
diff \
<(kubectl get namespace -l pod-security.kubernetes.io/enforce=baseline -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}') \
<(kubectl get namespace -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')
Remediation:
Ensure that Pod Security Admission is in place for every namespace which
contains user workloads.
Run the following command to enforce the Baseline profile in a namespace:-
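The following is a minimal example using the built-in Pod Security Admission enforce label (replace <namespace>):
kubectl label --overwrite namespace <namespace> pod-security.kubernetes.io/enforce=baseline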
Default Value:
References:
1. https://kubernetes.io/docs/concepts/security/pod-security-admission
2. https://kubernetes.io/docs/concepts/security/pod-security-standards
3. https://cloud.google.com/kubernetes-engine/docs/concepts/about-
security-posture-dashboard
4.3 Network Policies and CNI
4.3.1 Ensure that the CNI in use supports Network Policies (Manual)
Profile Applicability:
• Level 1
Description:
There are a variety of CNI plugins available for Kubernetes. If the CNI in
use does not support Network Policies it may not be possible to effectively
restrict traffic in the cluster.
Rationale:
Kubernetes network policies are enforced by the CNI plugin in use. As such it
is important to ensure that the CNI plugin supports both Ingress and Egress
network policies.
See also recommendation 5.6.7.
Impact:
None
Audit:
Review the documentation of the CNI plugin in use by the cluster, and confirm
that it supports Ingress and Egress network policies.
Remediation:
To use a CNI plugin with Network Policy, enable Network Policy in GKE, and
the CNI plugin will be updated. See recommendation 5.6.7.
Default Value:
References:
1. https://kubernetes.io/docs/concepts/services-networking/network-
policies/
2. https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-
net/network-plugins/
3. https://cloud.google.com/kubernetes-engine/docs/concepts/network-
overview
Additional Information:
4.3.2 Ensure that all Namespaces have Network Policies defined (Manual)
Profile Applicability:
• Level 2
Description:
Rationale:
Impact:
Once network policies are in use within a given namespace, traffic not
explicitly allowed by a network policy will be denied. As such it is
important to ensure that, when introducing network policies, legitimate
traffic is not blocked.
Audit:
Run the below command and review the NetworkPolicy objects created in the
cluster.
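For example:
kubectl get networkpolicy --all-namespaces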
Ensure that each namespace defined in the cluster has at least one Network
Policy.
Remediation:
Follow the documentation and create NetworkPolicy objects as needed for your
environment.
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy#creating_a_network_policy
2. https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
3. https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview
4.4 Secrets Management
4.4.1 Prefer using secrets as files over secrets as environment variables
(Manual)
Profile Applicability:
• Level 2
Description:
Rationale:
Impact:
Audit:
Run the following command to find references to objects which use environment
variables defined from secrets.
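One possible check, assuming jq is available (this covers env entries; secrets referenced via envFrom would need a similar query):
kubectl get pods --all-namespaces -o json | jq -r '.items[]
  | select(any(.spec.containers[].env[]?; .valueFrom.secretKeyRef))
  | .metadata.namespace + "/" + .metadata.name'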
Remediation:
Default Value:
References:
1. https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
Additional Information:
Mounting secrets as volumes has the additional benefit that secret values can
be updated without restarting the pod.
CIS Controls:
• v8: 3 Data Protection: Develop processes and technical controls to identify, classify, securely handle, retain, and dispose of data.
• v7: 13 Data Protection
4.4.2 Consider external secret storage (Manual)
Profile Applicability:
• Level 2
Description:
Consider the use of an external secrets storage and management system instead
of using Kubernetes Secrets directly, if more complex secret management is
required. Ensure the solution requires authentication to access secrets, has
auditing of access to and use of secrets, and encrypts secrets. Some
solutions also make it easier to rotate secrets.
Rationale:
Impact:
None
Audit:
Remediation:
Refer to the secrets management options offered by the cloud service provider
or a third-party secrets management solution.
Default Value:
References:
1. https://kubernetes.io/docs/concepts/configuration/secret/
2. https://cloud.google.com/secret-manager/docs/overview
CIS Controls:
• v8: 3 Data Protection: Develop processes and technical controls to identify, classify, securely handle, retain, and dispose of data.
• v7: 13 Data Protection
4.5 Extensible Admission Control
4.5.1 Configure Image Provenance using ImagePolicyWebhook admission
controller (Manual)
Profile Applicability:
• Level 2
Description:
Rationale:
Impact:
Audit:
Review the pod definitions in the cluster and verify that image provenance is
configured as appropriate.
Also see recommendation 5.10.4 (Binary Authorization).
Remediation:
Default Value:
References:
1. https://kubernetes.io/docs/concepts/containers/images/
2. https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
4.6 General Policies
4.6.1 Create administrative boundaries between resources using namespaces
(Manual)
Profile Applicability:
• Level 1
Description:
Rationale:
Limiting the scope of user permissions can reduce the impact of mistakes or
malicious activities. A Kubernetes namespace allows you to partition created
resources into logically named groups. Resources created in one namespace can
be hidden from other namespaces. By default, each resource created by a user
in a Kubernetes cluster runs in a default namespace, called default. You can
create additional namespaces and attach resources and users to them. You can
use Kubernetes Authorization plugins to create policies that segregate access
to namespace resources between different users.
Impact:
Audit:
Run the below command and review the namespaces created in the cluster.
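For example:
kubectl get namespaces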
Remediation:
Follow the documentation and create namespaces for objects in your deployment
as you need them.
Default Value:
References:
1. https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#viewing-namespaces
2. http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html
3. https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/589-efficient-node-heartbeats
CIS Controls:
• v7: 12 Boundary Defense
4.6.2 Ensure that the seccomp profile is set to RuntimeDefault in the pod
definitions (Manual)
Profile Applicability:
• Level 2
Description:
Rationale:
Seccomp (secure computing mode) is used to restrict the set of system calls
applications can make, allowing cluster administrators greater control over
the security of workloads running in the cluster. Kubernetes disables seccomp
profiles by default for historical reasons. It should be enabled to ensure
that the workloads have restricted actions available within the container.
Impact:
If the RuntimeDefault seccomp profile is too restrictive for you, you would
have to create/manage your own Localhost seccomp profiles.
Audit:
Review the pod definitions in the cluster. Each pod definition should include lines such as the below:
securityContext:
  seccompProfile:
    type: RuntimeDefault
Remediation:
Use security context to enable the RuntimeDefault seccomp profile in your pod
definitions. An example is as below:
securityContext:
  seccompProfile:
    type: RuntimeDefault
Default Value:
References:
1. https://kubernetes.io/docs/tutorials/security/seccomp/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/seccomp-in-gke
4.6.3 Apply Security Context to Pods and Containers (Manual)
Profile Applicability:
• Level 2
Description:
Rationale:
A security context defines the operating system security settings (uid, gid,
capabilities, SELinux role, etc.) applied to a container. When designing
containers and pods, make sure that the security context is configured for
pods, containers, and volumes. A security context is a property defined in
the deployment YAML. It controls the security parameters assigned to the
pod/container/volume. There are two levels of security context: pod-level
security context, and container-level security context.
Impact:
If you incorrectly apply security contexts, there may be issues running the
pods.
Audit:
Review the pod definitions in the cluster and verify that the security
contexts have been defined as appropriate.
Remediation:
Follow the Kubernetes documentation and apply security contexts to your pods.
For a suggested list of security contexts, you may refer to the CIS Google
Container-Optimized OS Benchmark.
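An illustrative pod definition applying both pod-level and container-level security contexts (the name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:1.0
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]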
Default Value:
References:
1. https://kubernetes.io/docs/concepts/workloads/pods/
2. https://kubernetes.io/docs/concepts/containers/
3. https://kubernetes.io/docs/tasks/configure-pod-container/security-
context/
4. https://learn.cisecurity.org/benchmarks
CIS Controls:
4.6.4 The default namespace should not be used (Manual)
Profile Applicability:
• Level 2
Description:
Rationale:
Impact:
None
Audit:
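A simple way to review what is currently running in the default namespace, assuming kubectl access:

kubectl get all -n default

Resources returned here, other than the built-in kubernetes service, suggest the default namespace is in use.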
Remediation:
Default Value:
CIS Controls:
5 Managed services
5.1 Image Registry and Image Scanning
5.1.1 Ensure Image Vulnerability Scanning is enabled (Automated)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
None.
Audit:
1. Go to AR by visiting https://console.cloud.google.com/artifacts
2. Select Settings and check if Vulnerability scanning is Enabled.
Using Command Line:
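One plausible command-line check is whether the Container Scanning API is enabled on the project (this specific command is an assumption, not taken from the benchmark text):

# containerscanning.googleapis.com is the Artifact Analysis scanning API
gcloud services list --enabled --filter="name:containerscanning.googleapis.com"

If the API appears in the output, vulnerability scanning is enabled.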
Remediation:
Default Value:
References:
1. https://cloud.google.com/artifact-registry/docs/analysis
2. https://cloud.google.com/artifact-analysis/docs/os-overview
3. https://console.cloud.google.com/marketplace/product/google/containerregistry.googleapis.com
4. https://cloud.google.com/kubernetes-engine/docs/concepts/about-
configuration-scanning
5. https://containersecurity.googleapis.com
CIS Controls:
5.1.2 Minimize user access to Container Image repositories (Manual)
Profile Applicability:
• Level 1
Description:
Note: GCR is now deprecated, see the references for more details.
Restrict user access to GCR or AR, limiting interaction with build images to
only authorized personnel and service accounts.
Rationale:
Weak access control to GCR or AR may allow malicious users to replace built
images with vulnerable or back-doored containers.
Impact:
Care should be taken not to remove access to GCR or AR for accounts that
require this for their operation. Any account granted the Storage Object
Viewer role at the project level can view all objects stored in GCS for the
project.
Audit:
Users may have permissions to use Service Accounts and thus could inherit
privileges on the AR repositories. To check the accounts that could do this:
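A project-level query of the same form as the one shown below for GCR would apply, for example:

gcloud projects get-iam-policy <project_id> \
 --flatten="bindings[].members" \
 --format='table(bindings.members)' \
 --filter="bindings.role:roles/iam.serviceAccountUser"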
Note that other privileged project level roles will have the ability to write
and modify AR repositories. Consult the GCP CIS benchmark and IAM
documentation for further reference.
Using Command Line:
gcloud artifacts repositories get-iam-policy <repository-name> --location <repository-location>
The output of the command will return roles associated with the AR repository
and which members have those roles.
For Images Hosted in GCR:
Users may have permissions to use Service Accounts and thus could inherit
privileges on the GCR Bucket. To check the accounts that could do this:
Note that other privileged project level roles will have the ability to write
and modify objects and the GCR bucket. Consult the GCP CIS benchmark and IAM
documentation for further reference.
Using Command Line:
To check GCR bucket-specific permissions:
gsutil iam get gs://artifacts.<project_id>.appspot.com
The output of the command will return roles associated with the GCR bucket
and which members have those roles.
Additionally, run the following to identify users and service accounts that
hold privileged roles at the project level, and thus inherit these privileges
within the GCR bucket:
gcloud projects get-iam-policy <project_id> \
--flatten="bindings[].members" \
--format='table(bindings.members)' \
--filter="bindings.role:roles/iam.serviceAccountUser"
Remediation:
For a User or Service account with Project level permissions inherited by the
GCR bucket, or the Service Account User Role:
4. If required add the Storage Object Viewer role - note with caution that
this permits the account to view all objects stored in GCS for the
project.
• <type> can be one of the following:
  o user, if the <email_address> is a Google account.
  o serviceAccount, if <email_address> specifies a Service account.
• <email_address> can be one of the following:
  o a Google account (for example, someone@example.com).
  o a Cloud IAM service account.
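The placeholders above describe a bucket-level IAM change of the form used in recommendation 5.1.3; for step 4, the role would plausibly be objectViewer:

gsutil iam ch <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com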
Default Value:
References:
1. https://cloud.google.com/container-registry/docs/
2. https://cloud.google.com/kubernetes-engine/docs/how-to/service-accounts
3. https://cloud.google.com/kubernetes-engine/docs/how-to/iam
4. https://cloud.google.com/artifact-registry/docs/access-control#grant
CIS Controls:
5.1.3 Minimize cluster access to read-only for Container Image repositories
(Manual)
Profile Applicability:
• Level 1
Description:
Note: GCR is now deprecated, see the references for more details.
Configure the Cluster Service Account with Artifact Registry Viewer Role to
only allow read-only access to AR repositories. Configure the Cluster Service
Account with Storage Object Viewer Role to only allow read-only access to
GCR.
Rationale:
The Cluster Service Account does not require administrative access to GCR or
AR, only requiring pull access to containers to deploy onto GKE. Restricting
permissions follows the principles of least privilege and prevents
credentials from being abused beyond the required role.
Impact:
A separate dedicated service account may be required for use by build servers
and other robot users pushing or managing container images.
Any account granted the Storage Object Viewer role at the project level can
view all objects stored in GCS for the project.
Audit:
1. Go to Storage Browser by visiting
https://console.cloud.google.com/storage/browser
2. From the list of storage buckets, select
artifacts.<project_id>.appspot.com for the GCR bucket
3. Under the Permissions tab, review the role for GKE Service account and
ensure that only the Storage Object Viewer role is set.
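For AR, the repository IAM policy can be inspected as in recommendation 5.1.2, verifying that the Cluster Service Account holds only the read-only viewer role described above:

gcloud artifacts repositories get-iam-policy <repository-name> --location <repository-location>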
Remediation:
gcloud artifacts repositories remove-iam-policy-binding <repository> \
--location <repository-location> \
--member='serviceAccount:<email-address>' \
--role='<role-name>'
For an account that inherits access to the bucket through Project level
permissions:
• <type> can be one of the following:
  o user, if the <email_address> is a Google account.
  o serviceAccount, if <email_address> specifies a Service account.
• <email_address> can be one of the following:
  o a Google account (for example, someone@example.com).
  o a Cloud IAM service account.
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com
For an account that inherits access to the GCR Bucket through Project level
permissions, modify the Project's IAM policy file accordingly, then upload it
using:
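A sketch of that upload step, using gcloud's project IAM surface:

# policy.json is the edited project IAM policy file (name illustrative)
gcloud projects set-iam-policy <project_id> policy.json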
Default Value:
The default permissions for the cluster Service account is dependent on the
initial configuration and IAM policy.
References:
1. https://cloud.google.com/container-registry/docs/
2. https://cloud.google.com/kubernetes-engine/docs/how-to/service-accounts
3. https://cloud.google.com/kubernetes-engine/docs/how-to/iam
CIS Controls:
5.1.4 Minimize Container Registries to only those approved (Manual)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
Audit:
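The output below would come from checking the cluster's Binary Authorization state, for example (field name assumed from the clusters API):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.binaryAuthorization'

The output should return: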
{
"enabled": true
}
Then assess the contents of the policy:
gcloud container binauthz policy export > current-policy.yaml
Ensure that the current policy is not configured to allow all images
(evaluationMode: ALWAYS_ALLOW).
Review the list of admissionWhitelistPatterns for unauthorized container
registries.
cat current-policy.yaml
admissionWhitelistPatterns:
...
defaultAdmissionRule:
  evaluationMode: ALWAYS_ALLOW
Remediation:
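One way to remediate is to edit the exported policy, removing evaluationMode: ALWAYS_ALLOW and any unauthorized admissionWhitelistPatterns entries, then re-import it; a sketch:

gcloud container binauthz policy import current-policy.yaml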
Default Value:
References:
1. https://cloud.google.com/binary-authorization/docs/policy-yaml-
reference
2. https://cloud.google.com/binary-authorization/docs/setting-up
CIS Controls:
5.2 Identity and Access Management (IAM)
This section contains recommendations relating to using Cloud IAM with GKE.
5.2.1 Ensure GKE clusters are not running using the Compute Engine default
service account (Automated)
Profile Applicability:
• Level 1
Description:
Create and use minimally privileged Service accounts to run GKE cluster nodes
instead of using the Compute Engine default Service account. Unnecessary
permissions could be abused in the case of a node compromise.
Rationale:
Impact:
Audit:
To check the permissions allocated to the service account are the minimum
required for cluster operation:
• Logs Writer (roles/logging.logWriter)
• Monitoring Metric Writer (roles/monitoring.metricWriter)
• Monitoring Viewer (roles/monitoring.viewer)
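A possible sequence for this check (the jq filter and role query are illustrative):

# Identify the service account used by the node pool; "default" means the
# Compute Engine default service account is in use.
gcloud container node-pools describe <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --format json | jq '.config.serviceAccount'

# Review the project-level roles bound to that service account.
gcloud projects get-iam-policy <project_id> --flatten="bindings[].members" --filter="bindings.members:<sa_email>" --format='table(bindings.role)'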
Remediation:
8. Click DONE.
Note: The workloads will need to be migrated to the new Node pool, and the
old node pools that use the default service account should be deleted to
complete the remediation.
Using Command Line:
To create a minimally privileged service account:
gcloud iam service-accounts create <node_sa_name> --display-name "GKE Node Service Account"
export NODE_SA_EMAIL=$(gcloud iam service-accounts list --format='value(email)' --filter='displayName:GKE Node Service Account')
Grant the following roles to the service account:
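Based on the minimum roles listed in the Audit section, the grants would plausibly be:

gcloud projects add-iam-policy-binding <project_id> --member serviceAccount:$NODE_SA_EMAIL --role roles/logging.logWriter
gcloud projects add-iam-policy-binding <project_id> --member serviceAccount:$NODE_SA_EMAIL --role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding <project_id> --member serviceAccount:$NODE_SA_EMAIL --role roles/monitoring.viewer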
Default Value:
By default, nodes use the Compute Engine default service account when you
create a new cluster.
References:
1. https://cloud.google.com/compute/docs/access/service-
accounts#compute_engine_default_service_account
CIS Controls:
5.2.2 Prefer using dedicated GCP Service Accounts and Workload Identity
(Manual)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
Workload Identity replaces the need to use Metadata Concealment and as such,
the two approaches are incompatible. The sensitive metadata protected by
Metadata Concealment is also protected by Workload Identity.
When Workload Identity is enabled, the Compute Engine default Service account
cannot be used. Correspondingly, Workload Identity cannot be used with Pods
running in the host network. Workloads may also need to be modified in order
for them to use Workload Identity, as described within:
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
GKE infrastructure pods such as Stackdriver will continue to use the Node's
Service account.
Audit:
3. Additionally, click on each Node pool within each cluster to observe
the Node pool Details pane, and ensure that the GKE Metadata Server is
'Enabled'.
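Using the command line, the cluster-level configuration can be inspected, for example (output shape assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.workloadIdentityConfig'

For the cluster, ensure the following is set: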
workloadIdentityConfig:
  identityNamespace: <project_id>.svc.id.goog
For each Node pool, ensure the following is set:
workloadMetadataConfig:
  nodeMetadata: GKE_METADATA_SERVER
Each Kubernetes workload requiring Google Cloud API access will need to be
manually audited to ensure that Workload Identity is being used and not some
other method.
Remediation:
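A sketch of enabling Workload Identity on an existing cluster, using the current gcloud flag for this setting:

gcloud container clusters update <cluster_name> --zone <compute_zone> --workload-pool=<project_id>.svc.id.goog

Node pools then need the GKE Metadata Server enabled, and workloads may need the modifications described above.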
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
CIS Controls:
5.3 Cloud Key Management Service (Cloud KMS)
This section contains recommendations relating to using Cloud KMS with GKE.
5.3.1 Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS
(Automated)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
To use the Cloud KMS CryptoKey to protect etcd in the cluster, the
'Kubernetes Engine Service Agent' Service account must hold the 'Cloud KMS
CryptoKey Encrypter/Decrypter' role.
Audit:
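One way to check from the command line (the output format shown is assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format 'value(databaseEncryption)'

For a cluster with Application-layer Secrets Encryption enabled, the output resembles: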
keyName=projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
state=ENCRYPTED
Remediation:
• A key ring
• A key
• A GKE service account with Cloud KMS CryptoKey Encrypter/Decrypter role
gcloud kms keys add-iam-policy-binding <key_name> --location <location> --keyring <ring_name> --member serviceAccount:<service_account_name> --role roles/cloudkms.cryptoKeyEncrypterDecrypter --project <key_project_id>
To create a new cluster with Application-layer Secrets Encryption:
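A sketch of that command (the --database-encryption-key flag per the gcloud reference):

gcloud container clusters create <cluster_name> --zone <compute_zone> --database-encryption-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>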
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-
secrets
CIS Controls:
5.4 Node Metadata
5.4.1 Ensure legacy Compute Engine instance metadata APIs are Disabled
(Automated)
Profile Applicability:
• Level 1
Description:
Disable the legacy GCE instance metadata APIs for GKE nodes. Under some
circumstances, these can be used from within a pod to extract the node's
credentials.
Rationale:
The legacy GCE metadata endpoint allows simple HTTP requests to be made
returning sensitive information. To prevent the enumeration of metadata
endpoints and data exfiltration, the legacy metadata endpoint must be
disabled.
Without requiring a custom HTTP header when accessing the legacy GCE metadata
endpoint, a flaw in an application that allows an attacker to trick the code
into retrieving the contents of an attacker-specified web URL could provide a
simple method for enumeration and potential credential exfiltration. By
requiring a custom HTTP header, the attacker needs to exploit an application
flaw that allows them to control the URL and also add custom headers in order
to carry out this attack successfully.
Impact:
Any workloads using the legacy GCE metadata endpoint will no longer be able
to retrieve metadata from the endpoint. Use Workload Identity instead.
Audit:
gcloud container clusters describe $CLUSTER_NAME \
--zone $COMPUTE_ZONE \
--format json | jq .nodePools[].config.metadata
For each of the Node pools with the correct setting the output of the above
command returns:
"disable-legacy-endpoints"": ""true"
Remediation:
The legacy GCE metadata endpoint must be disabled upon the cluster or node-
pool creation. For GKE versions 1.12 and newer, the legacy GCE metadata
endpoint is disabled by default.
Using Google Cloud Console:
To update an existing cluster, create a new Node pool with the legacy GCE
metadata endpoint disabled:
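The command-line equivalent would be along these lines:

gcloud container node-pools create <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --metadata disable-legacy-endpoints=true

Workloads then need to be migrated to the new Node pool and the old pool deleted.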
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-
cluster-metadata#disable-legacy-apis
CIS Controls:
5.4.2 Ensure the GKE Metadata Server is Enabled (Automated)
Profile Applicability:
• Level 1
Description:
Running the GKE Metadata Server prevents workloads from accessing sensitive
instance metadata and facilitates Workload Identity.
Rationale:
Every node stores its metadata on a metadata server. Some of this metadata,
such as kubelet credentials and the VM instance identity token, is sensitive
and should not be exposed to a Kubernetes workload. Enabling the GKE Metadata
server prevents pods (that are not running on the host network) from
accessing this metadata and facilitates Workload Identity.
When unspecified, the default setting allows running pods to have full access
to the node's underlying metadata server.
Impact:
The GKE Metadata Server must be run when using Workload Identity. Because
Workload Identity replaces the need to use Metadata Concealment, the two
approaches are incompatible.
When the GKE Metadata Server and Workload Identity are enabled, unless the
Pod is running on the host network, Pods cannot use the Compute Engine
default service account.
Workloads may need modification in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-
to/workload-identity.
Audit:
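A plausible node pool check (field name assumed):

gcloud container node-pools describe <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --format json | jq '.config.workloadMetadataConfig'

The output should return: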
{
"nodeMetadata": "GKE_METADATA_SERVER"
}
Null ({ }) is returned if the GKE Metadata Server is not enabled.
Remediation:
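A sketch using the current gcloud flag for this setting (the flag and value names have changed across gcloud versions, so verify against the reference):

gcloud container node-pools update <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --workload-metadata=GKE_METADATA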
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-
cluster-metadata#concealment
2. https://cloud.google.com/kubernetes-engine/docs/how-to/workload-
identity
3. https://cloud.google.com/kubernetes-engine/docs/concepts/workload-
identity
CIS Controls:
5.5 Node Configuration and Maintenance
5.5.1 Ensure Container-Optimized OS (cos_containerd) is used for GKE node
images (Automated)
Profile Applicability:
• Level 2
Description:
Rationale:
COS is an operating system image for Compute Engine VMs optimized for running
containers. With COS, the containers can be brought up on Google Cloud
Platform quickly, efficiently, and securely.
Using COS as the node image provides the following benefits:
• Run containers out of the box: COS instances come pre-installed with
the container runtime and cloud-init. With a COS instance, the
container can be brought up at the same time as the VM is created, with
no on-host setup required.
• Smaller attack surface: COS has a smaller footprint, reducing the
instance's potential attack surface.
• Locked-down by default: COS instances include a locked-down firewall
and other security settings by default.
Impact:
Audit:
gcloud container node-pools describe <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --format json | jq '.config.imageType'
The output of the above command returns COS_CONTAINERD if Container-Optimized OS with containerd is used for Node images.
Remediation:
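One way to switch an existing Node pool's image type (command surface per the gcloud reference):

gcloud container clusters upgrade <cluster_name> --node-pool <node_pool_name> --zone <compute_zone> --image-type COS_CONTAINERD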
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/using-
containerd
2. https://cloud.google.com/kubernetes-engine/docs/concepts/node-images
CIS Controls:
5.5.2 Ensure Node Auto-Repair is enabled for GKE nodes (Automated)
Profile Applicability:
• Level 1
Description:
Nodes in a degraded state are an unknown quantity and so may pose a security
risk.
Rationale:
Kubernetes Engine's node auto-repair feature helps you keep the nodes in the
cluster in a healthy, running state. When enabled, Kubernetes Engine makes
periodic checks on the health state of each node in the cluster. If a node
fails consecutive health checks over an extended time period, Kubernetes
Engine initiates a repair process for that node.
Impact:
Audit:
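For example (the management field is assumed):

gcloud container node-pools describe <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --format json | jq '.management'

The output should include: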
{
"autoRepair": true
}
Remediation:
2. Select the Kubernetes cluster containing the node pool for which auto-
repair is disabled.
3. Select the Node pool by clicking on the name of the pool.
4. Navigate to the Node pool details pane and click EDIT.
5. Under the Management heading, check the Enable auto-repair box.
6. Click SAVE.
7. Repeat steps 2-6 for every cluster and node pool with auto-repair
disabled.
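The command-line equivalent (flag per the gcloud reference):

gcloud container node-pools update <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --enable-autorepair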
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair
CIS Controls:
5.5.3 Ensure Node Auto-Upgrade is enabled for GKE nodes (Automated)
Profile Applicability:
• Level 1
Description:
Node auto-upgrade keeps nodes at the current Kubernetes and OS security patch
level to mitigate known vulnerabilities.
Rationale:
Node auto-upgrade helps you keep the nodes in the cluster or node pool up to
date with the latest stable patch version of Kubernetes as well as the
underlying node operating system. Node auto-upgrade uses the same update
mechanism as manual node upgrades.
Node pools with node auto-upgrade enabled are automatically scheduled for
upgrades when a new stable Kubernetes version becomes available. When the
upgrade is performed, the Node pool is upgraded to match the current cluster
master version. From a security perspective, this has the benefit of applying
security updates automatically to the Kubernetes Engine when security fixes
are released.
Impact:
Enabling node auto-upgrade does not cause the nodes to upgrade immediately.
Automatic upgrades occur at regular intervals at the discretion of the
Kubernetes Engine team.
To prevent upgrades occurring during a peak period for the cluster, a
maintenance window should be defined. A maintenance window is a four-hour
timeframe that can be chosen, during which automatic upgrades should occur.
Upgrades can occur on any day of the week, and at any time within the
timeframe. To prevent upgrades from occurring during certain dates, a
maintenance exclusion should be defined. A maintenance exclusion can span
multiple days.
Audit:
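Inspecting the node pool's management block, as in the previous recommendation, should return: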
{
"autoUpgrade": true
}
If node auto-upgrade is disabled, the output of the above command will not
contain the autoUpgrade entry.
Remediation:
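A sketch of enabling it from the command line:

gcloud container node-pools update <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --enable-autoupgrade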
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/node-auto-
upgrades
2. https://cloud.google.com/kubernetes-engine/docs/how-to/maintenance-
windows-and-exclusions
Additional Information:
CIS Controls:
5.5.4 When creating New Clusters - Automate GKE version management using
Release Channels (Manual)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
Audit:
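A possible check for an existing cluster (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.releaseChannel'

A cluster enrolled in a channel returns that channel, for example {"channel": "REGULAR"}.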
Remediation:
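For new clusters, a sketch:

gcloud container clusters create <cluster_name> --zone <compute_zone> --release-channel regular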
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/release-
channels
2. https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-
upgrades
3. https://cloud.google.com/kubernetes-engine/docs/how-to/maintenance-
windows-and-exclusions
CIS Controls:
5.5.5 Ensure Shielded GKE Nodes are Enabled (Automated)
Profile Applicability:
• Level 1
Description:
Shielded GKE Nodes provides verifiable integrity via secure boot, virtual
trusted platform module (vTPM)-enabled measured boot, and integrity
monitoring.
Rationale:
Impact:
After Shielded GKE Nodes is enabled in a cluster, any nodes created in a Node
pool without Shielded GKE Nodes enabled, or created outside of any Node pool,
aren't able to join the cluster.
Shielded GKE Nodes can only be used with Container-Optimized OS (COS), COS
with containerd, and Ubuntu node images.
Audit:
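One way to check (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.shieldedNodes'

The output should return: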
{
"enabled": true
}
Remediation:
Note: From version 1.18, clusters will have Shielded GKE nodes enabled by
default.
Using Google Cloud Console:
To update an existing cluster to use Shielded GKE nodes:
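The command-line equivalent (flag per the gcloud reference):

gcloud container clusters update <cluster_name> --zone <compute_zone> --enable-shielded-nodes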
Default Value:
Clusters have Shielded GKE nodes enabled by default as of version 1.18.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/shielded-gke-
nodes
CIS Controls:
5.5.6 Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled
(Automated)
Profile Applicability:
• Level 1
Description:
Rationale:
Integrity Monitoring provides active alerting for Shielded GKE nodes which
allows administrators to respond to integrity failures and prevent
compromised nodes from being deployed into the cluster.
Impact:
None.
Audit:
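A plausible node pool check (field name assumed):

gcloud container node-pools describe <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --format json | jq '.config.shieldedInstanceConfig'

The output should include: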
{
"enableIntegrityMonitoring": true
}
Remediation:
3. Ensure that the 'Integrity monitoring' checkbox is checked under the
'Shielded options' Heading.
4. Click SAVE.
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/shielded-gke-
nodes
2. https://cloud.google.com/compute/shielded-vm/docs/integrity-monitoring
CIS Controls:
5.5.7 Ensure Secure Boot for Shielded GKE Nodes is Enabled (Automated)
Profile Applicability:
• Level 2
Description:
Enable Secure Boot for Shielded GKE Nodes to verify the digital signature of
node boot components.
Rationale:
An attacker may seek to alter boot components to persist malware or root kits
during system initialisation. Secure Boot helps ensure that the system only
runs authentic software by verifying the digital signature of all boot
components, and halting the boot process if signature verification fails.
Impact:
Secure Boot will not permit the use of third-party unsigned kernel modules.
Audit:
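The same shieldedInstanceConfig check as in the previous recommendation applies; with Secure Boot enabled its output includes: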
{
"enableSecureBoot": true
}
Remediation:
Workloads will need to be migrated from existing non-conforming Node pools to
the newly created Node pool; the non-conforming pools can then be deleted.
Using Command Line:
To create a Node pool within the cluster with Secure Boot enabled, run the
following command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name>
--zone <compute_zone> --shielded-secure-boot
Workloads will need to be migrated from existing non-conforming Node pools to
the newly created Node pool; the non-conforming pools can then be deleted.
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/shielded-gke-
nodes#secure_boot
2. https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-
cluster
CIS Controls:
5.6 Cluster Networking
5.6.1 Enable VPC Flow Logs and Intranode Visibility (Automated)
Profile Applicability:
• Level 2
Description:
Enable VPC Flow Logs and Intranode Visibility to see pod-level traffic, even
for traffic within a worker node.
Rationale:
Impact:
Enabling intranode visibility on an existing cluster causes the cluster master
and the cluster nodes to restart, which might cause disruption.
Audit:
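A possible check (field name assumed from the clusters API):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.networkConfig.enableIntraNodeVisibility'

The command returns true when Intranode Visibility is enabled.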
Remediation:
gcloud container clusters update <cluster_name> --enable-intra-node-visibility
Enable VPC Flow Logs:
Using Google Cloud Console:
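The command-line equivalent for a cluster subnet (flags per the gcloud compute reference):

gcloud compute networks subnets update <subnet_name> --region <region> --enable-flow-logs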
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/intranode-
visibility
2. https://cloud.google.com/vpc/docs/using-flow-logs
CIS Controls:
5.6.2 Ensure use of VPC-native clusters (Automated)
Profile Applicability:
• Level 1
Description:
Create Alias IPs for the node network CIDR range in order to subsequently
configure IP-based policies and firewalling for pods. A cluster that uses
Alias IPs is called a VPC-native cluster.
Rationale:
• Pod IPs are reserved within the network ahead of time, which prevents
conflict with other compute resources.
• The networking layer can perform anti-spoofing checks to ensure that
egress traffic is not sent with arbitrary source IPs.
• Firewall controls for Pods can be applied separately from their nodes.
• Alias IPs allow Pods to directly access hosted services without using a
NAT gateway.
Impact:
You cannot currently migrate an existing cluster that uses routes for Pod
routing to a cluster that uses Alias IPs.
Cluster IPs for internal services remain only available from within the
cluster. If you want to access a Kubernetes Service from within the VPC, but
from outside of the cluster, use an internal load balancer.
Audit:
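One plausible check (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.ipAllocationPolicy.useIpAliases'

The command returns true for a VPC-native cluster.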
Remediation:
Default Value:
By default, VPC-native (using alias IP) is enabled when you create a new
cluster in the Google Cloud Console, however this is disabled when creating a
new cluster using the gcloud CLI, unless the --enable-ip-alias argument is
specified.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips
2. https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips
CIS Controls:
5.6.3 Ensure Control Plane Authorized Networks is Enabled (Automated)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
Audit:
Using Command Line:
To check Master Authorized Networks status for an existing cluster, run the
following command:
gcloud container clusters describe <cluster_name> --zone <compute_zone> --
format json | jq '.masterAuthorizedNetworksConfig'
The output should return
{
"enabled": true
}
if Control Plane Authorized Networks is enabled. If Master Authorized
Networks is disabled, the above command will return null ({ }).
Remediation:
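A sketch of enabling it on an existing cluster:

gcloud container clusters update <cluster_name> --zone <compute_zone> --enable-master-authorized-networks --master-authorized-networks <cidr_1>,<cidr_2>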
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-
networks
CIS Controls:
5.6.4 Ensure clusters are created with Private Endpoint Enabled and Public
Access Disabled (Automated)
Profile Applicability:
• Level 2
Description:
Disable access to the Kubernetes API from outside the node network if it is
not required.
Rationale:
In a private cluster, the master node has two endpoints, a private and public
endpoint. The private endpoint is the internal IP address of the master,
behind an internal load balancer in the master's VPC network. Nodes
communicate with the master using the private endpoint. The public endpoint
enables the Kubernetes API to be accessed from outside the master's VPC
network.
Although the Kubernetes API requires an authorized token to perform sensitive
actions, a vulnerability could potentially expose the Kubernetes API publicly
with unrestricted access. Additionally, an attacker may be able to identify
the current cluster and Kubernetes API version and determine whether it is
vulnerable to an attack. Unless required, disabling public endpoint will help
prevent such threats, and require the attacker to be on the master's VPC
network to perform any attack on the Kubernetes API.
Impact:
Audit:
gcloud container clusters describe <cluster_name> --format json | jq '.endpoint'
The output of the above command returns a private IP address if Private
Endpoint is enabled with Public Access disabled.
Remediation:
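Such a cluster must be created with these properties; a sketch (flags per the gcloud reference):

gcloud container clusters create <cluster_name> --zone <compute_zone> --enable-ip-alias --enable-private-nodes --enable-private-endpoint --master-ipv4-cidr <master_cidr_range>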
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
CIS Controls:
v7: 12 Boundary Defense
5.6.5 Ensure clusters are created with Private Nodes (Automated)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
To enable Private Nodes, the cluster has to also be configured with a private
master IP range and IP Aliasing enabled.
Private Nodes do not have outbound access to the public internet. If you want
to provide outbound Internet access for your private nodes, you can use Cloud
NAT or you can manage your own NAT gateway.
To access Google Cloud APIs and services from private nodes, Private Google
Access needs to be set on Kubernetes Engine Cluster Subnets.
Audit:
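One plausible check (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.privateClusterConfig.enablePrivateNodes'

The command returns true when Private Nodes are enabled.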
Remediation:
Using Command Line:
To create a cluster with Private Nodes enabled, include the --enable-private-nodes flag within the cluster create command:
gcloud container clusters create <cluster_name> --enable-private-nodes
Setting this flag also requires the setting of --enable-ip-alias and --master-ipv4-cidr=<master_cidr_range>.
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
CIS Controls:
v7: 12 Boundary Defense
5.6.6 Consider firewalling GKE worker nodes (Manual)
Profile Applicability:
• Level 2
Description:
Reduce the network attack surface of GKE nodes by using Firewalls to restrict
ingress and egress traffic.
Rationale:
Utilizing stringent ingress and egress firewall rules minimizes the ports and
services exposed to a network-based attacker, whilst also restricting egress
routes within or out of the cluster in the event that a compromised component
attempts to form an outbound connection.
Impact:
Audit:
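One way to gather the relevant attributes for a node instance (the jq expression is illustrative):

gcloud compute instances describe <instance_name> --zone <compute_zone> --format json | jq '{tags: .tags.items, serviceaccount: .serviceAccounts[].email, network: .networkInterfaces[].network}'

This returns output along the lines of: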
{
"tags": "<tag>",
"serviceaccount": "<service_account>",
"network": "https://www.googleapis.com/compute/v1/projects/<project_id>/global/networks/<network>"
}
Then, observe the firewall rules applied to the instance by using the
following command, replacing <tag> and <service_account> as appropriate:
gcloud compute firewall-rules list \
--format="table(
name,
network,
direction,
priority,
sourceRanges.list():label=SRC_RANGES,
destinationRanges.list():label=DEST_RANGES,
allowed[].map().firewall_rule().list():label=ALLOW,
denied[].map().firewall_rule().list():label=DENY,
sourceTags.list():label=SRC_TAGS,
sourceServiceAccounts.list():label=SRC_SVC_ACCT,
targetTags.list():label=TARGET_TAGS,
targetServiceAccounts.list():label=TARGET_SVC_ACCT,
disabled
)" \
--filter="targetTags.list():<tag> OR
targetServiceAccounts.list():<service_account>"
Firewall rules may also be applied to a network without specifically
targeting Tags or Service Accounts. These can be observed using the
following, replacing <network> as appropriate:
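For example, a minimal form:

gcloud compute firewall-rules list --filter="network:<network>"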
Remediation:
Using Command Line:
Use the following command to generate firewall rules, setting the variables
as appropriate:
gcloud compute firewall-rules create <firewall_rule_name> --network <network> --priority <priority> --direction <direction> --action <action> --target-tags <tag> --target-service-accounts <service_account> --source-ranges <source_cidr_range> --source-tags <source_tags> --source-service-accounts <source_service_account> --destination-ranges <destination_cidr_range> --rules <rules>
Default Value:
Every VPC network has two implied firewall rules. These rules exist, but are
not shown in the Cloud Console:
• The implied allow egress rule: An egress rule whose action is allow,
destination is 0.0.0.0/0, and priority is the lowest possible (65535)
lets any instance send traffic to any destination, except for traffic
blocked by GCP. Outbound access may be restricted by a higher priority
firewall rule. Internet access is allowed if no other firewall rules
deny outbound traffic and if the instance has an external IP address or
uses a NAT instance.
• The implied deny ingress rule: An ingress rule whose action is deny,
source is 0.0.0.0/0, and priority is the lowest possible (65535)
protects all instances by blocking incoming traffic to them. Incoming
access may be allowed by a higher priority rule. Note that the default
network includes some additional rules that override this one, allowing
certain types of incoming traffic.
The implied rules cannot be removed, but they have the lowest possible
priorities.
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-
architecture
2. https://cloud.google.com/vpc/docs/using-firewalls
CIS Controls:
5.6.7 Ensure Network Policy is Enabled and set as appropriate (Automated)
Profile Applicability:
• Level 1
Description:
Use Network Policy to restrict pod to pod traffic within a cluster and
segregate workloads.
Rationale:
Impact:
Network Policy requires the Network Policy add-on. This add-on is included
automatically when a cluster with Network Policy is created, but for an
existing cluster, needs to be added prior to enabling Network Policy.
Enabling/Disabling Network Policy causes a rolling update of all cluster
nodes, similar to performing a cluster upgrade. This operation is long-
running and will block other operations on the cluster (including delete)
until it has run to completion.
If Network Policy is used, a cluster must have at least 2 nodes of type n1-
standard-1 or higher. The recommended minimum size cluster to run Network
Policy enforcement is 3 n1-standard-1 instances.
Enabling Network Policy enforcement consumes additional resources in nodes.
Specifically, it increases the memory footprint of the kube-system process by
approximately 128MB, and requires approximately 300 millicores of CPU.
Audit:
gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.networkPolicy'
The output of the above command should be:
{
"enabled": true
}
if Network Policy is enabled. If Network policy is disabled, the above
command output will return null ({ }).
Remediation:
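A sketch of enabling it on an existing cluster, in the two steps the gcloud reference describes (add-on first, then enforcement):

gcloud container clusters update <cluster_name> --zone <compute_zone> --update-addons NetworkPolicy=ENABLED
gcloud container clusters update <cluster_name> --zone <compute_zone> --enable-network-policy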
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy
CIS Controls:
5.6.8 Ensure use of Google-managed SSL Certificates (Manual)
Profile Applicability:
• Level 2
Description:
Rationale:
Impact:
Google-managed SSL Certificates are less flexible than certificates that are
self-obtained and self-managed. Managed certificates support a single,
non-wildcard domain. Self-managed certificates can support wildcards and
multiple subject alternative names (SANs).
Audit:
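For instance, inspecting the Ingress annotations (the command is illustrative):

kubectl get ingress <ingress_name> -o json | jq '.metadata.annotations'

The output should include the managed certificate reference: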
"annotations": {
...
"networking.gke.io/managed-certificates": "<example_certificate>"
},
For completeness, run the following command to ensure that the managed
certificate resource exists:
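Presumably via the ManagedCertificate resource type:

kubectl get managedcertificates <example_certificate>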
Remediation:
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
2. https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
CIS Controls:
5.7 Logging
5.7.1 Ensure Logging and Cloud Monitoring is Enabled (Automated)
Profile Applicability:
• Level 1
Description:
Send logs and metrics to a remote aggregator to mitigate the risk of local
tampering in the event of a breach.
Rationale:
Audit:
gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.monitoringService'
The output should return monitoring.googleapis.com if Legacy Stackdriver
Monitoring is Enabled.
Remediation:
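Given the --logging and --monitoring update flags cited in the references, a sketch:

gcloud container clusters update <cluster_name> --zone <compute_zone> --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM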
Default Value:
References:
1. https://cloud.google.com/stackdriver/docs/solutions/gke/observing
2. https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs
3. https://cloud.google.com/stackdriver/docs/solutions/gke/installing
4. https://cloud.google.com/sdk/gcloud/reference/container/clusters/update
#--logging
5. https://cloud.google.com/sdk/gcloud/reference/container/clusters/update
#--monitoring
CIS Controls:
5.7.2 Enable Linux auditd logging (Manual)
Profile Applicability:
• Level 2
Description:
Run the auditd logging daemon to obtain verbose operating system logs from
GKE nodes running Container-Optimized OS (COS).
Rationale:
Auditd logs provide valuable information about the state of the cluster and
workloads, such as error messages, login attempts, and binary executions.
This information can be used to debug issues or to investigate security
incidents.
Impact:
Audit:
kubectl get daemonsets -A -o json | jq '.items[] | select (.spec.template.spec.containers[].image | contains ("gcr.io/stackdriver-agents/stackdriver-logging-agent"))' | jq '{name: .metadata.name, annotations: .metadata.annotations."kubernetes.io/description", namespace: .metadata.namespace, status: .status}'
The above command returns the name, namespace and status of the daemonsets
that use the Stackdriver logging agent. The example auditd logging daemonset
has a description within the annotation as output by the command above:
{
"name": "cos-auditd-logging",
"annotations": "DaemonSet that enables Linux auditd logging on COS nodes.",
"namespace": "cos-auditd",
"status": {...
}
}
Ensure that the status fields return that the daemonset is running as
expected.
Remediation:
curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml > cos-auditd-logging.yaml
Edit the example manifests if needed. Then, deploy them:
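Deployment would be via kubectl, for example:

kubectl apply -f cos-auditd-logging.yaml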
Default Value:
By default, the auditd logging daemonset is not launched when a GKE cluster
is created.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/linux-auditd-
logging
2. https://cloud.google.com/container-optimized-os/docs
CIS Controls:
5.8 Authentication and Authorization
5.8.1 Ensure authentication using Client Certificates is Disabled (Automated)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
Audit:
gcloud container clusters describe $CLUSTER_NAME \
--zone $COMPUTE_ZONE \
--format json | jq '.masterAuth.clientKey'
The output of the above command returns null ({ }) if the client certificate
has not been issued for the cluster (Client Certificate authentication is
disabled).
Note: Deprecated as of v1.19. For Basic Authentication, Legacy authorization
can be edited for standard clusters but cannot be edited in Autopilot
clusters.
Remediation:
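For new clusters, a sketch (flag per the gcloud reference):

gcloud container clusters create <cluster_name> --zone <compute_zone> --no-issue-client-certificate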
Default Value:
Clusters created from GKE version 1.12 have Basic Authentication and Client
Certificate issuance disabled by default.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-
cluster#restrict_authn_methods
CIS Controls:
5.8.2 Manage Kubernetes RBAC users with Google Groups for GKE (Manual)
Profile Applicability:
• Level 2
Description:
Cluster Administrators should leverage G Suite Groups and Cloud IAM to assign
Kubernetes user roles to a collection of users, instead of to individual
emails using only Cloud IAM.
Rationale:
On- and off-boarding users is often difficult to automate and prone to error.
Using a single source of truth for user permissions via G Suite Groups
reduces the number of locations that an individual must be off-boarded from,
and prevents users gaining unique permissions sets that increase the cost of
audit.
Impact:
Audit:
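One plausible check (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.authenticatorGroupsConfig'

A configured cluster returns the security group for the domain; null ({ }) indicates the feature is not enabled.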
Remediation:
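A sketch of enabling it (the group must be named gke-security-groups in your domain, per the referenced documentation):

gcloud container clusters update <cluster_name> --zone <compute_zone> --security-group="gke-security-groups@<your_domain>"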
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/google-groups-
rbac
2. https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-
access-control
CIS Controls:
5.8.3 Ensure Legacy Authorization (ABAC) is Disabled (Automated)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
Once the cluster has the legacy authorizer disabled, the user must be granted
the ability to create authorization roles using RBAC to ensure that the role-
based access control permissions take effect.
Audit:
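One plausible check (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.legacyAbac'

An empty result ({ }) indicates that Legacy Authorization is disabled.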
Remediation:
Using Command Line:
To disable Legacy Authorization for an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <compute_zone> --no-enable-legacy-authorization
Default Value:
Kubernetes Engine clusters running GKE version 1.8 and later disable the
legacy authorization system by default, and thus role-based access control
permissions take effect with no special action required.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-
access-control
2. https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-
cluster#leave_abac_disabled_default_for_110
Additional Information:
On clusters running GKE 1.6 or 1.7, Kubernetes Service accounts have full permissions on the Kubernetes API by default. To ensure that the role-based access control permissions take effect for a Kubernetes service account, the cluster must be created or updated with the option --no-enable-legacy-authorization. This requirement is removed for clusters running GKE version 1.8 or higher.
CIS Controls:
5.9 Storage
5.9.1 Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks
(PD) (Manual)
Profile Applicability:
• Level 1
Description:
Rationale:
Impact:
Audit:
kubectl get pv -o json | jq '.items[].metadata.name'
For each volume used, check that it is encrypted using a customer managed key
by running the following command:
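The underlying Compute Engine disk can then be inspected, for example (field name assumed):

gcloud compute disks describe <disk_name> --zone <compute_zone> --format json | jq '.diskEncryptionKey'

A customer-managed key appears as a kmsKeyName reference; an empty result indicates a Google-managed key.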
Remediation:
This cannot be remediated by updating an existing cluster. The node pool must
either be recreated or a new cluster created.
Using Google Cloud Console:
FOR NODE BOOT DISKS:
To create a new node pool:
gcloud container clusters create <cluster_name> --disk-type <disk_type> --boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
FOR ATTACHED DISKS:
Follow the instructions detailed at: https://cloud.google.com/kubernetes-
engine/docs/how-to/using-cmek.
Default Value:
Persistent disks are encrypted at rest by default, but are not encrypted
using Customer-Managed Encryption Keys by default. By default, the Compute
Engine Persistent Disk CSI Driver is not provisioned within the cluster.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek
2. https://cloud.google.com/compute/docs/disks/customer-managed-encryption
3. https://cloud.google.com/security/encryption-at-rest/default-
encryption/
4. https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-
volumes
5. https://cloud.google.com/sdk/gcloud/reference/container/node-
pools/create
CIS Controls:
5.10 Other Cluster Configurations
5.10.1 Ensure Kubernetes Web UI is Disabled (Automated)
Profile Applicability:
• Level 1
Description:
Note: The Kubernetes web UI (Dashboard) does not have admin access by default
in GKE 1.7 and higher. The Kubernetes web UI is disabled by default in GKE
1.10 and higher. In GKE 1.15 and higher, the Kubernetes web UI add-on
KubernetesDashboard is no longer supported as a managed add-on.
The Kubernetes Web UI (Dashboard) has been a historical source of
vulnerability and should only be deployed when necessary.
Rationale:
Impact:
Users will be required to manage cluster resources using the Google Cloud
Console or the command line. These require appropriate permissions. Using the
command line requires the command-line client, kubectl, to be installed on the
user's device (it is already included in Cloud Shell), along with knowledge of
command-line operations.
Audit:
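One plausible check (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.addonsConfig.kubernetesDashboard'

The output should return: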
{
"disabled": true
}
Remediation:
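A sketch of disabling the add-on where it is still available:

gcloud container clusters update <cluster_name> --zone <compute_zone> --update-addons=KubernetesDashboard=DISABLED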
Default Value:
The Kubernetes web UI (Dashboard) does not have admin access by default in
GKE 1.7 and higher. The Kubernetes web UI is disabled by default in GKE 1.10
and higher. In GKE 1.15 and higher, the Kubernetes web UI add-on
KubernetesDashboard is no longer supported as a managed add-on.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-
cluster#disable_kubernetes_dashboard
CIS Controls:
5.10.2 Ensure that Alpha clusters are not used for production workloads
(Automated)
Profile Applicability:
• Level 1
Description:
Alpha clusters are not covered by an SLA and are not production-ready.
Rationale:
Alpha clusters are designed for early adopters to experiment with workloads
that take advantage of new features before those features are production-
ready. They have all Kubernetes API features enabled, but are not covered by
the GKE SLA, do not receive security updates, have node auto-upgrade and node
auto-repair disabled, and cannot be upgraded. They are also automatically
deleted after 30 days.
Impact:
Users and workloads will not be able to take advantage of features included
within Alpha clusters.
Audit:
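One plausible check (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.enableKubernetesAlpha'

A value of true indicates an Alpha cluster.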
Remediation:
3. Note: Within Features in the CLUSTER section, under the Other heading,
Enable Kubernetes alpha features in this cluster will not be available by
default. It will only be available if the cluster is created with a Static
version for the Control plane version, along with both Automatically upgrade
nodes to the next available version and Enable auto-repair being unchecked
under the Node pool details for each node.
4. Configure the other settings as required and click CREATE.
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters
CIS Controls:
5.10.3 Consider GKE Sandbox for running untrusted workloads (Manual)
Profile Applicability:
• Level 2
Description:
Rationale:
Impact:
Audit:
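For example, per node pool (field name assumed):

gcloud container node-pools describe <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --format json | jq '.config.sandboxConfig'

For a sandboxed node pool the output is: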
{
"sandboxType":"gvisor"
}
If there is no sandbox, the above command output will be null ({ }).
The default node pool cannot use GKE Sandbox.
Remediation:
Once a node pool is created, GKE Sandbox cannot be enabled on it; a new node
pool must be created instead. The default node pool (the first node pool in
your cluster, created when the cluster is created) cannot use GKE Sandbox.
Using Google Cloud Console:
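The command-line form (the --sandbox flag per the gcloud reference):

gcloud container node-pools create <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --sandbox type=gvisor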
Default Value:
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/sandbox-pods
2. https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools
3. https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods
Additional Information:
The default node pool (the first node pool in your cluster, created when the
cluster is created) cannot use GKE Sandbox.
When using GKE Sandbox, your cluster must have at least two node pools. You
must always have at least one node pool where GKE Sandbox is disabled. This
node pool must contain at least one node, even if all your workloads are
sandboxed.
It is optional but recommended that you enable Stackdriver Logging and
Stackdriver Monitoring, by adding the flag --enable-stackdriver-kubernetes.
gVisor messages are logged.
CIS Controls:
5.10.4 Ensure use of Binary Authorization (Automated)
Profile Applicability:
• Level 2
Description:
Rationale:
Impact:
Audit:
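As in recommendation 5.1.4, the cluster's Binary Authorization state can be checked (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.binaryAuthorization'

The output should return: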
{
"enabled": true
}
Then, assess the contents of the policy:
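Export it first, as in recommendation 5.1.4:

gcloud container binauthz policy export > current-policy.yaml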
cat current-policy.yaml
...
defaultAdmissionRule:
  evaluationMode: ALWAYS_ALLOW
Remediation:
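Given the --binauthz-evaluation-mode flag cited in the references, a sketch for an existing cluster:

gcloud container clusters update <cluster_name> --zone <compute_zone> --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE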
Default Value:
References:
1. https://cloud.google.com/binary-authorization/docs/setting-up
2. https://cloud.google.com/sdk/gcloud/reference/container/clusters/update
#--binauthz-evaluation-mode
CIS Controls:
5.10.5 Enable Cloud Security Command Center (Cloud SCC) (Manual)
Profile Applicability:
• Level 1
Description:
Rationale:
Cloud Security Command Center (Cloud SCC) is the canonical security and data
risk database for GCP. Cloud SCC enables you to understand your security and
data attack surface by providing asset inventory, discovery, search, and
management.
Impact:
None.
Audit:
Remediation:
Note: The Security Command Center Asset APIs have been deprecated, pending
removal on or after 26th June 2024. Cloud Asset Inventory should be used
instead.
Follow the instructions at: https://cloud.google.com/security-command-
center/docs/quickstart-scc-setup.
Default Value:
References:
1. https://cloud.google.com/security-command-center/
2. https://cloud.google.com/security-command-center/docs/quickstart-scc-
setup
Additional Information:
Cloud SCC is only available at the organization level. Your GCP projects must
belong to a GCP organization. It should also be noted that it is now
deprecated.
CIS Controls:
5.10.6 Enable Security Posture (Manual)
Profile Applicability:
• Level 1
Description:
Rationale:
The security posture dashboard provides insights about your workload security
posture at the runtime phase of the software delivery life-cycle.
Impact:
Audit:
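One plausible check (field name assumed):

gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.securityPostureConfig'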
Remediation:
Default Value:
GKE security posture has multiple features. Not all are on by default.
Configuration auditing is enabled by default for new standard and autopilot
clusters.
securityPostureConfig:
  mode: BASIC
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/about-
security-posture-dashboard
CIS Controls:
Appendix: Summary Table
CIS Benchmark Recommendation | Set Correctly (Yes/No)
3 Worker Nodes
3.1.1 Ensure that the proxy kubeconfig file permissions are set
to 644 or more restrictive (Manual)
4 Policies
5 Managed services
5.2.1 Ensure GKE clusters are not running using the Compute
Engine default service account (Automated)
5.7 Logging
5.9 Storage
5.10.2 Ensure that Alpha clusters are not used for production
workloads (Automated)
Appendix: CIS Controls v7 IG 1 Mapped
Recommendations
Recommendation | Set Correctly (Yes/No)
2.1.1 Client certificate authentication should not be used for
users
4.1.1 Ensure that the cluster-admin role is only used where
required
4.1.5 Ensure that default service accounts are not actively
used
4.2.1 Ensure that the cluster enforces Pod Security Standard
Baseline profile or stricter for all namespaces.
4.6.3 Apply Security Context to Pods and Containers
5.1.2 Minimize user access to Container Image repositories
5.2.1 Ensure GKE clusters are not running using the Compute
Engine default service account
5.2.2 Prefer using dedicated GCP Service Accounts and
Workload Identity
5.5.3 Ensure Node Auto-Upgrade is enabled for GKE nodes
5.5.4 When creating New Clusters - Automate GKE version
management using Release Channels
5.6.3 Ensure Control Plane Authorized Networks is Enabled
5.6.7 Ensure Network Policy is Enabled and set as appropriate
5.7.1 Ensure Logging and Cloud Monitoring is Enabled
5.10.1 Ensure Kubernetes Web UI is Disabled
Appendix: CIS Controls v7 IG 2 Mapped
Recommendations
Recommendation | Set Correctly (Yes/No)
2.1.1 Client certificate authentication should not be used for
users
3.1.1 Ensure that the proxy kubeconfig file permissions are set
to 644 or more restrictive
3.1.2 Ensure that the proxy kubeconfig file ownership is set to
root:root
3.1.3 Ensure that the kubelet configuration file has permissions
set to 600
3.1.4 Ensure that the kubelet configuration file ownership is set
to root:root
4.1.1 Ensure that the cluster-admin role is only used where
required
4.1.2 Minimize access to secrets
4.1.3 Minimize wildcard use in Roles and ClusterRoles
4.1.4 Minimize access to create pods
4.1.5 Ensure that default service accounts are not actively
used
4.2.1 Ensure that the cluster enforces Pod Security Standard
Baseline profile or stricter for all namespaces.
4.3.1 Ensure that the CNI in use supports Network Policies
4.3.2 Ensure that all Namespaces have Network Policies
defined
4.6.2 Ensure that the seccomp profile is set to RuntimeDefault
in the pod definitions
4.6.3 Apply Security Context to Pods and Containers
5.1.1 Ensure Image Vulnerability Scanning is enabled
5.1.2 Minimize user access to Container Image repositories
5.1.3 Minimize cluster access to read-only for Container Image
repositories
5.1.4 Minimize Container Registries to only those approved
5.2.1 Ensure GKE clusters are not running using the Compute
Engine default service account
5.2.2 Prefer using dedicated GCP Service Accounts and
Workload Identity
5.4.1 Ensure legacy Compute Engine instance metadata APIs
are Disabled
5.4.2 Ensure the GKE Metadata Server is Enabled
5.5.1 Ensure Container-Optimized OS (cos_containerd) is
used for GKE node images
5.5.2 Ensure Node Auto-Repair is enabled for GKE nodes
5.5.3 Ensure Node Auto-Upgrade is enabled for GKE nodes
5.5.4 When creating New Clusters - Automate GKE version
management using Release Channels
5.5.5 Ensure Shielded GKE Nodes are Enabled
5.5.6 Ensure Integrity Monitoring for Shielded GKE Nodes is
Enabled
5.5.7 Ensure Secure Boot for Shielded GKE Nodes is Enabled
5.6.1 Enable VPC Flow Logs and Intranode Visibility
5.6.2 Ensure use of VPC-native clusters
5.6.3 Ensure Control Plane Authorized Networks is Enabled
5.6.7 Ensure Network Policy is Enabled and set as appropriate
5.6.8 Ensure use of Google-managed SSL Certificates
5.7.1 Ensure Logging and Cloud Monitoring is Enabled
5.7.2 Enable Linux auditd logging
5.8.2 Manage Kubernetes RBAC users with Google Groups for
GKE
5.10.1 Ensure Kubernetes Web UI is Disabled
5.10.2 Ensure that Alpha clusters are not used for production
workloads
5.10.3 Consider GKE Sandbox for running untrusted workloads
5.10.4 Ensure use of Binary Authorization
5.10.5 Enable Cloud Security Command Center (Cloud SCC)
5.10.6 Enable Security Posture
Appendix: CIS Controls v7 IG 3 Mapped
Recommendations
Recommendation | Set Correctly (Yes/No)
2.1.1 Client certificate authentication should not be used for
users
3.1.1 Ensure that the proxy kubeconfig file permissions are set
to 644 or more restrictive
3.1.2 Ensure that the proxy kubeconfig file ownership is set to
root:root
3.1.3 Ensure that the kubelet configuration file has permissions
set to 600
3.1.4 Ensure that the kubelet configuration file ownership is set
to root:root
4.1.1 Ensure that the cluster-admin role is only used where
required
4.1.2 Minimize access to secrets
4.1.3 Minimize wildcard use in Roles and ClusterRoles
4.1.4 Minimize access to create pods
4.1.5 Ensure that default service accounts are not actively
used
4.1.6 Ensure that Service Account Tokens are only mounted
where necessary
4.2.1 Ensure that the cluster enforces Pod Security Standard
Baseline profile or stricter for all namespaces.
4.3.1 Ensure that the CNI in use supports Network Policies
4.3.2 Ensure that all Namespaces have Network Policies
defined
4.6.2 Ensure that the seccomp profile is set to RuntimeDefault
in the pod definitions
4.6.3 Apply Security Context to Pods and Containers
4.6.4 The default namespace should not be used
5.1.1 Ensure Image Vulnerability Scanning is enabled
5.1.2 Minimize user access to Container Image repositories
5.1.3 Minimize cluster access to read-only for Container Image repositories
5.1.4 Minimize Container Registries to only those approved
5.2.1 Ensure GKE clusters are not running using the Compute Engine default service account
5.2.2 Prefer using dedicated GCP Service Accounts and Workload Identity
5.3.1 Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS
5.4.1 Ensure legacy Compute Engine instance metadata APIs are Disabled
5.4.2 Ensure the GKE Metadata Server is Enabled
5.5.1 Ensure Container-Optimized OS (cos_containerd) is used for GKE node images
5.5.2 Ensure Node Auto-Repair is enabled for GKE nodes
5.5.3 Ensure Node Auto-Upgrade is enabled for GKE nodes
5.5.4 When creating New Clusters - Automate GKE version management using Release Channels
5.5.5 Ensure Shielded GKE Nodes are Enabled
5.5.6 Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled
5.5.7 Ensure Secure Boot for Shielded GKE Nodes is Enabled
5.6.1 Enable VPC Flow Logs and Intranode Visibility
5.6.2 Ensure use of VPC-native clusters
5.6.3 Ensure Control Plane Authorized Networks is Enabled
5.6.6 Consider firewalling GKE worker nodes
5.6.7 Ensure Network Policy is Enabled and set as appropriate
5.6.8 Ensure use of Google-managed SSL Certificates
5.7.1 Ensure Logging and Cloud Monitoring is Enabled
5.7.2 Enable Linux auditd logging
5.8.2 Manage Kubernetes RBAC users with Google Groups for GKE
5.9.1 Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD)
5.10.1 Ensure Kubernetes Web UI is Disabled
5.10.2 Ensure that Alpha clusters are not used for production workloads
5.10.3 Consider GKE Sandbox for running untrusted workloads
5.10.4 Ensure use of Binary Authorization
5.10.5 Enable Cloud Security Command Center (Cloud SCC)
5.10.6 Enable Security Posture
Appendix: CIS Controls v7 Unmapped Recommendations

Recommendation | Set Correctly (Yes/No)
No unmapped recommendations to CIS Controls v7.0
Appendix: CIS Controls v8 IG 1 Mapped Recommendations

Recommendation | Set Correctly (Yes/No)
2.1.1 Client certificate authentication should not be used for users
3.1.1 Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive
3.1.2 Ensure that the proxy kubeconfig file ownership is set to root:root
3.1.3 Ensure that the kubelet configuration file has permissions set to 600
3.1.4 Ensure that the kubelet configuration file ownership is set to root:root
4.1.1 Ensure that the cluster-admin role is only used where required
4.1.2 Minimize access to secrets
4.1.3 Minimize wildcard use in Roles and ClusterRoles
4.1.5 Ensure that default service accounts are not actively used
4.1.7 Avoid use of system:masters group
4.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster
4.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller
5.1.2 Minimize user access to Container Image repositories
5.1.3 Minimize cluster access to read-only for Container Image repositories
5.2.1 Ensure GKE clusters are not running using the Compute Engine default service account
5.2.2 Prefer using dedicated GCP Service Accounts and Workload Identity
5.5.3 Ensure Node Auto-Upgrade is enabled for GKE nodes
5.5.4 When creating New Clusters - Automate GKE version management using Release Channels
5.6.3 Ensure Control Plane Authorized Networks is Enabled
5.6.4 Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled
5.6.5 Ensure clusters are created with Private Nodes
5.6.6 Consider firewalling GKE worker nodes
5.7.1 Ensure Logging and Cloud Monitoring is Enabled
5.7.2 Enable Linux auditd logging
5.10.4 Ensure use of Binary Authorization
Appendix: CIS Controls v8 IG 2 Mapped Recommendations

Recommendation | Set Correctly (Yes/No)
2.1.1 Client certificate authentication should not be used for users
3.1.1 Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive
3.1.2 Ensure that the proxy kubeconfig file ownership is set to root:root
3.1.3 Ensure that the kubelet configuration file has permissions set to 600
3.1.4 Ensure that the kubelet configuration file ownership is set to root:root
4.1.1 Ensure that the cluster-admin role is only used where required
4.1.2 Minimize access to secrets
4.1.3 Minimize wildcard use in Roles and ClusterRoles
4.1.5 Ensure that default service accounts are not actively used
4.1.6 Ensure that Service Account Tokens are only mounted where necessary
4.1.7 Avoid use of system:masters group
4.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster
4.2.1 Ensure that the cluster enforces Pod Security Standard Baseline profile or stricter for all namespaces.
4.3.1 Ensure that the CNI in use supports Network Policies
4.3.2 Ensure that all Namespaces have Network Policies defined
4.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller
4.6.2 Ensure that the seccomp profile is set to RuntimeDefault in the pod definitions
4.6.4 The default namespace should not be used
5.1.1 Ensure Image Vulnerability Scanning is enabled
5.1.2 Minimize user access to Container Image repositories
5.1.3 Minimize cluster access to read-only for Container Image repositories
5.1.4 Minimize Container Registries to only those approved
5.2.1 Ensure GKE clusters are not running using the Compute Engine default service account
5.2.2 Prefer using dedicated GCP Service Accounts and Workload Identity
5.3.1 Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS
5.4.1 Ensure legacy Compute Engine instance metadata APIs are Disabled
5.4.2 Ensure the GKE Metadata Server is Enabled
5.5.1 Ensure Container-Optimized OS (cos_containerd) is used for GKE node images
5.5.2 Ensure Node Auto-Repair is enabled for GKE nodes
5.5.3 Ensure Node Auto-Upgrade is enabled for GKE nodes
5.5.4 When creating New Clusters - Automate GKE version management using Release Channels
5.5.5 Ensure Shielded GKE Nodes are Enabled
5.5.6 Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled
5.5.7 Ensure Secure Boot for Shielded GKE Nodes is Enabled
5.6.1 Enable VPC Flow Logs and Intranode Visibility
5.6.2 Ensure use of VPC-native clusters
5.6.3 Ensure Control Plane Authorized Networks is Enabled
5.6.4 Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled
5.6.5 Ensure clusters are created with Private Nodes
5.6.6 Consider firewalling GKE worker nodes
5.6.7 Ensure Network Policy is Enabled and set as appropriate
5.6.8 Ensure use of Google-managed SSL Certificates
5.7.1 Ensure Logging and Cloud Monitoring is Enabled
5.7.2 Enable Linux auditd logging
5.9.1 Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD)
5.10.1 Ensure Kubernetes Web UI is Disabled
5.10.2 Ensure that Alpha clusters are not used for production workloads
5.10.3 Consider GKE Sandbox for running untrusted workloads
5.10.4 Ensure use of Binary Authorization
5.10.6 Enable Security Posture
Appendix: CIS Controls v8 IG 3 Mapped Recommendations

Recommendation | Set Correctly (Yes/No)
2.1.1 Client certificate authentication should not be used for users
3.1.1 Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive
3.1.2 Ensure that the proxy kubeconfig file ownership is set to root:root
3.1.3 Ensure that the kubelet configuration file has permissions set to 600
3.1.4 Ensure that the kubelet configuration file ownership is set to root:root
4.1.1 Ensure that the cluster-admin role is only used where required
4.1.2 Minimize access to secrets
4.1.3 Minimize wildcard use in Roles and ClusterRoles
4.1.4 Minimize access to create pods
4.1.5 Ensure that default service accounts are not actively used
4.1.6 Ensure that Service Account Tokens are only mounted where necessary
4.1.7 Avoid use of system:masters group
4.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster
4.1.9 Minimize access to create persistent volumes
4.1.10 Minimize access to the proxy sub-resource of nodes
4.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects
4.1.12 Minimize access to webhook configuration objects
4.1.13 Minimize access to the service account token creation
4.2.1 Ensure that the cluster enforces Pod Security Standard Baseline profile or stricter for all namespaces.
4.3.1 Ensure that the CNI in use supports Network Policies
4.3.2 Ensure that all Namespaces have Network Policies defined
4.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller
4.6.2 Ensure that the seccomp profile is set to RuntimeDefault in the pod definitions
4.6.4 The default namespace should not be used
5.1.1 Ensure Image Vulnerability Scanning is enabled
5.1.2 Minimize user access to Container Image repositories
5.1.3 Minimize cluster access to read-only for Container Image repositories
5.1.4 Minimize Container Registries to only those approved
5.2.1 Ensure GKE clusters are not running using the Compute Engine default service account
5.2.2 Prefer using dedicated GCP Service Accounts and Workload Identity
5.3.1 Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS
5.4.1 Ensure legacy Compute Engine instance metadata APIs are Disabled
5.4.2 Ensure the GKE Metadata Server is Enabled
5.5.1 Ensure Container-Optimized OS (cos_containerd) is used for GKE node images
5.5.2 Ensure Node Auto-Repair is enabled for GKE nodes
5.5.3 Ensure Node Auto-Upgrade is enabled for GKE nodes
5.5.4 When creating New Clusters - Automate GKE version management using Release Channels
5.5.5 Ensure Shielded GKE Nodes are Enabled
5.5.6 Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled
5.5.7 Ensure Secure Boot for Shielded GKE Nodes is Enabled
5.6.1 Enable VPC Flow Logs and Intranode Visibility
5.6.2 Ensure use of VPC-native clusters
5.6.3 Ensure Control Plane Authorized Networks is Enabled
5.6.4 Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled
5.6.5 Ensure clusters are created with Private Nodes
5.6.6 Consider firewalling GKE worker nodes
5.6.7 Ensure Network Policy is Enabled and set as appropriate
5.6.8 Ensure use of Google-managed SSL Certificates
5.7.1 Ensure Logging and Cloud Monitoring is Enabled
5.7.2 Enable Linux auditd logging
5.8.1 Ensure authentication using Client Certificates is Disabled
5.8.2 Manage Kubernetes RBAC users with Google Groups for GKE
5.8.3 Ensure Legacy Authorization (ABAC) is Disabled
5.9.1 Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD)
5.10.1 Ensure Kubernetes Web UI is Disabled
5.10.2 Ensure that Alpha clusters are not used for production workloads
5.10.3 Consider GKE Sandbox for running untrusted workloads
5.10.4 Ensure use of Binary Authorization
5.10.5 Enable Cloud Security Command Center (Cloud SCC)
5.10.6 Enable Security Posture
Appendix: CIS Controls v8 Unmapped Recommendations

Recommendation | Set Correctly (Yes/No)
No unmapped recommendations to CIS Controls v8.0
Appendix: Change History
Date | Version | Changes for this version