OpenShift Container Platform 4.3
Storage
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides instructions for configuring persistent volumes from various storage back
ends and managing dynamic allocation from Pods.
Table of Contents

CHAPTER 1. UNDERSTANDING PERSISTENT STORAGE
1.1. PERSISTENT STORAGE OVERVIEW
1.2. LIFECYCLE OF A VOLUME AND CLAIM
1.2.1. Provision storage
1.2.2. Bind claims
1.2.3. Use Pods and claimed PVs
1.2.4. Storage Object in Use Protection
1.2.5. Release volumes
1.2.6. Reclaim volumes
1.3. PERSISTENT VOLUMES
1.3.1. Types of PVs
1.3.2. Capacity
1.3.3. Access modes
1.3.4. Phase
1.3.4.1. Mount options
1.4. PERSISTENT VOLUME CLAIMS
1.4.1. Storage classes
1.4.2. Access modes
1.4.3. Resources
1.4.4. Claims as volumes
1.5. BLOCK VOLUME SUPPORT
1.5.1. Block volume examples
CHAPTER 2. CONFIGURING PERSISTENT STORAGE
2.1. PERSISTENT STORAGE USING AWS ELASTIC FILE SYSTEM
2.1.1. Store the EFS variables in a ConfigMap
2.1.2. Configuring authorization for EFS volumes
2.1.3. Create the EFS StorageClass
2.1.4. Create the EFS provisioner
2.1.5. Create the EFS PersistentVolumeClaim
2.2. PERSISTENT STORAGE USING AWS ELASTIC BLOCK STORE
2.2.1. Creating the EBS Storage Class
2.2.2. Creating the Persistent Volume Claim
2.2.3. Volume format
2.2.4. Maximum Number of EBS Volumes on a Node
2.3. PERSISTENT STORAGE USING AZURE
2.3.1. Creating the Azure storage class
2.3.2. Creating the Persistent Volume Claim
2.3.3. Volume format
2.4. PERSISTENT STORAGE USING AZURE FILE
2.4.1. Create the Azure File share PersistentVolumeClaim
2.4.2. Mount the Azure File share in a Pod
2.5. PERSISTENT STORAGE USING CINDER
2.5.1. Manual provisioning with Cinder
2.5.1.1. Creating the persistent volume
2.5.1.2. Persistent volume formatting
2.5.1.3. Cinder volume security
2.6. PERSISTENT STORAGE USING THE CONTAINER STORAGE INTERFACE (CSI)
2.6.1. CSI Architecture
2.6.1.1. External CSI controllers
2.6.1.2. CSI Driver DaemonSet
CHAPTER 3. EXPANDING PERSISTENT VOLUMES
3.1. ENABLING VOLUME EXPANSION SUPPORT
3.2. EXPANDING CSI VOLUMES
3.3. EXPANDING FLEXVOLUME WITH A SUPPORTED DRIVER
3.4. EXPANDING PERSISTENT VOLUME CLAIMS (PVCS) WITH A FILE SYSTEM
3.5. RECOVERING FROM FAILURE WHEN EXPANDING VOLUMES
CHAPTER 4. DYNAMIC PROVISIONING
4.1. ABOUT DYNAMIC PROVISIONING
4.2. AVAILABLE DYNAMIC PROVISIONING PLUG-INS
4.3. DEFINING A STORAGECLASS
4.3.1. Basic StorageClass object definition
4.3.2. StorageClass annotations
4.3.3. OpenStack Cinder object definition
4.3.4. AWS Elastic Block Store (EBS) object definition
4.3.5. Azure Disk object definition
4.3.6. Azure File object definition
4.3.6.1. Considerations when using Azure File
4.3.7. GCE PersistentDisk (gcePD) object definition
4.3.8. VMware vSphere object definition
4.4. CHANGING THE DEFAULT STORAGECLASS
CHAPTER 1. UNDERSTANDING PERSISTENT STORAGE
PVCs are specific to a project, and are created and used by developers as a means to use a PV. PV
resources on their own are not scoped to any single project; they can be shared across the entire
OpenShift Container Platform cluster and claimed from any project. After a PV is bound to a PVC, that
PV cannot then be bound to additional PVCs. This has the effect of scoping a bound PV to a single
namespace, that of the binding project.
PVs are defined by a PersistentVolume API object, which represents a piece of existing storage in the
cluster that was either statically provisioned by the cluster administrator or dynamically provisioned
using a StorageClass object. It is a resource in the cluster just like a node is a cluster resource.
PVs are volume plug-ins like Volumes but have a lifecycle that is independent of any individual Pod that
uses the PV. PV objects capture the details of the implementation of the storage, be that NFS, iSCSI, or
a cloud-provider-specific storage system.
IMPORTANT
High availability of storage in the infrastructure is left to the underlying storage provider.
PVCs are defined by a PersistentVolumeClaim API object, which represents a request for storage by a
developer. It is similar to a Pod in that Pods consume node resources and PVCs consume PV resources.
For example, Pods can request specific levels of resources, such as CPU and memory, while PVCs can
request specific storage capacity and access modes. For example, they can be mounted once read-
write or many times read-only.
Alternatively, a cluster administrator can create a number of PVs in advance that carry the details of the
real storage that is available for use. PVs exist in the API and are available for use.
The size of all PVs might exceed your PVC size. This is especially true with manually provisioned PVs. To
minimize the excess, OpenShift Container Platform binds to the smallest PV that matches all other
criteria.
Claims remain unbound indefinitely if a matching volume does not exist or cannot be created with any
available provisioner servicing a storage class. Claims are bound as matching volumes become available.
For example, a cluster with many manually provisioned 50Gi volumes would not match a PVC requesting
100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
Once you have a claim and that claim is bound, the bound PV belongs to you for as long as you need it.
You can schedule Pods and access claimed PVs by including persistentVolumeClaim in the Pod’s
volumes block.
NOTE
A PVC is in active use by a Pod when a Pod object exists that uses the PVC.
If a user deletes a PVC that is in active use by a Pod, the PVC is not removed immediately. PVC removal
is postponed until the PVC is no longer actively used by any Pods. Also, if a cluster admin deletes a PV
that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no
longer bound to a PVC.
Retain reclaim policy allows manual reclamation of the resource for those volume plug-ins that
support it.
Recycle reclaim policy recycles the volume back into the pool of unbound persistent volumes
once it is released from its claim.
Delete reclaim policy deletes both the PersistentVolume object from OpenShift Container
Platform and the associated storage asset in external infrastructure, such as AWS EBS or
VMware vSphere.
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001 1
spec:
capacity:
storage: 5Gi 2
accessModes:
- ReadWriteOnce 3
persistentVolumeReclaimPolicy: Retain 4
...
status:
...
4 The reclaim policy, indicating how the resource should be handled once it is released.
Azure Disk
Azure File
Cinder
Fibre Channel
HostPath
iSCSI
Local volume
NFS
VMware vSphere
1.3.2. Capacity
Generally, a PV has a specific storage capacity. This is set by using the PV’s capacity attribute.
Currently, storage capacity is the only resource that can be set or requested. Future attributes may
include IOPS, throughput, and so on.
Claims are matched to volumes with similar access modes. The only two matching criteria are access
modes and size. A claim’s access modes represent a request. Therefore, you might be granted more, but
never less. For example, if a claim requests RWO, but the only volume available is an NFS PV
(RWO+ROX+RWX), the claim would then match NFS because it supports RWO.
Direct matches are always attempted first. The volume’s modes must match or contain more modes
than you requested. The size must be greater than or equal to what is expected. If two types of volumes,
such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those
modes. There is no ordering between types of volumes and no way to choose one type over another.
All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder
gets the group with matching modes and iterates over each, in size order, until one size matches.
IMPORTANT
A volume’s AccessModes are descriptors of the volume’s capabilities. They are not
enforced constraints. The storage provider is responsible for runtime errors resulting
from invalid use of the resource.
For example, NFS offers ReadWriteOnce access mode. You must mark the claims as
read-only if you want to use the volume’s ROX capability. Errors in the provider show up
at runtime as mount errors.
iSCSI and Fibre Channel volumes do not currently have any fencing mechanisms. You
must ensure the volumes are only used by one node at a time. In certain situations, such
as draining a node, the volumes can be used simultaneously by two nodes. Before draining
the node, first ensure the Pods that use these volumes are deleted.
Volume plug-in     ReadWriteOnce   ReadOnlyMany   ReadWriteMany
AWS EBS            ✓               -              -
Azure File         ✓               ✓              ✓
Azure Disk         ✓               -              -
Cinder             ✓               -              -
Fibre Channel      ✓               ✓              -
HostPath           ✓               -              -
iSCSI              ✓               ✓              -
Local volume       ✓               -              -
NFS                ✓               ✓              ✓
VMware vSphere     ✓               -              -
NOTE
Use a recreate deployment strategy for Pods that rely on AWS EBS.
1.3.4. Phase
Volumes can be found in one of the following phases:

Phase       Description
Available   A free resource that is not yet bound to a claim.
Bound       The volume is bound to a claim.
Released    The claim was deleted, but the resource is not yet reclaimed by the cluster.
Failed      The volume has failed its automatic reclamation.
You can view the name of the PVC bound to the PV by running:
$ oc get pv <pv-name>
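A sketch of typical output, with hypothetical values; the CLAIM column shows the namespace and name
of the bound PVC:

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   AGE
pv0001   5Gi        RWO            Retain           Bound    default/claim1                  2m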
You can specify mount options while mounting a PV by using the annotation
volume.beta.kubernetes.io/mount-options.
For example:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
annotations:
volume.beta.kubernetes.io/mount-options: rw,nfsvers=4,noexec 1
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
nfs:
path: /tmp
server: 172.17.0.2
persistentVolumeReclaimPolicy: Retain
claimRef:
name: claim1
namespace: default
1 Specified mount options are used while mounting the PV to the disk.
Azure Disk
Azure File
Cinder
iSCSI
Local volume
NFS
VMware vSphere
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim 1
spec:
accessModes:
- ReadWriteOnce 2
resources:
requests:
storage: 8Gi 3
storageClassName: gold 4
status:
...
IMPORTANT
The cluster administrator can also set a default storage class for all PVCs. When a default storage class
is configured, the PVC must explicitly set the StorageClass annotation or the storageClassName field to
"" to be bound to a PV without a storage class.
NOTE
If more than one StorageClass is marked as default, a PVC can only be created if the
storageClassName is explicitly specified. Therefore, only one StorageClass should be
set as the default.
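For example, a minimal sketch of a PVC that binds only to a PV without a storage class, even when a
default storage class is configured; the claim name and size are illustrative:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: no-class-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""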
1.4.3. Resources
Claims, like Pods, can request specific quantities of a resource. In this case, the request is for
storage. The same resource model applies to volumes and claims.
Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the
Pod that uses the claim. The cluster finds the claim in the Pod’s namespace and uses it to get the
PersistentVolume backing the claim. The volume is mounted to the host and into the Pod, for example:
kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: dockerfile/nginx
volumeMounts:
- mountPath: "/var/www/html" 1
name: mypd 2
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim 3
Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PVC specification.
IMPORTANT
Pods using raw block volumes must be configured to allow privileged containers.
The following volume plug-ins support block volumes:
AWS EBS
Azure Disk
Azure File
Cinder
Fibre Channel
GCP
HostPath
iSCSI
Local volume
NFS
VMware vSphere
NOTE
Any of the block volumes that can be provisioned manually, but are not provided as fully
supported, are included as a Technology Preview feature only. Technology Preview
features are not supported with Red Hat production service level agreements (SLAs) and
might not be functionally complete. Red Hat does not recommend using them in
production. These features provide early access to upcoming product features, enabling
customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features,
see https://access.redhat.com/support/offerings/techpreview/.
PV example
apiVersion: v1
kind: PersistentVolume
metadata:
name: block-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
volumeMode: Block 1
persistentVolumeReclaimPolicy: Retain
fc:
targetWWNs: ["50060e801049cfd1"]
lun: 0
readOnly: false
1 volumeMode must be set to Block to indicate that this PV is a raw block volume.
PVC example
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: block-pvc
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block 1
resources:
requests:
storage: 10Gi
1 volumeMode must be set to Block to indicate that a raw block PVC is requested.
apiVersion: v1
kind: Pod
metadata:
name: pod-with-block-volume
spec:
containers:
- name: fc-container
image: fedora:26
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
volumeDevices: 1
- name: data
devicePath: /dev/xvda 2
volumes:
- name: data
persistentVolumeClaim:
claimName: block-pvc 3
2 devicePath, instead of mountPath, represents the path to the physical device where the raw block
is mapped to the system.
3 The volume source must be of type persistentVolumeClaim and must match the name of the
PVC as expected.
The following values are accepted for volumeMode, with Filesystem being the default:

Value        Default
Filesystem   Yes
Block        No
CHAPTER 2. CONFIGURING PERSISTENT STORAGE
IMPORTANT
Elastic File System is a Technology Preview feature only. Technology Preview features are
not supported with Red Hat production service level agreements (SLAs) and might not
be functionally complete. Red Hat does not recommend using them in production. These
features provide early access to upcoming product features, enabling customers to test
functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features,
see https://access.redhat.com/support/offerings/techpreview/.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent
storage and gives users a way to request those resources without having any knowledge of the
underlying infrastructure. AWS Elastic File System (EFS) volumes can be provisioned dynamically.
PersistentVolumes are not bound to a single project or namespace; they can be shared across the
OpenShift Container Platform cluster. PersistentVolumeClaims are specific to a project or namespace
and can be requested by users.
Prerequisites
Configure the AWS security groups to allow inbound NFS traffic from the EFS volume’s security
group.
Configure the AWS EFS volume to allow incoming SSH traffic from any host.
Additional references
Amazon EFS
Procedure
1. Define an OpenShift Container Platform ConfigMap that contains the environment variables by
creating a configmap.yaml file with the following contents:
apiVersion: v1
kind: ConfigMap
metadata:
name: efs-provisioner
data:
file.system.id: <file-system-id> 1
aws.region: <aws-region> 2
provisioner.name: openshift.org/aws-efs 3
dns.name: "" 4
1 Defines the Amazon Web Services (AWS) EFS file system ID.
4 An optional argument that specifies the new DNS name where the EFS volume is located.
If no DNS name is provided, the provisioner will search for the EFS volume at <file-
system-id>.efs.<aws-region>.amazonaws.com.
2. After the file has been configured, create it in your cluster by running the following command:
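The command itself is not shown above; assuming the file is named configmap.yaml, as in step 1, and
that the EFS provisioner runs in the default namespace (as the role bindings later in this section
assume), it is expected to look like the following:

$ oc create -f configmap.yaml -n default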
Procedure
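1. Create the service account that the EFS provisioner runs as. The role bindings later in this
procedure reference a service account named efs-provisioner in the default namespace, for
example:

$ oc create serviceaccount efs-provisioner -n default

2. Create a file, clusterrole.yaml, that defines a cluster role with the necessary permissions: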
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: efs-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
- apiGroups: ["security.openshift.io"]
resources: ["securitycontextconstraints"]
verbs: ["use"]
resourceNames: ["hostmount-anyuid"]
3. Create a file, clusterrolebinding.yaml, that defines a cluster role binding that assigns the
defined role to the service account:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-efs-provisioner
subjects:
- kind: ServiceAccount
name: efs-provisioner
namespace: default 1
roleRef:
kind: ClusterRole
name: efs-provisioner-runner
apiGroup: rbac.authorization.k8s.io
1 The namespace where the EFS provisioner pod will run. If the EFS provisioner is running in
a namespace other than default, this value must be updated.
4. Create a file, role.yaml, that defines a role with the necessary permissions:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-efs-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
5. Create a file, rolebinding.yaml, that defines a role binding that assigns this role to the service
account:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-efs-provisioner
subjects:
- kind: ServiceAccount
name: efs-provisioner
namespace: default 1
roleRef:
kind: Role
name: leader-locking-efs-provisioner
apiGroup: rbac.authorization.k8s.io
1 The namespace where the EFS provisioner pod will run. If the EFS provisioner is running in
a namespace other than default, this value must be updated.
$ oc create -f clusterrole.yaml,clusterrolebinding.yaml,role.yaml,rolebinding.yaml
Procedure
1. Define the EFS StorageClass by creating a storageclass.yaml file with the following contents:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: aws-efs
provisioner: openshift.org/aws-efs
parameters:
gidMin: "2048" 1
gidMax: "2147483647" 2
gidAllocate: "true" 3
1 An optional argument that defines the minimum group ID (GID) for volume assignments.
The default value is 2048.
2 An optional argument that defines the maximum GID for volume assignments. The default
value is 2147483647.
2. After the file has been configured, create it in your cluster by running the following command:
$ oc create -f storageclass.yaml
Prerequisites
Create a service account that contains the necessary cluster and role permissions.
Configure the Amazon Web Services (AWS) security groups to allow incoming NFS traffic on all
OpenShift Container Platform nodes.
Configure the AWS EFS volume security groups to allow incoming SSH traffic from all sources.
Procedure
1. Define the EFS provisioner by creating a provisioner.yaml with the following contents:
kind: Pod
apiVersion: v1
metadata:
name: efs-provisioner
spec:
serviceAccount: efs-provisioner
containers:
- name: efs-provisioner
image: quay.io/external_storage/efs-provisioner:latest
env:
- name: PROVISIONER_NAME
valueFrom:
configMapKeyRef:
name: efs-provisioner
key: provisioner.name
- name: FILE_SYSTEM_ID
valueFrom:
configMapKeyRef:
name: efs-provisioner
key: file.system.id
- name: AWS_REGION
valueFrom:
configMapKeyRef:
name: efs-provisioner
key: aws.region
- name: DNS_NAME
valueFrom:
configMapKeyRef:
name: efs-provisioner
key: dns.name
optional: true
volumeMounts:
- name: pv-volume
mountPath: /persistentvolumes
volumes:
- name: pv-volume
nfs:
server: <file-system-id>.efs.<region>.amazonaws.com 1
path: / 2
1 Contains the DNS name of the EFS volume. This field must be updated for the Pod to
discover the EFS volume.
2 The mount path of the EFS volume. Each persistent volume is created as a separate
subdirectory on the EFS volume. If this EFS volume is used for other projects outside of
OpenShift Container Platform, it is recommended to manually create a unique subdirectory
on EFS for the cluster to prevent projects from accessing another project’s data. Specifying
a directory that does not exist results in an error.
2. After the file has been configured, create it in your cluster by running the following command:
$ oc create -f provisioner.yaml
Prerequisites
Procedure (UI)
1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
2. In the persistent volume claims overview, click Create Persistent Volume Claim.
a. Select the storage class that you created from the list.
c. Select the access mode to determine the read and write access for the created storage
claim.
NOTE
Although you must enter a size, every Pod that accesses the EFS volume has
unlimited storage. Define a value, such as 1Mi, that reminds you that the
storage size is unlimited.
4. Click Create to create the persistent volume claim and generate a persistent volume.
Procedure (CLI)
1. Alternatively, you can define EFS PersistentVolumeClaims by creating a file, pvc.yaml, with the
following contents:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: efs-claim 1
namespace: test-efs
annotations:
volume.beta.kubernetes.io/storage-provisioner: openshift.org/aws-efs
finalizers:
- kubernetes.io/pvc-protection
spec:
accessModes:
- ReadWriteOnce 2
resources:
requests:
storage: 5Gi 3
storageClassName: aws-efs 4
volumeMode: Filesystem
2 The access mode to determine the read and write access for the created PVC.
2. After the file has been configured, create it in your cluster by running the following command:
$ oc create -f pvc.yaml
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent
storage and gives users a way to request those resources without having any knowledge of the
underlying infrastructure. AWS Elastic Block Store volumes can be provisioned dynamically. Persistent
volumes are not bound to a single project or namespace; they can be shared across the OpenShift
Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be
requested by users.
IMPORTANT
High availability of storage in the infrastructure is left to the underlying storage provider.
Additional References
Amazon EC2
Procedure
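The console steps for this procedure are not shown above. As an alternative sketch, an EBS-backed
StorageClass can be defined in YAML; the class name and volume type are illustrative:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete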
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift
Container Platform.
Procedure
1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
2. In the persistent volume claims overview, click Create Persistent Volume Claim.
a. Select the storage class created previously from the drop-down menu.
c. Select the access mode. This determines the read and write access for the created storage
claim.
4. Click Create to create the persistent volume claim and generate a persistent volume.
Before OpenShift Container Platform mounts the volume and passes it to a container, the system
checks that it contains a file system as specified by the fsType parameter in the persistent volume
definition. If the device is not formatted with the file system, all data from the device is erased and the
device is automatically formatted with the given file system.
This allows using unformatted AWS volumes as persistent volumes, because OpenShift Container
Platform formats them before the first use.
OpenShift Container Platform can be configured to have a higher limit by setting the environment
variable KUBE_MAX_PD_VOLS. However, AWS requires a particular naming scheme ( AWS Device
Naming) for attached devices, which only supports a maximum of 52 volumes. This limits the number of
volumes that can be attached to a node via OpenShift Container Platform to 52.
IMPORTANT
High availability of storage in the infrastructure is left to the underlying storage provider.
Additional references
Procedure
i. Enter the storage account type. This corresponds to your Azure storage account SKU
tier. Valid options are Premium_LRS, Standard_LRS, StandardSSD_LRS, and
UltraSSD_LRS.
ii. Enter the kind of account. Valid options are shared, dedicated, and managed.
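As a sketch, the equivalent StorageClass definition in YAML, where skuName corresponds to the
storage account type and kind to the account kind described above; the class name and parameter
values are illustrative:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Premium_LRS
  kind: managed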
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift
Container Platform.
Procedure
1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
2. In the persistent volume claims overview, click Create Persistent Volume Claim.
a. Select the storage class created previously from the drop-down menu.
c. Select the access mode. This determines the read and write access for the created storage
claim.
4. Click Create to create the persistent volume claim and generate a persistent volume.
Before OpenShift Container Platform mounts the volume and passes it to a container, the system
checks that it contains a file system as specified by the fsType parameter in the persistent volume
definition. If the device is not formatted with the file system, all data from the device is erased and the
device is automatically formatted with the given file system.
This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container
Platform formats them before the first use.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent
storage and gives users a way to request those resources without having any knowledge of the
underlying infrastructure. Azure File volumes can be provisioned dynamically.
PersistentVolumes are not bound to a single project or namespace; they can be shared across the
OpenShift Container Platform cluster. PersistentVolumeClaims are specific to a project or namespace
and can be requested by users for use in applications.
IMPORTANT
High availability of storage in the infrastructure is left to the underlying storage provider.
Additional references
Azure Files
Prerequisites
The credentials to access this share, specifically the storage account and key, are available.
Procedure
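The PersistentVolume definition below references a Secret that stores the Azure storage account
name and key. A sketch of creating that Secret, where the secret name and credential values are
placeholders and the key names follow the in-tree Azure File plug-in convention:

$ oc create secret generic <secret-name> \
    --from-literal=azurestorageaccountname=<storage-account-name> \
    --from-literal=azurestorageaccountkey=<storage-account-key>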
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001" 1
spec:
capacity:
storage: "5Gi" 2
accessModes:
- "ReadWriteOnce"
storageClassName: azure-file-sc
azureFile:
secretName: <secret-name> 3
shareName: share-1 4
readOnly: false
3 The name of the Secret that contains the Azure File share credentials.
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
name: "claim1" 1
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "5Gi" 2
storageClassName: azure-file-sc 3
volumeName: "pv0001" 4
3 The name of the StorageClass that is used to provision the PersistentVolume. Specify the
StorageClass used in the PersistentVolume definition.
4 The name of the existing PersistentVolume that references the Azure File share.
Prerequisites
Procedure
apiVersion: v1
kind: Pod
metadata:
name: pod-name 1
spec:
containers:
...
volumeMounts:
- mountPath: "/data" 2
name: azure-file-share
volumes:
- name: azure-file-share
persistentVolumeClaim:
claimName: claim1 3
2 The path to mount the Azure File share inside the Pod.
Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or
namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume
claims are specific to a project or namespace and can be requested by users.
Additional resources
For more information about how OpenStack Block Storage provides persistent block storage
management for virtual hard drives, see OpenStack Cinder.
Prerequisites
Cinder volume ID
You must define your persistent volume (PV) in an object definition before creating it in OpenShift
Container Platform:
Procedure
cinder-persistentvolume.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001" 1
spec:
capacity:
storage: "5Gi" 2
accessModes:
- "ReadWriteOnce"
cinder: 3
fsType: "ext3" 4
volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5
1 The name of the volume that is used by persistent volume claims or pods.
4 The file system that is created when the volume is mounted for the first time.
IMPORTANT
Do not change the fstype parameter value after the volume is formatted and
provisioned. Changing this value can result in data loss and Pod failure.
2. Create the object definition file you saved in the previous step.
$ oc create -f cinder-persistentvolume.yaml
You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them
before the first use.
Before OpenShift Container Platform mounts the volume and passes it to a container, the system
checks that it contains a file system as specified by the fsType parameter in the PV definition. If the
device is not formatted with the file system, all data from the device is erased and the device is
automatically formatted with the given file system.
If you use Cinder PVs in your application, configure security for their deployment configurations.
Prerequisite
Procedure
2. In your application’s deployment configuration, provide the service account name and
securityContext:
apiVersion: v1
kind: ReplicationController
metadata:
name: frontend-1
spec:
replicas: 1 1
selector: 2
name: frontend
template: 3
metadata:
labels: 4
name: frontend 5
spec:
containers:
- image: openshift/hello-openshift
name: helloworld
ports:
- containerPort: 8080
protocol: TCP
restartPolicy: Always
serviceAccountName: <service_account> 6
securityContext:
fsGroup: 7777 7
4 The labels on the Pod. They must include labels from the label selector.
IMPORTANT
OpenShift Container Platform does not ship with any CSI drivers. It is recommended to
use the CSI drivers provided by the community or storage vendors.
Installation instructions differ by driver, and are found in each driver’s documentation.
Follow the instructions provided by the CSI driver.
OpenShift Container Platform 4.3 supports version 1.1.0 of the CSI specification.
The following diagram provides a high-level overview about the components running in pods in the
OpenShift Container Platform cluster.
It is possible to run multiple CSI drivers for different storage backends. Each driver needs its own
external controllers' deployment and DaemonSet with the driver and CSI registrar.
External CSI Controllers is a deployment that deploys one or more pods with three containers:
An external CSI attacher container that translates attach and detach calls from OpenShift
Container Platform to respective ControllerPublish and ControllerUnpublish calls to the CSI driver.
An external CSI provisioner container that translates provision and delete calls from OpenShift
Container Platform to respective CreateVolume and DeleteVolume calls to the CSI driver.
A CSI driver container.
The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX
Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible
from outside of the pod.
NOTE
attach, detach, provision, and delete operations typically require the CSI driver to use
credentials to the storage backend. Run the CSI controller pods on infrastructure nodes
so the credentials are never leaked to user processes, even in the event of a catastrophic
security breach on a compute node.
NOTE
The external attacher must also run for CSI drivers that do not support third-party attach
or detach operations. The external attacher will not issue any ControllerPublish or
ControllerUnpublish operations to the CSI driver. However, it still must run to implement
the necessary OpenShift Container Platform attachment API.
The CSI driver DaemonSet runs a pod on every node that allows OpenShift Container Platform to
mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent
volumes (PVs). The pod with the CSI driver installed contains the following containers:
A CSI driver registrar, which registers the CSI driver into the openshift-node service running on
the node. The openshift-node process running on the node then directly connects with the CSI
driver using the UNIX Domain Socket available on the node.
A CSI driver.
The CSI driver deployed on the node should have as few credentials to the storage backend as possible.
OpenShift Container Platform will only use the node plug-in set of CSI calls such as
NodePublish/NodeUnpublish and NodeStage/NodeUnstage, if these calls are implemented.
Procedure
Create a default storage class that ensures all PVCs that do not require any special storage class
are provisioned by the installed CSI driver.
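A minimal sketch of such a default StorageClass; the class name is illustrative and the provisioner value
must be replaced with the name of the installed CSI driver:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: <csi-driver-name>
reclaimPolicy: Delete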
Prerequisites
Procedure
# oc new-app mysql-persistent
--> Deploying template "openshift/mysql-persistent" to project default
...
# oc get pvc
NAME    STATUS   VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql   Bound    kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi        RWO            cinder         3s
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent
storage and gives users a way to request those resources without having any knowledge of the
underlying infrastructure. PersistentVolumes are not bound to a single project or namespace; they can
be shared across the OpenShift Container Platform cluster. PersistentVolumeClaims are specific to a
project or namespace and can be requested by users.
IMPORTANT
High availability of storage in the infrastructure is left to the underlying storage provider.
Additional references
Fibre Channel
2.7.1. Provisioning
To provision Fibre Channel volumes using the PersistentVolume API, the following must be available:
Prerequisites
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
fc:
targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 1
lun: 2
fsType: ext4
IMPORTANT
Changing the value of the fstype parameter after the volume has been formatted and
provisioned can result in data loss and pod failure.
Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single
PersistentVolume, and unique names must be used for PersistentVolumes.
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such
as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
Users request storage with a PersistentVolumeClaim. This claim only lives in the user’s namespace, and
can only be referenced by a pod within that same namespace. Any attempt to access a
PersistentVolume across a namespace causes the pod to fail.
Each Fibre Channel LUN must be accessible by all nodes in the cluster.
To use storage from a back-end that does not have a built-in plug-in, you can extend OpenShift
Container Platform through FlexVolume drivers and provide persistent storage to applications.
Pods interact with FlexVolume drivers through the flexvolume in-tree plugin.
Additional References
IMPORTANT
Attach and detach operations are not supported in OpenShift Container Platform for
FlexVolume.
The JSON passed to the FlexVolume driver contains:
All options from flexVolume.options.
Some options from flexVolume prefixed by kubernetes.io/, such as fsType and readwrite.
The content of the secret referenced by flexVolume.secretRef, prefixed by kubernetes.io/secret/.
{
"fooServer": "192.168.0.1:1234", 1
"fooVolumeName": "bar",
"kubernetes.io/fsType": "ext4", 2
"kubernetes.io/readwrite": "ro", 3
"kubernetes.io/secret/<key name>": "<key value>", 4
"kubernetes.io/secret/<another key name>": "<another key value>",
}
4 All keys and their values from the secret referenced by flexVolume.secretRef.
OpenShift Container Platform expects JSON data on standard output of the driver. When not specified,
the output describes the result of the operation.
{
"status": "<Success/Failure/Not supported>",
"message": "<Reason for success/failure>"
}
Exit code of the driver should be 0 for success and 1 for error.
Operations should be idempotent, which means that the mounting of an already mounted volume should
result in a successful operation.
Prerequisites
init
Initializes the driver. It is called during initialization of all nodes.
Arguments: none
mount
Mounts a volume to directory. This can include anything that is necessary to mount the
volume, including finding the device and then mounting the device.
unmount
Unmounts a volume from a directory. This can include anything that is necessary to clean up
the volume after unmounting.
Arguments: <mount-dir>
mountdevice
Mounts a volume’s device to a directory where individual Pods can then bind mount.
This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do
not implement this call-out.
unmountdevice
Unmounts a volume’s device from a directory.
Arguments: <mount-dir>
All other operations should return JSON with {"status": "Not supported"} and exit code 1.
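As a sketch only, a FlexVolume driver skeleton in shell that follows the contract described above; the
storage-specific mount and unmount logic is omitted and would need to be filled in for a real back end:

#!/bin/bash
# Hypothetical FlexVolume driver skeleton. The first argument is the operation name.
op=$1

case "$op" in
  init)
    # Called during initialization of each node; this sketch declares no attach support.
    echo '{"status": "Success", "capabilities": {"attach": false}}'
    exit 0
    ;;
  mount)
    # $2 is the mount directory, $3 is the JSON options blob described above.
    # Storage-specific mount logic goes here.
    echo '{"status": "Failure", "message": "mount not implemented in this sketch"}'
    exit 1
    ;;
  unmount)
    # $2 is the mount directory; clean-up logic goes here.
    echo '{"status": "Failure", "message": "unmount not implemented in this sketch"}'
    exit 1
    ;;
  *)
    # All other operations are reported as not supported, as described above.
    echo '{"status": "Not supported"}'
    exit 1
    ;;
esac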
Procedure
To install the FlexVolume driver:
1. Ensure that the executable file exists on all nodes in the cluster.
For example, to install the FlexVolume driver for the storage foo, place the executable file at:
/etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo.
Procedure
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001 1
spec:
capacity:
storage: 1Gi 2
accessModes:
- ReadWriteOnce
flexVolume:
driver: openshift.com/foo 3
fsType: "ext4" 4
secretRef: foo-secret 5
readOnly: true 6
options: 7
fooServer: 192.168.0.1:1234
fooVolumeName: bar
1 The name of the volume. This is how it is identified through persistent volume claims or from Pods.
This name can be different from the name of the volume on back-end storage.
4 The file system that is present on the volume. This field is optional.
5 The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver
on invocation. This field is optional.
7 The additional options for the FlexVolume driver. In addition to the flags specified by the user in
the options field, the following flags are also passed to the executable:
"fsType":"<FS type>",
"readwrite":"<rw>",
"secret/key1":"<secret1>"
...
"secret/keyN":"<secretN>"
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent
storage and gives users a way to request those resources without having any knowledge of the
underlying infrastructure.
Persistent volumes are not bound to a single project or namespace; they can be shared across the
OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace
and can be requested by users.
IMPORTANT
High availability of storage in the infrastructure is left to the underlying storage provider.
Additional references
Procedure
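The console steps for creating the storage class are not shown above. As an alternative sketch, a GCE
Persistent Disk StorageClass can be defined in YAML; the class name and disk type are illustrative:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard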
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift
Container Platform.
Procedure
1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
2. In the persistent volume claims overview, click Create Persistent Volume Claim.
a. Select the storage class created previously from the drop-down menu.
c. Select the access mode. This determines the read and write access for the created storage
claim.
4. Click Create to create the persistent volume claim and generate a persistent volume.
Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it
contains a file system as specified by the fsType parameter in the persistent volume definition. If the
device is not formatted with the file system, all data from the device is erased and the device is
automatically formatted with the given file system.
This allows using unformatted GCE volumes as persistent volumes, because OpenShift Container
Platform formats them before the first use.
IMPORTANT
The cluster administrator must configure Pods to run as privileged. This grants access to
Pods in the same node.
2.10.1. Overview
OpenShift Container Platform supports hostPath mounting for development and testing on a single-
node cluster.
In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a
network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume.
Network resources support the use of StorageClasses to set up dynamic provisioning.
Procedure
1. Define the persistent volume (PV). Create a file, pv.yaml, with the PersistentVolume object
definition:
apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume 1
labels:
type: local
spec:
storageClassName: manual 2
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce 3
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data" 4
4 The configuration file specifies that the volume is at /mnt/data on the cluster’s node.
$ oc create -f pv.yaml
3. Define the persistent volume claim (PVC). Create a file, pvc.yaml, with the
PersistentVolumeClaim object definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pvc-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: manual
$ oc create -f pvc.yaml
Prerequisites
Procedure
apiVersion: v1
kind: Pod
metadata:
name: pod-name 1
spec:
containers:
...
securityContext:
privileged: true 2
volumeMounts:
- mountPath: /data 3
name: hostpath-privileged
...
securityContext: {}
volumes:
- name: hostpath-privileged
persistentVolumeClaim:
claimName: task-pvc-volume 4
3 The path to mount the hostPath share inside the privileged Pod.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent
storage and gives users a way to request those resources without having any knowledge of the
underlying infrastructure.
IMPORTANT
High availability of storage in the infrastructure is left to the underlying storage provider.
IMPORTANT
When you use iSCSI on Amazon Web Services, you must update the default security
policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports
860 and 3260.
IMPORTANT
OpenShift Container Platform assumes that all nodes in the cluster have already configured the iSCSI
initiator, that is, have installed the iscsi-initiator-utils package and configured their initiator name in
/etc/iscsi/initiatorname.iscsi. See the Storage Administration Guide for details.
2.11.1. Provisioning
Verify that the storage exists in the underlying infrastructure before mounting it as a volume in
OpenShift Container Platform. All that is required for iSCSI is the iSCSI target portal, a valid iSCSI
Qualified Name (IQN), a valid LUN number, the file system type, and the PersistentVolume API.
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.16.154.81:3260
iqn: iqn.2014-12.example.server:storage.target00
lun: 0
fsType: 'ext4'
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such
as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
fsType: ext4
chapAuthDiscovery: true 1
chapAuthSession: true 2
secretRef:
name: chap-secret 3
3 Specify the name of the Secrets object that contains the user name and password. This Secrets object
must be available in all namespaces that can use the referenced volume.
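A sketch of such a Secrets object, using the session CHAP keys understood by the in-tree iSCSI plug-in;
the secret name matches the chap-secret reference above and the credential values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: chap-secret
type: kubernetes.io/iscsi-chap
stringData:
  node.session.auth.username: <username>
  node.session.auth.password: <password>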
To specify multiple paths in the Pod specification, use the portals field. For example:
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
fsType: ext4
readOnly: false
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 10.0.0.1:3260
portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260']
iqn: iqn.2016-04.test.com:storage.target00
lun: 0
initiatorName: iqn.2016-04.test.com:custom.iqn 1
fsType: ext4
readOnly: false
Local volumes can be used without manually scheduling Pods to nodes, because the system is aware of
the volume node’s constraints. However, local volumes are still subject to the availability of the
underlying node and are not suitable for all applications.
Prerequisites
Procedure
$ oc new-project local-storage
c. Type Local Storage into the filter box to locate the Local Storage Operator.
d. Click Install.
e. On the Create Operator Subscription page, select A specific namespace on the cluster.
Select local-storage from the drop-down menu.
f. Adjust the values for the Update Channel and Approval Strategy to the desired values.
g. Click Subscribe.
3. Once finished, the Local Storage Operator will be listed in the Installed Operators section of
the web console.
Prerequisites
Procedure
1. Create the local volume resource. This must define the nodes and paths to the local volumes.
NOTE
Do not use different StorageClass names for the same device. Doing so will
create multiple PVs.
Example: Filesystem
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
name: "local-disks"
namespace: "local-storage" 1
spec:
nodeSelector: 2
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ip-10-0-140-183
- ip-10-0-158-139
- ip-10-0-164-33
storageClassDevices:
- storageClassName: "local-sc"
volumeMode: Filesystem 3
fsType: xfs 4
devicePaths: 5
- /path/to/device
2 Optional: A node selector containing a list of nodes where the local storage volumes are
attached. This example uses the node host names, obtained from oc get node. If a value is
not defined, then the Local Storage Operator will attempt to find matching disks on all
available nodes.
3 The volume mode, either Filesystem or Block, defining the type of the local volumes.
4 The file system that is created when the local volume is mounted for the first time.
Example: Block
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
name: "local-disks"
namespace: "local-storage" 1
spec:
nodeSelector: 2
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ip-10-0-136-143
- ip-10-0-140-255
- ip-10-0-144-180
storageClassDevices:
- storageClassName: "localblock-sc"
volumeMode: Block 3
devicePaths: 4
- /dev/xvdg
2 Optional: A node selector containing a list of nodes where the local storage volumes are
attached. This example uses the node host names, obtained from oc get node. If a value is
not defined, then the Local Storage Operator will attempt to find matching disks on all
available nodes.
3 The volume mode, either Filesystem or Block, defining the type of the local volumes.
2. Create the local volume resource in your OpenShift Container Platform cluster, specifying the
file you just created:
$ oc create -f <local-volume>.yaml
3. Verify the provisioner was created, and the corresponding DaemonSets were created:
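The verification command is not shown above; assuming the Operator was installed into the
local-storage project, a reasonable check is:

$ oc get all -n local-storage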
Note the desired and current number of DaemonSet processes. If the desired count is 0, it
indicates the label selectors were invalid.
$ oc get pv
Prerequisite
Procedure
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: local-pvc-name 1
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem 2
resources:
requests:
storage: 100Gi 3
storageClassName: local-sc 4
2. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just
created:
$ oc create -f <local-pvc>.yaml
Prerequisites
Procedure
1. Include the defined claim in the resource’s Spec. The following example declares the PVC inside
a Pod:
apiVersion: v1
kind: Pod
spec:
...
containers:
volumeMounts:
- name: localpvc 1
mountPath: "/data" 2
volumes:
- name: localpvc
persistentVolumeClaim:
claimName: localpvc 3
2. Create the resource in the OpenShift Container Platform cluster, specifying the file you just
created:
$ oc create -f <local-pod>.yaml
You apply tolerations to the Local Storage Operator Pod through the LocalVolume resource and apply
taints to a node through the node specification. A taint on a node instructs the node to repel all Pods
that do not tolerate the taint. Using a specific taint that is not on other Pods ensures that the Local
Storage Operator Pod can also run on that node.
IMPORTANT
Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed
as key=value:effect. An operator allows you to leave one of these parameters empty.
Prerequisites
Local disks are attached to OpenShift Container Platform nodes with a taint.
Procedure
To configure local volumes for scheduling on tainted nodes:
1. Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the
following example:
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
name: "local-disks"
namespace: "local-storage"
spec:
tolerations:
- key: localstorage 1
operator: Equal 2
value: "localstorage" 3
storageClassDevices:
- storageClassName: "localblock-sc"
volumeMode: Block 4
devicePaths: 5
- /dev/xvdg
2 Specify the Equal operator to require the key and value parameters to match. If the operator is
Exists, the system checks that the key exists and ignores the value. If the operator is Equal, the key
and value must match.
4 The volume mode, either Filesystem or Block, defining the type of the local volumes.
The defined tolerations will be passed to the resulting DaemonSets, allowing the diskmaker and
provisioner Pods to be created for nodes that contain the specified taints.
Occasionally, local volumes must be deleted. While removing the entry in the LocalVolume resource and
deleting the PersistentVolume is typically enough, if you want to re-use the same device path or have it
managed by a different StorageClass, then additional steps are needed.
WARNING
The following procedure involves accessing a node as the root user. Modifying the
state of the node beyond the steps in this procedure could result in cluster
instability.
Prerequisite
Procedure
b. Navigate to the lines under devicePaths, and delete any representing unwanted disks.
$ oc delete pv <pv-name>
$ oc debug node/<node-name>
$ chroot /host
$ cd /mnt/local-storage/<sc-name> 1
$ rm <symlink>
To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the
local-storage project.
WARNING
Uninstalling the Local Storage Operator while local storage PVs are still in use is not
recommended. While the PVs will remain after the Operator’s removal, there might
be indeterminate behavior if the Operator is uninstalled and reinstalled without
removing the PVs and local storage resources.
Prerequisites
Procedure
c. Type Local Storage into the filter box to locate the Local Storage Operator.
d. Click the Options menu at the end of the Local Storage Operator.
3. The PVs created by the Local Storage Operator will remain in the cluster until deleted. Once
these volumes are no longer in use, delete them by running the following command:
$ oc delete pv <pv-name>
Additional resources
2.13.1. Provisioning
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift
Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is
required.
Procedure
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001 1
spec:
capacity:
storage: 5Gi 2
accessModes:
- ReadWriteOnce 3
nfs: 4
path: /tmp 5
server: 172.17.0.2 6
persistentVolumeReclaimPolicy: Retain 7
1 The name of the volume. This is the PV identity in various oc <command> pod
commands.
3 Though this appears to be related to controlling access to the volume, it is actually used
similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced
based on the accessModes.
4 The volume type being used, in this case the nfs plug-in.
7 The reclaim policy for the PV. This defines what happens to a volume when released.
NOTE
Each NFS volume must be mountable by all schedulable nodes in the cluster.
$ oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv0001 <none> 5Gi RWO Available 31s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-claim1
spec:
accessModes:
- ReadWriteOnce 1
resources:
requests:
storage: 5Gi 2
1 As mentioned above for PVs, the accessModes do not enforce security, but rather act as
labels to match a PV to a PVC.
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-claim1 Bound pv0001 5Gi RWO gp2 2m
Enforcing quotas in this way allows the developer to request persistent storage by a specific amount,
such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
Developers request NFS storage by referencing either a PVC by name or the NFS volume plug-in
directly in the volumes section of their Pod definition.
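As a sketch, referencing the NFS plug-in directly in a Pod's volumes section looks like the following,
reusing the server and path values from the PV example above:

  volumes:
  - name: nfs-volume
    nfs:
      server: 172.17.0.2
      path: /tmp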
The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS
directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plug-in mounts the
container’s NFS directory with the same POSIX ownership and permissions found on the exported NFS
directory. However, the container is not run with its effective UID equal to the owner of the NFS mount,
which is the desired behavior.
As an example, if the target NFS directory appears on the NFS server as:
$ ls -lZ /opt/nfs -d
drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs
$ id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
Then the container must match SELinux labels, and either run with a UID of 65534, the nfsnobody
owner, or with 5555 in its supplemental groups in order to access the directory.
NOTE
The owner ID of 65534 is used as an example. Even though NFS’s root_squash maps
root, uid 0, to nfsnobody, uid 65534, NFS exports can have arbitrary owner IDs. Owner
65534 is not required for NFS exports.
The recommended way to handle NFS access, assuming it is not an option to change permissions on the
NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are
used for shared storage, of which NFS is an example. In contrast, block storage, such as iSCSI, uses
the fsGroup SCC strategy and the fsGroup value in the Pod's securityContext.
Because the group ID on the example target NFS directory is 5555, the Pod can define that group ID
using supplementalGroups under the Pod’s securityContext definition. For example:
spec:
  containers:
    - name:
    ...
  securityContext: 1
    supplementalGroups: [5555] 2
1 securityContext must be defined at the Pod level, not under a specific container.
2 An array of GIDs defined for the Pod. In this case, there is one element in the array. Additional GIDs
would be comma-separated.
Assuming there are no custom SCCs that might satisfy the Pod’s requirements, the Pod likely matches
the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny, meaning that
any supplied group ID is accepted without range checking.
As a result, the above Pod passes admission and is launched. However, if group ID range checking is
desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and
maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed.
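A minimal sketch of such a custom SCC follows. The name nfs-scc and the group ID range are
illustrative; the remaining strategies are left permissive so that only the supplemental group range
is constrained:
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: nfs-scc
allowPrivilegedContainer: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  # Enforce range checking; 5555 falls inside the allowed range.
  type: MustRunAs
  ranges:
  - min: 5000
    max: 6000
With this range, the group ID 5555 used in the example is accepted, while group IDs outside the
range are rejected at admission.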
NOTE
To use a custom SCC, you must first add it to the appropriate service account. For
example, use the default service account in the given project unless another has been
specified on the Pod specification.
User IDs can be defined in the container image or in the Pod definition.
NOTE
To gain access to persistent storage, it is generally preferable to use supplemental group IDs
rather than user IDs.
In the example target NFS directory shown above, the container needs its UID set to 65534, ignoring
group IDs for the moment, so the following can be added to the Pod definition:
spec:
  containers: 1
  - name:
    ...
    securityContext:
      runAsUser: 65534 2
1 Pods contain a securityContext specific to each container and a Pod’s securityContext which
applies to all containers defined in the Pod.
Assuming the default project and the restricted SCC, the Pod’s requested user ID of 65534 is not
allowed, and therefore the Pod fails. The Pod fails for the following reasons:
All SCCs available to the Pod are examined to see which SCC allows a user ID of 65534. While all
policies of the SCCs are checked, the focus here is on user ID.
Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range
checking is required.
It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix
this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum
user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed.
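As a sketch, the runAsUser portion of such a custom SCC could look like the following. The name and
the UID range are illustrative and must include 65534:
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: nfs-uid-scc
runAsUser:
  # Enforce range checking while still allowing the NFS export owner UID.
  type: MustRunAsRange
  uidRangeMin: 65000
  uidRangeMax: 65534
...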
NOTE
To use a custom SCC, you must first add it to the appropriate service account. For
example, use the default service account in the given project unless another has been
specified on the Pod specification.
2.13.3.3. SELinux
By default, SELinux does not allow writing from a Pod to a remote NFS server. The NFS volume mounts
correctly, but is read-only.
Prerequisites
The container-selinux package must be installed. This package provides the virt_use_nfs
SELinux boolean.
Procedure
Enable the virt_use_nfs boolean using the following command. The -P option makes this
boolean persistent across reboots.
# setsebool -P virt_use_nfs 1
In order to enable arbitrary container users to read and write the volume, each exported volume on the
NFS server should conform to the following conditions:
58
CHAPTER 2. CONFIGURING PERSISTENT STORAGE
Every export must be exported using the following format:
/<example_fs> *(rw,root_squash)
The firewall must be configured to allow traffic to the mount point. For NFSv4, the default port
2049 (nfs) must be open. For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd),
and 111 (portmapper).
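As a sketch, these ports can be opened with iptables on the NFS server; adapt this to whatever
firewall tooling is actually in use, and open only port 2049 for NFSv4:
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
# iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT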
The NFS export and directory must be set up so that they are accessible by the target Pods.
Either set the export to be owned by the container’s primary UID, or supply the Pod group
access using supplementalGroups, as shown in group IDs above.
Once the claim bound to a PV is deleted, and the PV is released, the PV object should not be reused.
Instead, a new PV should be created with the same basic volume details as the original.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs1
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"
The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing claim to nfs1. This
results in nfs1 being Released. If the administrator wants to make the same NFS share available, they
should create a new PV with the same NFS server details, but a different PV name:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.1
    path: "/"
Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually
change the status of a PV from Released to Available causes errors and potential data loss.
Red Hat OpenShift Container Storage provides its own documentation library. The complete set of Red
Hat OpenShift Container Storage documentation identified below is available at
https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.2/
If you are looking for Red Hat OpenShift Container Storage information about the following topics,
see the corresponding Red Hat OpenShift Container Storage documentation:
What's new, known issues, notable bug fixes, and Technology Previews: Red Hat OpenShift Container
Storage 4.2 Release Notes
Supported workloads, layouts, hardware and software requirements, sizing and scaling
recommendations: Planning your Red Hat OpenShift Container Storage 4.2 deployment
Deploying Red Hat OpenShift Container Storage 4.2 on an existing OpenShift Container Platform
cluster: Deploying Red Hat OpenShift Container Storage 4.2
Managing a Red Hat OpenShift Container Storage 4.2 cluster: Managing Red Hat OpenShift Container
Storage 4.2
Monitoring a Red Hat OpenShift Container Storage 4.2 cluster: Monitoring Red Hat OpenShift
Container Storage 4.2
VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the
disk in vSphere and attaches this disk to the correct image.
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent
storage and gives users a way to request those resources without having any knowledge of the
underlying infrastructure.
PersistentVolumes are not bound to a single project or namespace; they can be shared across the
OpenShift Container Platform cluster. PersistentVolumeClaims are specific to a project or namespace
and can be requested by users.
Additional references
VMware vSphere
OpenShift Container Platform installs a default StorageClass, named thin, that uses the thin disk
format for provisioning volumes.
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in
OpenShift Container Platform.
Procedure
1. In the OpenShift Container Platform console, click Storage → Persistent Volume Claims.
2. In the persistent volume claims overview, click Create Persistent Volume Claim.
c. Select the access mode to determine the read and write access for the created storage
claim.
OpenShift Container Platform installs a default StorageClass, named thin, that uses the thin disk
format for provisioning volumes.
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in
OpenShift Container Platform.
Procedure (CLI)
1. You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml, with the
following contents:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc 1
spec:
  accessModes:
    - ReadWriteOnce 2
  resources:
    requests:
      storage: 1Gi 3
$ oc create -f pvc.yaml
Prerequisites
Storage must exist in the underlying infrastructure before it can be mounted as a volume in
OpenShift Container Platform.
Procedure
1. Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually
before statically provisioning VMware vSphere volumes. Use either of the following methods:
Create using vmkfstools. Access ESX through Secure Shell (SSH) and then use the following
command to create a VMDK volume:
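The exact vmkfstools invocation depends on the ESX version; as a sketch, a 2 GiB VMDK matching the
PV example below could be created with a command similar to the following, where the datastore name
is a placeholder:
$ vmkfstools -c 2G /vmfs/volumes/<datastore-name>/volumes/myDisk.vmdk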
2. Create a PersistentVolume that references the VMDKs. Create a file, pv.yaml, with the
PersistentVolume object definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv 1
spec:
  capacity:
    storage: 2Gi 2
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume: 3
    volumePath: "[datastore1] volumes/myDisk" 4
    fsType: ext4 5
3 The volume type being used, in this case the vsphereVolume plug-in. The label is used to
mount a vSphere VMDK volume into Pods. The contents of a volume are preserved when it
is unmounted. The volume type supports both VMFS and VSAN datastores.
4 The existing VMDK volume to use. You must enclose the datastore name in square
brackets, [], in the volume definition, as shown previously.
5 The file system type to mount. For example, ext4, xfs, or other file systems.
IMPORTANT
Changing the value of the fsType parameter after the volume is formatted and
provisioned can result in data loss and Pod failure.
$ oc create -f pv.yaml
Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the
volume contains a file system that is specified by the fsType parameter value in the PersistentVolume
(PV) definition. If the device is not formatted with the file system, all data from the device is erased, and
the device is automatically formatted with the specified file system.
Because OpenShift Container Platform formats them before the first use, you can use unformatted
vSphere volumes as PVs.
CHAPTER 3. EXPANDING PERSISTENT VOLUMES
Procedure
Edit the StorageClass and add the allowVolumeExpansion attribute. The following example
demonstrates adding this line at the bottom of the StorageClass’s configuration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
...
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true 1
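Instead of editing the YAML directly, the same attribute can be set with a single oc patch command.
A sketch, assuming a StorageClass named gp2:
$ oc patch storageclass gp2 -p '{"allowVolumeExpansion": true}'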
OpenShift Container Platform supports CSI volume expansion by default. However, a specific CSI driver
is required.
OpenShift Container Platform does not ship with any CSI drivers. It is recommended to use the CSI
drivers provided by the community or storage vendors. Follow the installation instructions provided
by the CSI driver.
OpenShift Container Platform 4.3 supports version 1.1.0 of the CSI specification.
IMPORTANT
For more information about the support scope of Red Hat Technology Preview features,
see https://access.redhat.com/support/offerings/techpreview/.
FlexVolume allows expansion if the driver is set with RequiresFSResize to true. The FlexVolume can be
expanded on Pod restart.
Similar to other volume types, FlexVolume volumes can also be expanded when in use by a Pod.
Prerequisites
Procedure
To use resizing in the FlexVolume plugin, you must implement the ExpandableVolumePlugin
interface using these methods:
RequiresFSResize
If true, updates the capacity directly. If false, calls the ExpandFS method to finish the
filesystem resize.
ExpandFS
If true, calls ExpandFS to resize filesystem after physical volume expansion is done. The
volume driver can also perform physical volume resize together with filesystem resize.
IMPORTANT
Expanding the file system on the node only happens when a new pod is started with the volume.
Prerequisites
Procedure
1. Edit the PVC and request a new size by editing spec.resources.requests. For example, the
following expands the ebs PVC to 8 Gi.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ebs
spec:
  storageClassName: "storageClassWithFlagSet"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi 1
2. Once the cloud provider object has finished resizing, the PVC is set to
FileSystemResizePending. The following command is used to check the condition:
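A minimal way to check this, assuming the ebs PVC from the previous step, is to describe the claim
and inspect its Conditions:
$ oc describe pvc ebs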
3. When the cloud provider object has finished resizing, the persistent volume object reflects the
newly requested size in PersistentVolume.Spec.Capacity. At this point, you can create or
recreate a new pod from the PVC to finish the file system resizing. Once the pod is running, the
newly requested size is available and the FileSystemResizePending condition is removed from
the PVC.
Procedure
1. Mark the persistent volume (PV) that is bound to the PVC with the Retain reclaim policy. This
can be done by editing the PV and changing persistentVolumeReclaimPolicy to Retain.
2. Delete the PVC.
3. To ensure that the newly created PVC can bind to the PV marked Retain, manually edit the PV
and delete the claimRef entry from the PV specs. This marks the PV as Available. A CLI sketch
for steps 1 and 3 follows this procedure.
4. Re-create the PVC in a smaller size, or a size that can be allocated by the underlying storage
provider.
5. Set the volumeName field of the PVC to the name of the PV. This binds the PVC to the
provisioned PV only.
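Steps 1 and 3 can also be performed from the CLI. The following is a sketch using oc patch, with
<pv-name> standing in for the PV bound to the failed claim:
Set the reclaim policy to Retain:
$ oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Remove the claimRef so that the PV becomes Available:
$ oc patch pv <pv-name> --type=json -p '[{"op":"remove","path":"/spec/claimRef"}]'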
CHAPTER 4. DYNAMIC PROVISIONING
The OpenShift Container Platform persistent volume framework enables this functionality and allows
administrators to provision a cluster with persistent storage. The framework also gives users a way to
request those resources without having any knowledge of the underlying infrastructure.
Many storage types are available for use as persistent volumes in OpenShift Container Platform. While
all of them can be statically provisioned by an administrator, some types of storage are created
dynamically using the built-in provider and plug-in APIs.
AWS Elastic Block Store (EBS) uses the kubernetes.io/aws-ebs provisioner plug-in. For dynamic
provisioning when using multiple clusters in different zones, tag each node with
Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id>, where <cluster_name> and
<cluster_id> are unique per cluster.
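As a sketch, a node's underlying EC2 instance can be tagged with the AWS CLI; the instance ID,
cluster name, and cluster ID are placeholders:
$ aws ec2 create-tags --resources <instance-id> \
    --tags Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id>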
IMPORTANT
Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or
third-party provider as per the relevant documentation.
The following sections describe the basic object definition for a StorageClass and specific examples for
each of the supported plug-in types.
kind: StorageClass 1
apiVersion: storage.k8s.io/v1 2
metadata:
  name: gp2 3
  annotations: 4
    storageclass.kubernetes.io/is-default-class: 'true'
    ...
provisioner: kubernetes.io/aws-ebs 5
parameters: 6
  type: gp2
...
6 (optional) The parameters required for the specific provisioner; these change from plug-in to
plug-in.
To mark a StorageClass as the cluster-wide default, add the following annotation to its metadata:
storageclass.kubernetes.io/is-default-class: "true"
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
...
This enables any Persistent Volume Claim (PVC) that does not specify a specific volume to
automatically be provisioned through the default StorageClass.
To set a StorageClass description, add the following annotation to your StorageClass’s metadata:
For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubernetes.io/description: My StorageClass Description
...
cinder-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast 1
  availability: nova 2
  fsType: ext4 3
2 Availability Zone. If not specified, volumes are generally round-robined across all active zones
where the OpenShift Container Platform cluster has a node.
3 File system that is created on dynamically provisioned volumes. This value is copied to the fsType
field of dynamically provisioned persistent volumes and the file system is created when the volume
is mounted for the first time. The default value is ext4.
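A dynamically provisioned volume is then requested by naming the StorageClass in a PVC. The
following sketch consumes the gold Cinder StorageClass above; the claim name and requested size are
illustrative:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-claim
spec:
  accessModes:
    - ReadWriteOnce
  # Request a volume from the gold StorageClass defined above.
  storageClassName: gold
  resources:
    requests:
      storage: 5Gi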
aws-ebs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1 1
  iopsPerGB: "10" 2
  encrypted: "true" 3
  kmsKeyId: keyvalue 4
  fsType: ext4 5
1 (required) Select from io1, gp2, sc1, st1. The default is gp2. See the AWS documentation for
details about these volume types.
2 (optional) Only for io1 volumes. I/O operations per second per GiB. The AWS volume plug-in
multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap
is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further
details.
3 (optional) Denotes whether to encrypt the EBS volume. Valid values are true or false.
4 (optional) The full ARN of the key to use when encrypting the volume. If none is supplied, but
encrypted is set to true, then AWS generates a key. See the AWS documentation for a valid ARN
value.
5 (optional) File system that is created on dynamically provisioned volumes. This value is copied to
the fsType field of dynamically provisioned persistent volumes and the file system is created when
the volume is mounted for the first time. The default value is ext4.
azure-advanced-disk-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/azure-disk
parameters:
  storageAccount: azure_storage_account_name 1
  storageaccounttype: Standard_LRS 2
  kind: Dedicated 3
1 Azure storage account name. This must reside in the same resource group as the cluster. If a
storage account is specified, the location is ignored. If a storage account is not specified, a new
storage account gets created in the same resource group as the cluster. If you are specifying a
storageAccount, the value for kind must be Dedicated.
2 Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both
Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks,
Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged
disks.
a. If kind is set to Shared, Azure creates all unmanaged disks in a few shared storage
accounts in the same resource group as the cluster.
b. If kind is set to Managed, Azure creates new managed disks.
c. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified
storage account for the new unmanaged disk in the same resource group as the cluster.
For this to work:
The Azure Cloud Provider must have write access to the storage account.
d. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new
dedicated storage account for the new unmanaged disk in the same resource group as the
cluster.
Procedure
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # name: system:azure-cloud-provider
  name: <persistent-volume-binder-role> 1
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['get','create']
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <azure-file> 1
provisioner: kubernetes.io/azure-file
parameters:
  location: eastus 2
  skuName: Standard_LRS 3
  storageAccount: <storage-account> 4
reclaimPolicy: Delete
volumeBindingMode: Immediate
2 Location of the Azure storage account, such as eastus. Default is empty, meaning that a
new Azure storage account will be created in the OpenShift Container Platform cluster’s
location.
3 SKU tier of the Azure storage account, such as Standard_LRS. Default is empty, meaning
that a new Azure storage account will be created with the Standard_LRS SKU.
4 Name of the Azure storage account. If a storage account is provided, then skuName and
location are ignored. If no storage account is provided, then the StorageClass searches for
any storage account that is associated with the resource group for any accounts that
match the defined skuName and location.
The following file system features are not supported by the default Azure File StorageClass:
Symlinks
Hard links
Extended attributes
Sparse files
Named pipes
Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the
process UID of the container. The uid mount option can be specified in the StorageClass to define a
specific user identifier to use for the mounted directory.
The following StorageClass demonstrates modifying the user and group identifier, along with enabling
symlinks for the mounted directory.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file
mountOptions:
  - uid=1500 1
  - gid=1500 2
  - mfsymlinks 3
provisioner: kubernetes.io/azure-file
parameters:
  location: eastus
  skuName: Standard_LRS
reclaimPolicy: Delete
volumeBindingMode: Immediate
3 Enables symlinks.
gce-pd-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard 1
  replication-type: none
vsphere-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume 1
parameters:
  diskformat: thin 2
1 For more information about using VMware vSphere with OpenShift Container Platform, see the
VMware vSphere documentation.
2 diskformat: thin, zeroedthick and eagerzeroedthick are all valid disk formats. See vSphere docs
for additional details regarding the disk format types. The default value is thin.
$ oc get storageclass
NAME TYPE
gp2 (default) kubernetes.io/aws-ebs 1
standard kubernetes.io/aws-ebs
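The two listings show the default moving from gp2 to standard. This is done by updating the
storageclass.kubernetes.io/is-default-class annotation on both StorageClasses; the following is a
sketch using oc patch, assuming the gp2 and standard StorageClasses shown above:
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'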
$ oc get storageclass
NAME TYPE
gp2 kubernetes.io/aws-ebs
standard (default) kubernetes.io/aws-ebs