Installing on AWS
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides instructions for installing and uninstalling OpenShift Container Platform 4.4
clusters on AWS.
Table of Contents
CHAPTER 1. INSTALLING ON AWS
1.1. CONFIGURING AN AWS ACCOUNT
1.1.1. Configuring Route 53
1.1.2. AWS account limits
1.1.3. Required AWS permissions
1.1.4. Creating an IAM user
1.1.5. Supported AWS regions
1.2. INSTALLING A CLUSTER QUICKLY ON AWS
1.2.1. Internet and Telemetry access for OpenShift Container Platform
1.2.2. Generating an SSH private key and adding it to the agent
1.2.3. Obtaining the installation program
1.2.4. Deploying the cluster
1.2.5. Installing the CLI by downloading the binary
1.2.6. Logging in to the cluster
1.3. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS
1.3.1. Internet and Telemetry access for OpenShift Container Platform
1.3.2. Generating an SSH private key and adding it to the agent
1.3.3. Obtaining the installation program
1.3.4. Creating the installation configuration file
1.3.4.1. Installation configuration parameters
1.3.4.2. Sample customized install-config.yaml file for AWS
1.3.5. Deploying the cluster
1.3.6. Installing the CLI by downloading the binary
1.3.7. Logging in to the cluster
1.4. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS
1.4.1. Internet and Telemetry access for OpenShift Container Platform
1.4.2. Generating an SSH private key and adding it to the agent
1.4.3. Obtaining the installation program
1.4.4. Creating the installation configuration file
1.4.4.1. Installation configuration parameters
1.4.4.2. Network configuration parameters
1.4.4.3. Sample customized install-config.yaml file for AWS
1.4.5. Modifying advanced network configuration parameters
1.4.6. Cluster Network Operator configuration
1.4.6.1. Configuration parameters for the OpenShift SDN network provider
1.4.6.2. Configuration parameters for the OVN-Kubernetes network provider
1.4.6.3. Cluster Network Operator example configuration
1.4.7. Deploying the cluster
1.4.8. Installing the CLI by downloading the binary
1.4.9. Logging in to the cluster
1.5. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC
1.5.1. About using a custom VPC
1.5.1.1. Requirements for using your VPC
1.5.1.2. VPC validation
1.5.1.3. Division of permissions
1.5.1.4. Isolation between clusters
1.5.2. Internet and Telemetry access for OpenShift Container Platform
1.5.3. Generating an SSH private key and adding it to the agent
1.5.4. Obtaining the installation program
1.5.5. Creating the installation configuration file
1.5.5.1. Installation configuration parameters
CHAPTER 1. INSTALLING ON AWS
1.1. CONFIGURING AN AWS ACCOUNT
1.1.1. Configuring Route 53
To install OpenShift Container Platform, the Amazon Web Services (AWS) account that you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain.
Procedure
1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and
registrar or obtain a new one through AWS or another source.
NOTE
If you purchase a new domain through AWS, it takes time for the relevant DNS
changes to propagate. For more information about purchasing domains through
AWS, see Registering Domain Names Using Amazon Route 53 in the AWS
documentation.
2. If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon
Route 53 the DNS Service for an Existing Domain in the AWS documentation.
3. Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone
in the AWS documentation.
Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as
clusters.openshiftcorp.com.
4. Extract the new authoritative name servers from the hosted zone records. See Getting the
Name Servers for a Public Hosted Zone in the AWS documentation.
5. Update the registrar records for the AWS Route 53 name servers that your domain uses. For
example, if you registered your domain to a Route 53 service in a different account, see the
following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records.
6. If you use a subdomain, follow your company’s procedures to add its delegation records to the
parent domain.
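As a quick check that the delegation is in effect, you can query the NS records for your domain from a public resolver. The following command is a sketch; the domain is a placeholder, and the output should list your Route 53 name servers:
$ dig +short NS clusters.openshiftcorp.com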
1.1.2. AWS account limits
The following table summarizes the AWS components whose limits can impact your ability to install and
run OpenShift Container Platform clusters.
Component | Number of clusters available by default | Default AWS limit | Description
NAT Gateways | 5 | 5 per availability zone | The cluster deploys one NAT gateway in each availability zone.
Elastic Network Interfaces (ENIs) | At least 12 | 350 per region | The default installation creates 21 ENIs and an ENI for each availability zone in your region. For example, the us-east-1 region contains six availability zones, so a cluster that is deployed in that region uses 27 ENIs. Review the AWS region map to determine how many availability zones are in each region.
VPC Gateway | 20 | 20 per account | Each cluster creates a single VPC Gateway for S3 access.
Security Groups | 250 | 2,500 per account | Each cluster creates 10 distinct security groups.
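Before you install, you can check how close your account is to a limit by counting existing resources with the AWS CLI. For example, assuming a configured AWS CLI, the following command returns the number of ENIs that currently exist in the us-east-1 region; the region is a placeholder:
$ aws ec2 describe-network-interfaces --region us-east-1 --query 'length(NetworkInterfaces[])'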
1.1.3. Required AWS permissions
When you attach the AdministratorAccess policy to the IAM user that you create, you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions.
Required EC2 permissions for installation:
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
8
CHAPTER 1. INSTALLING ON AWS
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances
Required permissions for creating network resources during installation:
ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute
NOTE
If you use an existing VPC, your account does not require these permissions for creating
network resources.
Required Elastic Load Balancing permissions for installation:
elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener
Required IAM permissions for installation:
iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole
Required Route 53 permissions for installation:
route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment
Required S3 permissions for installation:
s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration
S3 permissions that cluster Operators require:
s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging
Required permissions to delete base cluster resources:
autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
tag:GetResources
Required permissions to delete network resources:
ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation
NOTE
If you use an existing VPC, your account does not require these permissions to delete
network resources.
Additional IAM and S3 permissions that are required to create manifests:
iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload
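Because the permission list is long, it can help to spot-check an account before you install. As a sketch, the iam simulate-principal-policy command, which corresponds to the iam:SimulatePrincipalPolicy permission listed above, reports whether a user is allowed specific actions; the user ARN and action names are placeholders:
$ aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:user/ocp-installer \
    --action-names ec2:RunInstances iam:PassRole route53:ChangeResourceRecordSets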
1.1.4. Creating an IAM user
Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you
complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the
following options:
Procedure
1. Specify the IAM user name and select Programmatic access.
2. Attach the AdministratorAccess policy to ensure that the account has sufficient permission to
create the cluster. This policy provides the cluster with the ability to grant credentials to each
OpenShift Container Platform component. The cluster grants the components only the
credentials that they require.
NOTE
While it is possible to create a policy that grants all of the required AWS
permissions and attach it to the user, this is not the preferred option. The cluster
will not have the ability to grant additional credentials to individual components,
so the same credentials are used by all components.
3. Optional: Add metadata to the user by attaching tags.
4. Confirm that the user name that you specified is granted the AdministratorAccess policy.
5. Record the access key ID and secret access key values. You must use these values when you
configure your local machine to run the installation program.
IMPORTANT
You cannot use a temporary session token that you generated while using a
multi-factor authentication device to authenticate to AWS when you deploy a
cluster. The cluster continues to use your current AWS credentials to create AWS
resources for the entire life of the cluster, so you must use key-based, long-lived
credentials.
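If you prefer to generate the long-lived keys from the command line rather than the console, the following AWS CLI call returns an access key ID and secret access key for the user; the user name is a placeholder:
$ aws iam create-access-key --user-name ocp-installer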
1.1.5. Supported AWS regions
You can deploy an OpenShift Container Platform cluster to the following regions:
ap-northeast-1 (Tokyo)
ap-northeast-2 (Seoul)
ap-south-1 (Mumbai)
ap-southeast-1 (Singapore)
ap-southeast-2 (Sydney)
ca-central-1 (Central)
eu-central-1 (Frankfurt)
eu-north-1 (Stockholm)
eu-west-1 (Ireland)
eu-west-2 (London)
eu-west-3 (Paris)
me-south-1 (Bahrain)
us-east-2 (Ohio)
us-west-2 (Oregon)
Next steps
Install an OpenShift Container Platform cluster.
1.2. INSTALLING A CLUSTER QUICKLY ON AWS
In OpenShift Container Platform version 4.4, you can install a cluster on Amazon Web Services (AWS) that uses the default configuration options.
Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
IMPORTANT
If you have an AWS profile stored on your computer, it must not use a temporary
session token that you generated while using a multi-factor authentication
device. The cluster continues to use your current AWS credentials to create AWS
resources for the entire life of the cluster, so you must use key-based, long-lived
credentials. To generate appropriate keys, see Managing Access Keys for IAM
Users in the AWS documentation. You can supply the keys when you run the
installation program.
If you use a firewall, you must configure it to allow the sites that your cluster requires access to.
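If you do keep an AWS profile on your computer, it must contain key-based, long-lived credentials of the kind the preceding IMPORTANT note describes. A minimal ~/.aws/credentials file of that form looks like the following; the key values are the AWS documentation placeholders:
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY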
1.2.1. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.4, you require access to the internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access.
You must have internet access to:
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management and entitlement. If the cluster has internet access and you
do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry
service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
internet access. Before you update the cluster, you update the content of the mirror
registry.
1.2.2. Generating an SSH private key and adding it to the agent
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and to the installation program.
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
IMPORTANT
If you create a new SSH key pair, avoid overwriting existing SSH keys.
2. Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
3. Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
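To confirm that the key was added, you can list the identities that the agent currently holds:
$ ssh-add -l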
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
1.2.3. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
A computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep both the installation program and the files
that the installation program creates after you finish installing the cluster.
3. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
$ tar xvf <installation_program>.tar.gz
4. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
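To confirm that the extracted installation program runs on your machine, you can print its version:
$ ./openshift-install version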
1.2.4. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Change to the directory that contains the installation program and initialize the cluster
deployment:
$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
2 To view different installation details, specify warn, debug, or error instead of info.
IMPORTANT
Specify an empty directory. Some installation assets, like bootstrap X.509
certificates, have short expiration intervals, so you must not reuse an
installation directory.
2. Provide values at the prompts:
a. Optional: Select an SSH key to use to access your cluster machines.
NOTE
For production OpenShift Container Platform clusters on which you want to
perform installation debugging or disaster recovery, specify an SSH key that
your ssh-agent process uses.
b. Select aws as the platform to target.
c. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter
the AWS access key ID and secret access key for the user that you configured to run the
installation program.
d. Select the AWS region to deploy the cluster to.
e. Select the base domain for the Route 53 service that you configured for your cluster.
f. Enter a descriptive name for your cluster.
g. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift
Cluster Manager site.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must keep the cluster running for 24
hours in a non-degraded state to ensure that the first certificate rotation has
finished.
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you
used to install the cluster.
1.2.5. Installing the CLI by downloading the binary
You can install the CLI in order to interact with OpenShift Container Platform using a command-line interface. You can install oc on Linux, Windows, or macOS.
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.4. Download and install the new version of oc.
Procedure
1. From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate
to the page for your installation type and click Download Command-line Tools.
2. Click the folder for your operating system and architecture and click the compressed file.
3. Save the file to your file system.
4. Extract the compressed file.
5. Place it in a directory that is on your PATH.
After you install the CLI, it is available using the oc command:
$ oc <command>
1.2.6. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
Prerequisites
Deploy an OpenShift Container Platform cluster.
Install the oc CLI.
Procedure
1. Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
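As a further check that the cluster responds, you can list its nodes:
$ oc get nodes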
Next steps
Customize your cluster.
If necessary, you can opt out of remote health reporting.
1.3. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS
In OpenShift Container Platform version 4.4, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
IMPORTANT
If you have an AWS profile stored on your computer, it must not use a temporary
session token that you generated while using a multi-factor authentication
device. The cluster continues to use your current AWS credentials to create AWS
resources for the entire life of the cluster, so you must use long-lived credentials.
To generate appropriate keys, see Managing Access Keys for IAM Users in the
AWS documentation. You can supply the keys when you run the installation
program.
If you use a firewall, you must configure it to allow the sites that your cluster requires access to.
1.3.1. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.4, you require access to the internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access.
You must have internet access to:
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management and entitlement. If the cluster has internet access and you
do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry
service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
internet access. Before you update the cluster, you update the content of the mirror
registry.
1.3.2. Generating an SSH private key and adding it to the agent
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and to the installation program.
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
IMPORTANT
If you create a new SSH key pair, avoid overwriting existing SSH keys.
2. Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
3. Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
1.3.3. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
A computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep both the installation program and the files
that the installation program creates after you finish installing the cluster.
3. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
$ tar xvf <installation_program>.tar.gz
4. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
1.3.4. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
Download the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Create the install-config.yaml file:
a. Run the following command:
$ ./openshift-install create install-config --dir=<installation_directory> 1
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
IMPORTANT
Specify an empty directory. Some installation assets, like bootstrap X.509
certificates, have short expiration intervals, so you must not reuse an
installation directory.
b. At the prompts, provide the configuration details for your cloud:
i. Optional: Select an SSH key to use to access your cluster machines.
NOTE
For production OpenShift Container Platform clusters on which you want
to perform installation debugging or disaster recovery, specify an SSH
key that your ssh-agent process uses.
ii. Select aws as the platform to target.
iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer,
enter the AWS access key ID and secret access key for the user that you configured to
run the installation program.
iv. Select the AWS region to deploy the cluster to.
v. Select the base domain for the Route 53 service that you configured for your cluster.
vi. Enter a descriptive name for your cluster.
vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the installation process. If you
want to reuse the file, you must back it up now.
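One simple way to preserve the file is to copy it before you run the installation program; the backup file name here is arbitrary:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup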
1.3.4.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.
NOTE
You cannot modify these parameters in the install-config.yaml file after installation.
Required parameters
Parameter | Description | Values
baseDomain | The base domain of your cloud provider. This value is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com.
controlPlane.platform | The cloud provider to host the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, or {}
compute.platform | The cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, or {}
metadata.name | The name of your cluster. | A string that contains uppercase or lowercase letters, such as dev.
platform.<platform>.region | The region to deploy your cluster in. | A valid region for your cloud, such as us-east-1 for AWS or centralus for Azure. Red Hat OpenStack Platform (RHOSP) does not use this parameter.
Optional parameters
Parameter | Description | Values
sshKey | The SSH key to use to access your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | A valid, local public SSH key that you added to the ssh-agent process.
compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled
compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3.
controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled
controlPlane.replicas | The number of control plane machines to provision. | A positive integer greater than or equal to 3. The default value is 3.
Optional AWS parameters
Parameter | Description | Values
compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500.
compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1.
compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as c5.9xlarge.
compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute MachinePool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.
compute.aws.region | The AWS region that the installation program creates compute resources in. | Valid AWS region, such as us-east-1.
controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as c5.9xlarge.
controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane MachinePool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.
controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1.
platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.
1.3.4.2. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.
IMPORTANT
This sample YAML file is provided for reference only. You must obtain your install-
config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
hyperthreading: Enabled 3 4
name: master
platform:
aws:
zones:
- us-west-2a
- us-west-2b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge 5
replicas: 3
compute: 6
- hyperthreading: Enabled 7
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1 8
type: c5.4xlarge
zones:
- us-west-2c
replicas: 3
metadata:
name: test-cluster 9
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-west-2 10
userTags:
adminContact: jdoe
costCenter: 7536
pullSecret: '{"auths": ...}' 11
fips: false 12
sshKey: ssh-ed25519 AAAA... 13
2 6 If you do not provide these parameters and values, the installation program provides the default
value.
3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section
must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both
sections currently define a single machine pool, it is possible that future versions of OpenShift
Container Platform will support defining multiple compute pools during installation. Only one
control plane pool is used.
4 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default,
simultaneous multithreading is enabled to increase the performance of your machines' cores.
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning
accounts for the dramatically decreased machine performance.
8 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and
set iops to 2000.
12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.
13 You can optionally provide the sshKey value that you use to access the machines in your cluster.
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.
1.3.5. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Change to the directory that contains the installation program and initialize the cluster
deployment:
$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must keep the cluster running for 24
hours in a non-degraded state to ensure that the first certificate rotation has
finished.
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you
used to install the cluster.
1.3.6. Installing the CLI by downloading the binary
You can install the CLI in order to interact with OpenShift Container Platform using a command-line interface. You can install oc on Linux, Windows, or macOS.
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.4. Download and install the new version of oc.
Procedure
1. From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate
to the page for your installation type and click Download Command-line Tools.
2. Click the folder for your operating system and architecture and click the compressed file.
3. Save the file to your file system.
4. Extract the compressed file.
5. Place it in a directory that is on your PATH.
After you install the CLI, it is available using the oc command:
$ oc <command>
1.3.7. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
Prerequisites
Deploy an OpenShift Container Platform cluster.
Install the oc CLI.
Procedure
1. Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
Next steps
Customize your cluster.
If necessary, you can opt out of remote health reporting.
1.4. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS
In OpenShift Container Platform version 4.4, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options.
You must set most of the network configuration parameters during installation, and you can modify only
kubeProxy configuration parameters in a running cluster.
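For example, to change the kubeProxy iptables refresh period on a running cluster, you can patch the Cluster Network Operator configuration. This is a sketch that assumes the default operator configuration object, which is named cluster; the 60s value is illustrative:
$ oc patch network.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"kubeProxyConfig":{"iptablesSyncPeriod":"60s"}}}'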
Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
IMPORTANT
If you have an AWS profile stored on your computer, it must not use a temporary
session token that you generated while using a multi-factor authentication
device. The cluster continues to use your current AWS credentials to create AWS
resources for the entire life of the cluster, so you must use key-based, long-lived
credentials. To generate appropriate keys, see Managing Access Keys for IAM
Users in the AWS documentation. You can supply the keys when you run the
installation program.
If you use a firewall, you must configure it to allow the sites that your cluster requires access to.
1.4.1. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.4, you require access to the internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access.
You must have internet access to:
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management and entitlement. If the cluster has internet access and you
do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry
service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
internet access. Before you update the cluster, you update the content of the mirror
registry.
1.4.2. Generating an SSH private key and adding it to the agent
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and to the installation program.
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
IMPORTANT
If you create a new SSH key pair, avoid overwriting existing SSH keys.
2. Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
3. Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
1.4.3. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
A computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep both the installation program and the files
that the installation program creates after you finish installing the cluster.
3. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
$ tar xvf <installation_program>.tar.gz
4. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
1.4.4. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
Download the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Create the install-config.yaml file:
a. Run the following command:
$ ./openshift-install create install-config --dir=<installation_directory> 1
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
IMPORTANT
Specify an empty directory. Some installation assets, like bootstrap X.509
certificates, have short expiration intervals, so you must not reuse an
installation directory.
b. At the prompts, provide the configuration details for your cloud:
i. Optional: Select an SSH key to use to access your cluster machines.
NOTE
For production OpenShift Container Platform clusters on which you want
to perform installation debugging or disaster recovery, specify an SSH
key that your ssh-agent process uses.
ii. Select aws as the platform to target.
iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer,
enter the AWS access key ID and secret access key for the user that you configured to
run the installation program.
iv. Select the AWS region to deploy the cluster to.
v. Select the base domain for the Route 53 service that you configured for your cluster.
vi. Enter a descriptive name for your cluster.
vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the installation process. If you
want to reuse the file, you must back it up now.
1.4.4.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.
NOTE
You cannot modify these parameters in the install-config.yaml file after installation.
Required parameters
Parameter | Description | Values
baseDomain | The base domain of your cloud provider. This value is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com.
controlPlane.platform | The cloud provider to host the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, or {}
compute.platform | The cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, or {}
metadata.name | The name of your cluster. | A string that contains uppercase or lowercase letters, such as dev.
platform.<platform>.region | The region to deploy your cluster in. | A valid region for your cloud, such as us-east-1 for AWS or centralus for Azure. Red Hat OpenStack Platform (RHOSP) does not use this parameter.
Optional parameters
Parameter | Description | Values
sshKey | The SSH key to use to access your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | A valid, local public SSH key that you added to the ssh-agent process.
compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled
compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3.
controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled
controlPlane.replicas | The number of control plane machines to provision. | A positive integer greater than or equal to 3. The default value is 3.
Optional AWS parameters
Parameter | Description | Values
compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500.
compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1.
compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as c5.9xlarge.
compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute MachinePool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.
compute.aws.region | The AWS region that the installation program creates compute resources in. | Valid AWS region, such as us-east-1.
controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as c5.9xlarge.
controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane MachinePool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.
controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1.
platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.
1.4.4.2. Network configuration parameters
IMPORTANT
The Open Virtual Networking (OVN) Kubernetes network plug-in is a Technology Preview
feature only. Technology Preview features are not supported with Red Hat production
service level agreements (SLAs) and might not be functionally complete. Red Hat does
not recommend using them in production. These features provide early access to
upcoming product features, enabling customers to test functionality and provide
feedback during the development process.
For more information about the support scope of the OVN Technology Preview, see
https://access.redhat.com/articles/4380121.
You can modify your cluster network configuration parameters in the install-config.yaml configuration
file. The following table describes the parameters.
NOTE
You cannot modify these parameters in the install-config.yaml file after installation.
Parameter | Description | Values
networking.networkType | The Pod network provider plug-in to deploy. The OpenShiftSDN plug-in is the only plug-in supported in OpenShift Container Platform 4.4. The OVNKubernetes plug-in is available as a Technology Preview in OpenShift Container Platform 4.4. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.
networking.clusterNetwork[].hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, allowing for 510 (2^(32 - 23) - 2) Pod IP addresses. | A subnet prefix. The default value is 23.
1.4.4.3. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.
IMPORTANT
This sample YAML file is provided for reference only. You must obtain your install-
config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
hyperthreading: Enabled 3 4
name: master
platform:
aws:
zones:
- us-west-2a
- us-west-2b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge 5
replicas: 3
compute: 6
- hyperthreading: Enabled 7
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1 8
type: c5.4xlarge
zones:
- us-west-2c
replicas: 3
metadata:
name: test-cluster 9
networking: 10
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-west-2 11
userTags:
adminContact: jdoe
costCenter: 7536
pullSecret: '{"auths": ...}' 12
fips: false 13
sshKey: ssh-ed25519 AAAA... 14
2 6 10 If you do not provide these parameters and values, the installation program provides the
default value.
3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section
must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both
sections currently define a single machine pool, it is possible that future versions of OpenShift
Container Platform will support defining multiple compute pools during installation. Only one
control plane pool is used.
4 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default,
simultaneous multithreading is enabled to increase the performance of your machines' cores.
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning
accounts for the dramatically decreased machine performance.
8 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and
set iops to 2000.
13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.
14 You can optionally provide the sshKey value that you use to access the machines in your cluster.
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.
1.4.5. Modifying advanced network configuration parameters
You can modify the advanced network configuration parameters only before you install the cluster. Advanced configuration customization lets you integrate your cluster into your existing network environment by specifying an MTU or VXLAN port, or by allowing customization of kube-proxy settings.
IMPORTANT
Modifying the OpenShift Container Platform manifest files directly is not supported.
Prerequisites
Create the install-config.yaml file and complete any modifications to it.
Procedure
1. Use the following command to create manifests:
$ ./openshift-install create manifests --dir=<installation_directory> 1
1 For <installation_directory>, specify the name of the directory that contains the install-
config.yaml file for your cluster.
2. Create a stub manifest file for the advanced network configuration that is named cluster-
network-03-config.yml in the <installation_directory>/manifests/ directory:
$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1
1 For <installation_directory>, specify the directory name that contains the manifests/
directory for your cluster.
After creating the file, several network configuration files are in the manifests/ directory, as
shown:
$ ls <installation_directory>/manifests/cluster-network-*
cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml
3. Open the cluster-network-03-config.yml file in an editor and enter a CR that describes the
Operator configuration you want:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec: 1
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
serviceNetwork:
- 172.30.0.0/16
defaultNetwork:
type: OpenShiftSDN
openshiftSDNConfig:
mode: NetworkPolicy
mtu: 1450
vxlanPort: 4789
1 The parameters for the spec parameter are only an example. Specify your configuration
for the Cluster Network Operator in the CR.
The CNO provides default values for the parameters in the CR, so you must specify only the
parameters that you want to change.
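For example, a minimal cluster-network-03-config.yml that overrides only the VXLAN MTU and leaves every other parameter to the CNO defaults might look like the following sketch; the MTU value is illustrative:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mtu: 1400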
1.4.6. Cluster Network Operator configuration
You can specify the cluster network configuration for your OpenShift Container Platform cluster by
setting the parameter values for the defaultNetwork parameter in the CNO CR. The following CR
displays the default configuration for the CNO and explains both the parameters you can configure and
the valid parameter values:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
clusterNetwork: 1
- cidr: 10.128.0.0/14
hostPrefix: 23
serviceNetwork: 2
- 172.30.0.0/16
defaultNetwork: 3
...
kubeProxyConfig: 4
iptablesSyncPeriod: 30s 5
proxyArguments:
iptables-min-sync-period: 6
- 30s
4 The parameters for this object specify the kube-proxy configuration. If you do not specify the
parameter values, the Network Operator applies the displayed default parameter values.
5 The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h
and are described in the Go time package documentation.
6 The minimum duration before refreshing iptables rules. This parameter ensures that the refresh
does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time
package.
1.4.6.1. Configuration parameters for the OpenShift SDN network provider
The following YAML object describes the configuration parameters for the OpenShift SDN Pod network
provider.
defaultNetwork:
type: OpenShiftSDN 1
openshiftSDNConfig: 2
mode: NetworkPolicy 3
mtu: 1450 4
vxlanPort: 4789 5
2 Specify only if you want to override part of the OpenShift SDN configuration.
3 Configures the network isolation mode for OpenShift SDN. The allowed values are Multitenant,
Subnet, or NetworkPolicy. The default value is NetworkPolicy.
4 The maximum transmission unit (MTU) for the VXLAN overlay network. This value is normally
configured automatically, but if the nodes in your cluster do not all use the same MTU, then you
must set this explicitly to 50 less than the smallest node MTU value.
5 The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized
environment with existing nodes that are part of another VXLAN network, then you might be
required to change this value.
On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port
9000 and port 9999.
1.4.6.2. Configuration parameters for the OVN-Kubernetes network provider
The following YAML object describes the configuration parameters for the OVN-Kubernetes Pod
network provider.
defaultNetwork:
type: OVNKubernetes 1
ovnKubernetesConfig: 2
mtu: 1450 3
genevePort: 6081 4
3 The MTU for the Generic Network Virtualization Encapsulation (GENEVE) overlay network. This
value is normally configured automatically, but if the nodes in your cluster do not all use the same
MTU, then you must set this explicitly to 100 less than the smallest node MTU value.
1.4.6.3. Cluster Network Operator example configuration
A complete CR object for the CNO is displayed in the following example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
serviceNetwork:
- 172.30.0.0/16
defaultNetwork:
type: OpenShiftSDN
openshiftSDNConfig:
mode: NetworkPolicy
mtu: 1450
vxlanPort: 4789
kubeProxyConfig:
iptablesSyncPeriod: 30s
proxyArguments:
iptables-min-sync-period:
- 30s
1.4.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Change to the directory that contains the installation program and initialize the cluster
deployment:
$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must keep the cluster running for 24
hours in a non-degraded state to ensure that the first certificate rotation has
finished.
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you
used to install the cluster.
1.4.8. Installing the CLI by downloading the binary
You can install the CLI in order to interact with OpenShift Container Platform using a command-line interface. You can install oc on Linux, Windows, or macOS.
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.4. Download and install the new version of oc.
Procedure
1. From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate
to the page for your installation type and click Download Command-line Tools.
2. Click the folder for your operating system and architecture and click the compressed file.
3. Save the file to your file system.
4. Extract the compressed file.
5. Place it in a directory that is on your PATH.
After you install the CLI, it is available using the oc command:
$ oc <command>
1.4.9. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
Prerequisites
Deploy an OpenShift Container Platform cluster.
Install the oc CLI.
Procedure
1. Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
Next steps
Customize your cluster.
If necessary, you can opt out of remote health reporting.
1.5. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC
In OpenShift Container Platform version 4.4, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS).
Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
IMPORTANT
If you have an AWS profile stored on your computer, it must not use a temporary
session token that you generated while using a multi-factor authentication
device. The cluster continues to use your current AWS credentials to create AWS
resources for the entire life of the cluster, so you must use long-lived credentials.
To generate appropriate keys, see Managing Access Keys for IAM Users in the
AWS documentation. You can supply the keys when you run the installation
program.
If you use a firewall, you must configure it to allow the sites that your cluster requires access to.
1.5.1. About using a custom VPC
In OpenShift Container Platform 4.4, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS).
Because the installation program cannot know what other components are also in your existing subnets,
it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the
subnets that you install your cluster into yourself.
The installation program does not create the following components:
Internet gateways
NAT gateways
Subnets
Route tables
VPCs
VPC endpoints
1.5.1.1. Requirements for using your VPC
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and
the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set
route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the
cluster.
The VPC’s CIDR block must contain the Networking.MachineCIDR range, which is the IP
address pool for cluster machines.
You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC so
that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster’s
internal DNS records. See DNS Support in Your VPC in the AWS documentation.
If you use a cluster with public access, you must create a public and a private subnet for each availability
zone that your cluster uses. The installation program modifies your subnets to add the
kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for
it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation
program can add a tag to each subnet that you specify.
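For example, you can review the tags that are already applied to a subnet, and therefore how many tag slots remain, with the AWS CLI; the subnet ID is a placeholder:
$ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"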
If you are working in a disconnected environment, you are unable to reach the public IP addresses for
EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnets
that the cluster uses. The endpoints should be named as follows:
ec2.<region>.amazonaws.com
elasticloadbalancing.<region>.amazonaws.com
s3.<region>.amazonaws.com
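One way to create these endpoints is with the AWS CLI. The following sketch creates a gateway endpoint for S3; the VPC ID, route table ID, and region are placeholders, and the EC2 and ELB endpoints are interface endpoints that you create similarly with the --vpc-endpoint-type Interface option and appropriate subnet and security group IDs:
$ aws ec2 create-vpc-endpoint \
    --vpc-id <vpc_id> \
    --service-name com.amazonaws.<region>.s3 \
    --route-table-ids <route_table_id>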
AWS::EC2::NatGateway
AWS::EC2::EIP
80        Inbound HTTP traffic
22        Inbound SSH traffic
0 - 65535 Outbound ephemeral traffic
To ensure that the subnets that you provide are suitable, the installation program confirms the following
data:
The subnet CIDRs belong to the machine CIDR that you specified.
You provide subnets for each availability zone. Each availability zone contains no more than one
public and one private subnet. If you use a private cluster, provide only a private subnet for each
availability zone. Otherwise, provide exactly one public and private subnet for each availability
zone.
You provide a public subnet for each private subnet availability zone. Machines are not
provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the
OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed
from the subnets that it used.
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required
for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics
the division of permissions that you might have at your company: some individuals can create different
resources in your clouds than others. For example, you might be able to create application-specific items,
like instances, buckets, and load balancers, but not networking-related components such as VPCs,
subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions
that are required to make VPCs and core networking components within the VPC, such as subnets,
routing tables, internet gateways, NAT, and VPN. You still need permission to make the application
resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and
nodes.
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is
reduced in the following ways:
You can install multiple OpenShift Container Platform clusters in the same VPC.
Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management and entitlement. If the cluster has internet access and you
do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry
service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
internet access. Before you update the cluster, you update the content of the mirror
registry.
NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
IMPORTANT
If you create a new SSH key pair, avoid overwriting existing SSH keys.
2. Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
3. Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
Prerequisites
A computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep both the installation program and the files
that the installation program creates after you finish installing the cluster.
3. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
$ tar xvf <installation_program>.tar.gz
4. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
Prerequisites
Download the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Create the install-config.yaml file:
$ ./openshift-install create install-config --dir=<installation_directory> 1
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer,
enter the AWS access key ID and secret access key for the user that you configured to
run the installation program.
v. Select the base domain for the Route53 service that you configured for your cluster.
vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the installation process. If you
want to reuse the file, you must back it up now.
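For example, one simple way to keep a reusable copy before you run the installation program:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.bak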
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.
NOTE
You cannot modify these parameters in the install-config.yaml file after installation.
baseDomain
    The base domain of your cloud provider. This value is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

controlPlane.platform
    The cloud provider to host the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, or {}

compute.platform
    The cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, or {}

metadata.name
    The name of your cluster.
    Values: A string that contains uppercase or lowercase letters, such as dev.

platform.<platform>.region
    The region to deploy your cluster in.
    Values: A valid region for your cloud, such as us-east-1 for AWS or centralus for Azure. Red Hat OpenStack Platform (RHOSP) does not use this parameter.
sshKey
    The SSH key to use to access your cluster machines.
    NOTE
    For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
    Values: A valid, local public SSH key that you added to the ssh-agent process.

compute.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines.
    IMPORTANT
    If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled
compute.replicas
    The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines.
    IMPORTANT
    If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

controlPlane.replicas
    The number of control plane machines to provision.
    Values: A positive integer greater than or equal to 3. The default value is 3.
compute.platform.aws.rootVolume.size
    The size in GiB of the root volume.
    Values: Integer, for example 500.

compute.platform.aws.rootVolume.type
    The instance type of the root volume.
    Values: Valid AWS EBS instance type, such as io1.

compute.platform.aws.type
    The EC2 instance type for the compute machines.
    Values: Valid AWS instance type, such as c5.9xlarge.
compute.platform.aws.zones
    The availability zones where the installation program creates machines for the compute MachinePool. If you provide your own VPC, you must provide a subnet in that availability zone.
    Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region
    The AWS region that the installation program creates compute resources in.
    Values: Valid AWS region, such as us-east-1.

controlPlane.platform.aws.type
    The EC2 instance type for the control plane machines.
    Values: Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones
    The availability zones where the installation program creates machines for the control plane MachinePool.
    Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region
    The AWS region that the installation program creates control plane resources in.
    Values: Valid AWS region, such as us-east-1.

platform.aws.userTags
    A map of keys and values that the installation program adds as tags to all resources that it creates.
    Values: Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.
You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.
IMPORTANT
This sample YAML file is provided for reference only. You must obtain your install-
config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
hyperthreading: Enabled 3 4
name: master
platform:
aws:
zones:
- us-west-2a
- us-west-2b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge 5
replicas: 3
compute: 6
- hyperthreading: Enabled 7
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1 8
type: c5.4xlarge
zones:
- us-west-2c
replicas: 3
metadata:
name: test-cluster 9
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-west-2 10
userTags:
adminContact: jdoe
costCenter: 7536
subnets: 11
- subnet-1
- subnet-2
- subnet-3
pullSecret: '{"auths": ...}' 12
fips: false 13
sshKey: ssh-ed25519 AAAA... 14
2 6 If you do not provide these parameters and values, the installation program provides the default
value.
3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section
must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both
sections currently define a single machine pool, it is possible that future versions of OpenShift
Container Platform will support defining multiple compute pools during installation. Only one
control plane pool is used.
8 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and
set iops to 2000.
11 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.
14 You can optionally provide the sshKey value that you use to access the machines in your cluster.
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.
Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS
proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by
configuring the proxy settings in the install-config.yaml file.
Prerequisites
Review the sites that your cluster requires access to and determine whether any need to bypass
the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider
APIs. Add sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.
Procedure
1. Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> 1
httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
noProxy: example.com 3
additionalTrustBundle: | 4
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http. If you use an MITM transparent proxy network that does not require
additional proxy configuration but requires additional CAs, you must not specify an
httpProxy value.
2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not
specified, then httpProxy is used for both HTTP and HTTPS connections. The URL
scheme must be http; https is currently not supported. If you use an MITM transparent
proxy network that does not require additional proxy configuration but requires additional
CAs, you must not specify an httpsProxy value.
NOTE
The installation program does not support the proxy readinessEndpoints field.
2. Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster, which uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it has a nil spec.
NOTE
Only the Proxy object named cluster is supported, and no additional proxies can be
created.
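For example, after installation you can review the generated Proxy object:
$ oc get proxy/cluster -o yaml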
IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Run the installation program:
$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must keep the cluster running for 24
hours in a non-degraded state to ensure that the first certificate rotation has
finished.
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you
used to install the cluster.
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.4. Download and install the new version of oc.
Procedure
1. From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate
to the page for your installation type and click Download Command-line Tools.
2. Click the folder for your operating system and architecture and click the compressed file.
After you install the CLI, it is available using the oc command:
$ oc <command>
Prerequisites
Procedure
1. Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
Next steps
Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
IMPORTANT
If you have an AWS profile stored on your computer, it must not use a temporary
session token that you generated while using a multi-factor authentication
device. The cluster continues to use your current AWS credentials to create AWS
resources for the entire life of the cluster, so you must use long-lived credentials.
To generate appropriate keys, see Managing Access Keys for IAM Users in the
AWS documentation. You can supply the keys when you run the installation
program.
If you use a firewall, you must configure it to allow the sites that your cluster requires access to.
If your environment does not require an external internet connection, you can deploy a private
OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are
accessible from only an internal network and are not visible to the Internet.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints.
A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your
cluster. This means that the cluster resources are only accessible from your internal network and are not
visible to the internet.
To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster
resources might be shared between other clusters on the network.
Additionally, you must deploy a private cluster from a machine that has access to the API services for the
cloud that you provision to, the hosts on the network that you provision, and the internet, which is
required to obtain installation media. You can use any machine that meets these access requirements and
follows your company's guidelines. For example, this machine can be a bastion host on your cloud
network or a machine that has access to the network through a VPN.
To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC
and subnets to host the cluster. The installation program must also be able to resolve the DNS records
that the cluster requires. The installation program configures the Ingress Operator and API server for
access from only the private network.
The cluster still requires access to the internet to access the AWS APIs.
The following items are not required or created when you install a private cluster:
Public subnets
A public Route 53 Zone that matches the baseDomain for the cluster
The installation program does use the baseDomain that you specify to create a private Route 53 Zone
and the required records for the cluster. The cluster is configured so that the Operators do not create
public records for the cluster and all cluster machines are placed in the private subnets that you specify.
1.6.1.1.1. Limitations
You cannot make the Kubernetes API endpoints public after installation without taking
additional actions, including creating public subnets in the VPC for each availability zone in use,
creating a public load balancer, and configuring the control plane security groups to allow traffic
from the internet on port 6443, the Kubernetes API port.
If you use a public Service type load balancer, you must tag a public subnet in each availability
zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use those subnets to
create public load balancers.
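For example, you might apply the tag with the AWS CLI; the subnet ID and infrastructure ID are placeholders:
$ aws ec2 create-tags \
    --resources <public_subnet_id> \
    --tags Key=kubernetes.io/cluster/<cluster_infra_id>,Value=shared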
If you deploy OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit
constraints in new accounts or more easily abide by the operational constraints that your company's
guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create
the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets,
it cannot choose subnet CIDRs and so forth on your behalf. You must configure the networking for the
subnets that you install your cluster into.
Internet gateways
NAT gateways
Subnets
Route tables
VPCs
VPC endpoints
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and
the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set
route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the
cluster.
The VPC’s CIDR block must contain the Networking.MachineCIDR range, which is the IP
address pool for cluster machines.
You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC so
that the cluster can use the Route53 zones that are attached to the VPC to resolve the cluster's
internal DNS records. See DNS Support in Your VPC in the AWS documentation.
If you use a cluster with public access, you must create a public and a private subnet for each availability
zone that your cluster uses. The installation program modifies your subnets to add the
kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for
it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation
program can add a tag to each subnet that you specify.
If you are working in a disconnected environment, you are unable to reach the public IP addresses for
EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnets
that the cluster uses. The endpoints should be named as follows:
ec2.<region>.amazonaws.com
elasticloadbalancing.<region>.amazonaws.com
s3.<region>.amazonaws.com
AWS::EC2::NatGateway
AWS::EC2::EIP
80        Inbound HTTP traffic
22        Inbound SSH traffic
0 - 65535 Outbound ephemeral traffic
To ensure that the subnets that you provide are suitable, the installation program confirms the following
data:
The subnet CIDRs belong to the machine CIDR that you specified.
You provide subnets for each availability zone. Each availability zone contains no more than one
public and one private subnet. If you use a private cluster, provide only a private subnet for each
availability zone. Otherwise, provide exactly one public and private subnet for each availability
zone.
You provide a public subnet for each private subnet availability zone. Machines are not
provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the
OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed
from the subnets that it used.
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required
for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics
the division of permissions that you might have at your company: some individuals can create different
resources in your clouds than others. For example, you might be able to create application-specific items,
like instances, buckets, and load balancers, but not networking-related components such as VPCs,
subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions
that are required to make VPCs and core networking components within the VPC, such as subnets,
routing tables, internet gateways, NAT, and VPN. You still need permission to make the application
resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and
nodes.
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is
reduced in the following ways:
You can install multiple OpenShift Container Platform clusters in the same VPC.
Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management and entitlement. If the cluster has internet access and you
do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry
service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
internet access. Before you update the cluster, you update the content of the mirror
registry.
NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
IMPORTANT
If you create a new SSH key pair, avoid overwriting existing SSH keys.
2. Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
3. Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
Prerequisites
A computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep both the installation program and the files
that the installation program creates after you finish installing the cluster.
3. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
$ tar xvf <installation_program>.tar.gz
4. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
Prerequisites
Download the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Create the install-config.yaml file:
$ ./openshift-install create install-config --dir=<installation_directory> 1
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
iii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the installation process. If you
want to reuse the file, you must back it up now.
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.
NOTE
You cannot modify these parameters in the install-config.yaml file after installation.
baseDomain
    The base domain of your cloud provider. This value is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.
controlPlane.platform
    The cloud provider to host the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, or {}

compute.platform
    The cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, or {}

metadata.name
    The name of your cluster.
    Values: A string that contains uppercase or lowercase letters, such as dev.

platform.<platform>.region
    The region to deploy your cluster in.
    Values: A valid region for your cloud, such as us-east-1 for AWS or centralus for Azure. Red Hat OpenStack Platform (RHOSP) does not use this parameter.
sshKey
    The SSH key to use to access your cluster machines.
    NOTE
    For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
    Values: A valid, local public SSH key that you added to the ssh-agent process.
compute.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines.
    IMPORTANT
    If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

compute.replicas
    The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines.
    IMPORTANT
    If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

controlPlane.replicas
    The number of control plane machines to provision.
    Values: A positive integer greater than or equal to 3. The default value is 3.
compute.platform.aws.rootVolume.size
    The size in GiB of the root volume.
    Values: Integer, for example 500.

compute.platform.aws.rootVolume.type
    The instance type of the root volume.
    Values: Valid AWS EBS instance type, such as io1.

compute.platform.aws.type
    The EC2 instance type for the compute machines.
    Values: Valid AWS instance type, such as c5.9xlarge.

compute.platform.aws.zones
    The availability zones where the installation program creates machines for the compute MachinePool. If you provide your own VPC, you must provide a subnet in that availability zone.
    Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region
    The AWS region that the installation program creates compute resources in.
    Values: Valid AWS region, such as us-east-1.

controlPlane.platform.aws.type
    The EC2 instance type for the control plane machines.
    Values: Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones
    The availability zones where the installation program creates machines for the control plane MachinePool.
    Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region
    The AWS region that the installation program creates control plane resources in.
    Values: Valid AWS region, such as us-east-1.

platform.aws.userTags
    A map of keys and values that the installation program adds as tags to all resources that it creates.
    Values: Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.
You can customize the install-config.yaml file to specify more details about your OpenShift Container
Platform cluster’s platform or modify the values of the required parameters.
IMPORTANT
This sample YAML file is provided for reference only. You must obtain your install-
config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
hyperthreading: Enabled 3 4
name: master
platform:
aws:
zones:
- us-west-2a
- us-west-2b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge 5
replicas: 3
compute: 6
- hyperthreading: Enabled 7
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1 8
type: c5.4xlarge
zones:
- us-west-2c
replicas: 3
metadata:
name: test-cluster 9
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-west-2 10
userTags:
adminContact: jdoe
costCenter: 7536
subnets: 11
- subnet-1
- subnet-2
- subnet-3
pullSecret: '{"auths": ...}' 12
fips: false 13
sshKey: ssh-ed25519 AAAA... 14
publish: Internal 15
2 6 If you do not provide these parameters and values, the installation program provides the default
value.
3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section
must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both
sections currently define a single machine pool, it is possible that future versions of OpenShift
Container Platform will support defining multiple compute pools during installation. Only one
control plane pool is used.
8 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and
set iops to 2000.
11 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is
enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container
Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography
modules that are provided with RHCOS instead.
14 You can optionally provide the sshKey value that you use to access the machines in your cluster.
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.
15 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a
private cluster, which cannot be accessed from the internet. The default value is External.
Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS
proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by
configuring the proxy settings in the install-config.yaml file.
Prerequisites
Review the sites that your cluster requires access to and determine whether any need to bypass
the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider
APIs. Add sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.
Procedure
1. Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> 1
httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
noProxy: example.com 3
additionalTrustBundle: | 4
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http. If you use an MITM transparent proxy network that does not require
additional proxy configuration but requires additional CAs, you must not specify an
httpProxy value.
2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not
specified, then httpProxy is used for both HTTP and HTTPS connections. The URL
scheme must be http; https is currently not supported. If you use an MITM transparent
proxy network that does not require additional proxy configuration but requires additional
CAs, you must not specify an httpsProxy value.
NOTE
The installation program does not support the proxy readinessEndpoints field.
2. Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster, which uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it has a nil spec.
NOTE
Only the Proxy object named cluster is supported, and no additional proxies can be
created.
IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Run the installation program:
$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must keep the cluster running for 24
hours in a non-degraded state to ensure that the first certificate rotation has
finished.
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.4. Download and install the new version of oc.
Procedure
1. From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate
to the page for your installation type and click Download Command-line Tools.
2. Click the folder for your operating system and architecture and click the compressed file.
After you install the CLI, it is available using the oc command:
$ oc <command>
Prerequisites
Procedure
1. Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
Next steps
One way to create this infrastructure is to use the provided CloudFormation templates. You can modify
the templates to customize your infrastructure or use the information that they contain to create AWS
objects according to your company’s policies.
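For example, assuming that you have prepared a template and a matching parameters file, you might launch a stack with the AWS CLI as follows; the stack name and file names are placeholders, and the --capabilities option is required only for templates that create IAM resources:
$ aws cloudformation create-stack \
    --stack-name <cluster_name>-vpc \
    --template-body file://vpc-template.yaml \
    --parameters file://vpc-parameters.json \
    --capabilities CAPABILITY_NAMED_IAM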
Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
IMPORTANT
If you have an AWS profile stored on your computer, it must not use a temporary
session token that you generated while using a multi-factor authentication
device. The cluster continues to use your current AWS credentials to create AWS
resources for the entire life of the cluster, so you must use key-based, long-lived
credentials. To generate appropriate keys, see Managing Access Keys for IAM
Users in the AWS documentation. You can supply the keys when you run the
installation program.
Download the AWS CLI and install it on your computer. See Install the AWS CLI Using the
Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.
If you use a firewall, you must configure it to allow the sites that your cluster requires access to.
NOTE
Be sure to also review this site list if you are configuring a proxy.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management and entitlement. If the cluster has internet access and you
do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry
service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
internet access. Before you update the cluster, you update the content of the mirror
registry.
For more information about the integration testing for different platforms, see the OpenShift Container
Platform 4.x Tested Integrations page.
You can use the provided CloudFormation templates to create this infrastructure, you can manually
create the components, or you can reuse existing infrastructure that meets the cluster requirements.
Review the CloudFormation templates for more details about how the components interrelate.
A bootstrap machine. This machine is required during installation, but you can remove it after
your cluster deploys.
At least three control plane machines. The control plane machines are not governed by a
MachineSet.
Compute machines. You must create at least two compute machines, which are also known as
worker machines, during installation. These machines are not governed by a MachineSet.
You can use the following instance types for the cluster machines with the provided CloudFormation
templates.
IMPORTANT
If m4 instance types are not available in your region, such as with eu-west-3, use m5
types instead.
Instance type            Bootstrap    Control plane    Compute
i3.large                 x
m4.large or m5.large                                   x
m4.xlarge or m5.xlarge                x                x
m4.2xlarge                            x                x
m4.4xlarge                            x                x
m4.8xlarge                            x                x
m4.10xlarge                           x                x
m4.16xlarge                           x                x
c4.large                                               x
c4.xlarge                                              x
c4.2xlarge                            x                x
c4.4xlarge                            x                x
c4.8xlarge                            x                x
r4.large                                               x
r4.xlarge                             x                x
r4.2xlarge                            x                x
r4.4xlarge                            x                x
r4.8xlarge                            x                x
r4.16xlarge                           x                x
You might be able to use other instance types that meet the specifications of these instance types.
Because your cluster has limited access to automatic machine management when you use infrastructure
that you provision, you must provide a mechanism for approving cluster certificate signing requests
(CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The
machine-approver cannot guarantee the validity of a serving certificate that is requested by using
kubelet credentials because it cannot confirm that the correct machine issued the request. You must
determine and implement a method of verifying the validity of the kubelet serving certificate requests
and approving them.
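For example, after the cluster is running, you can list pending CSRs and approve a request that you have verified came from a machine you created; the CSR name is a placeholder:
$ oc get csr
$ oc adm certificate approve <csr_name>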
A VPC
DNS entries
Security groups
IAM roles
S3 buckets
AWS::EC2::NatGateway
AWS::EC2::EIP
80        Inbound HTTP traffic
22        Inbound SSH traffic
0 - 65535 Outbound ephemeral traffic
The cluster also requires load balancers and listeners for port 6443, which is required for the
Kubernetes API and its extensions, and port 22623, which is required for the Ignition config files for
new machines. The targets are the master nodes. Port 6443 must be accessible to both clients
external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the
cluster.
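For example, after you create the load balancers, you can verify their listeners with the AWS CLI; the load balancer ARN is a placeholder:
$ aws elbv2 describe-listeners --load-balancer-arn <load_balancer_arn>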
etcd record sets (AWS::Route53::RecordSet): The registration records for etcd for your control plane machines.
Public load balancer (AWS::ElasticLoadBalancingV2::LoadBalancer): The load balancer for your public subnets.
External API server record (AWS::Route53::RecordSetGroup): Alias records for the external API server.
External listener (AWS::ElasticLoadBalancingV2::Listener): A listener on port 6443 for the external load balancer.
External target group (AWS::ElasticLoadBalancingV2::TargetGroup): The target group for the external load balancer.
Private load balancer (AWS::ElasticLoadBalancingV2::LoadBalancer): The load balancer for your private subnets.
Internal API server record (AWS::Route53::RecordSetGroup): Alias records for the internal API server.
Internal listener (AWS::ElasticLoadBalancingV2::Listener): A listener on port 22623 for the internal load balancer.
Internal target group (AWS::ElasticLoadBalancingV2::TargetGroup): The target group for the internal load balancer.
Internal listener (AWS::ElasticLoadBalancingV2::Listener): A listener on port 6443 for the internal load balancer.
Internal target group (AWS::ElasticLoadBalancingV2::TargetGroup): The target group for the internal load balancer.
Security groups
The control plane and worker machines require access to the following ports:
tcp 6443
tcp 22623
Worker Ingress
The worker machines require the following Ingress groups. Each Ingress group is an
AWS::EC2::SecurityGroupIngress resource.
You must grant the machines permissions in AWS. The provided CloudFormation templates grant the
machines permissions through the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile
for each set of roles. If you do not use the templates, you can grant the machines the following broad
permissions or the following individual permissions.
Effect: Allow   Action: elasticloadbalancing:*   Resource: *
Effect: Allow   Action: iam:PassRole             Resource: *
Effect: Allow   Action: s3:GetObject             Resource: *
Effect: Allow   Action: ec2:AttachVolume         Resource: *
Effect: Allow   Action: ec2:DetachVolume         Resource: *
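As a sketch of granting such permissions outside the templates, you might attach an inline policy to a machine role with the AWS CLI; the role name, policy name, and the single s3:GetObject action shown are illustrative placeholders only:
$ cat > machine-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "*"
    }
  ]
}
EOF
$ aws iam put-role-policy \
    --role-name <machine_role> \
    --policy-name <policy_name> \
    --policy-document file://machine-policy.json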
When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web
Services (AWS), you grant that user all of the required permissions. To deploy all components of an
OpenShift Container Platform cluster, the IAM user requires the following permissions:
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances
ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute
NOTE
If you use an existing VPC, your account does not require these permissions for creating
network resources.
elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener
iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole
route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment
s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration
s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging
autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
tag:GetResources
ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation
NOTE
If you use an existing VPC, your account does not require these permissions to delete
network resources.
iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload
Prerequisites
A computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep both the installation program and the files
that the installation program creates after you finish installing the cluster.
3. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
$ tar xvf <installation_program>.tar.gz
4. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
IMPORTANT
If you create a new SSH key pair, avoid overwriting existing SSH keys.
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program. If you install a cluster on infrastructure that you provision, you must provide this key to
your cluster’s machines.
To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned
infrastructure, you must generate the files that the installation program needs to deploy your cluster
and modify them so that the cluster creates only the machines that it will use. You generate and
customize the install-config.yaml file, Kubernetes manifests, and Ignition config files.
Generate and customize the installation configuration file that the installation program needs to deploy
your cluster.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Create the install-config.yaml file:
a. Run the following command:
$ ./openshift-install create install-config --dir=<installation_directory> 1
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
b. At the prompts, provide the configuration details for your cloud:
i. Optional: Select an SSH key to use to access your cluster machines.
ii. Select aws as the platform to target.
iii. If you do not have an AWS profile stored on your computer, enter the AWS access key
ID and secret access key for the user that you configured to run the installation
program.
iv. Select the AWS region to deploy the cluster to.
v. Select the base domain for the Route53 service that you configured for your cluster.
vi. Enter a descriptive name for your cluster.
vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Edit the install-config.yaml file to set the number of compute replicas, which are also known as
worker replicas, to 0, as shown in the following compute stanza:
compute:
- hyperthreading: Enabled
name: worker
platform: {}
replicas: 0
Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS
proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by
configuring the proxy settings in the install-config.yaml file.
Prerequisites
Review the sites that your cluster requires access to and determine whether any need to bypass
the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider
APIs. Add sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.
NOTE
The Proxy object status.noProxy field is populated with the values of the
networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and
networking.serviceNetwork[] fields from your installation configuration.
Procedure
1. Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> 1
httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
noProxy: example.com 3
additionalTrustBundle: | 4
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http. If you use an MITM transparent proxy network that does not require
additional proxy configuration but requires additional CAs, you must not specify an
httpProxy value.
2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not
specified, then httpProxy is used for both HTTP and HTTPS connections. The URL
scheme must be http; https is currently not supported. If you use an MITM transparent
proxy network that does not require additional proxy configuration but requires additional
CAs, you must not specify an httpsProxy value.
NOTE
The installation program does not support the proxy readinessEndpoints field.
2. Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it will have a nil spec.
NOTE
Only the Proxy object named cluster is supported, and no additional proxies can be
created.
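For example, after the cluster is installed and you are logged in with the oc CLI, you can inspect
the generated object. This is an illustrative check, not a required installation step:
$ oc get proxy/cluster -o yaml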
Because you must modify some cluster definition files and manually start the cluster machines, you must
generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.
IMPORTANT
The Ignition config files that the installation program generates contain certificates that
expire after 24 hours. You must complete your cluster installation and keep the cluster
running for 24 hours in a non-degraded state to ensure that the first certificate rotation
has finished.
Prerequisites
Procedure
1. Generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir=<installation_directory> 1
1 For <installation_directory>, specify the installation directory that contains the install-
config.yaml file you created.
Because you create your own compute machines later in the installation process, you can safely
ignore this warning.
2. Remove the Kubernetes manifest files that define the control plane machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
By removing these files, you prevent the cluster from automatically generating control plane
machines.
3. Remove the Kubernetes manifest files that define the worker machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage the worker machines yourself, you do not need to initialize
these machines.
4. Modify the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes
manifest file to prevent Pods from being scheduled on the control plane machines:
a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
b. Locate the mastersSchedulable parameter and set its value to False.
c. Save and exit the file.
NOTE
Currently, due to a Kubernetes limitation, router Pods running on control plane
machines will not be reachable by the ingress load balancer.
5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove
the privateZone and publicZone sections from the
<installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
creationTimestamp: null
name: cluster
spec:
baseDomain: example.openshift.com
privateZone: 1
id: mycluster-100419-private-zone
publicZone: 2
id: example.openshift.com
status: {}
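1 2 Remove this section completely.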
If you do so, you must add ingress DNS records manually in a later step.
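6. Obtain the Ignition config files:
$ ./openshift-install create ignition-configs --dir=<installation_directory> 1
1 For <installation_directory>, specify the same installation directory.
The following files are generated in the directory: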
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
To extract and view the infrastructure name from the Ignition config file metadata, run the
following command:
$ jq -r .infraID <installation_directory>/metadata.json 1
openshift-vw9j6 2
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2 The output of this command is your cluster name and a random string.
NOTE
If you do not use the provided CloudFormation template to create your AWS
infrastructure, you must review the provided information and manually create the
infrastructure. If your cluster does not initialize correctly, you might have to contact Red
Hat support with your installation logs.
Prerequisites
Procedure
1. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "VpcCidr", 1
"ParameterValue": "10.0.0.0/16" 2
},
{
"ParameterKey": "AvailabilityZoneCount", 3
"ParameterValue": "1" 4
},
{
"ParameterKey": "SubnetBits", 5
"ParameterValue": "12" 6
}
]
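1 The CIDR block for the VPC.
2 Specify a CIDR block in the format x.x.x.x/16-24.
3 The number of availability zones to deploy the VPC in.
4 Specify an integer between 1 and 3.
5 The size of each subnet in each availability zone.
6 Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.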
2. Copy the template from the CloudFormation template for the VPC section of this topic and
save it as a YAML file on your computer. This template describes the VPC that your cluster
requires.
3. Launch the template:
IMPORTANT
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1
--template-body file://<template>.yaml 2
--parameters file://<parameters>.json 3
1 <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the
name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
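4. Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>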
After the StackStatus displays CREATE_COMPLETE, the output displays values for the
following parameters. You must provide these parameter values to the other CloudFormation
templates that you run to create your cluster:
You can use the following CloudFormation template to deploy the VPC that you need for your
OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs
Parameters:
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4]
[0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
AvailabilityZoneCount:
ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
MinValue: 1
MaxValue: 3
Default: 1
Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
Type: Number
SubnetBits:
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
MinValue: 5
MaxValue: 13
Default: 12
Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 =
/19)"
Type: Number
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Network Configuration"
Parameters:
- VpcCidr
- SubnetBits
- Label:
default: "Availability Zones"
Parameters:
- AvailabilityZoneCount
ParameterLabels:
AvailabilityZoneCount:
default: "Availability Zone Count"
VpcCidr:
default: "VPC CIDR"
SubnetBits:
default: "Bits Per Subnet"
Conditions:
DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]
Resources:
VPC:
Type: "AWS::EC2::VPC"
Properties:
EnableDnsSupport: "true"
EnableDnsHostnames: "true"
CidrBlock: !Ref VpcCidr
PublicSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
InternetGateway:
Type: "AWS::EC2::InternetGateway"
GatewayToInternet:
Type: "AWS::EC2::VPCGatewayAttachment"
Properties:
VpcId: !Ref VPC
InternetGatewayId: !Ref InternetGateway
PublicRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PublicRoute:
Type: "AWS::EC2::Route"
DependsOn: GatewayToInternet
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
PublicSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation2:
Condition: DoAz2
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet2
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation3:
Condition: DoAz3
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet3
RouteTableId: !Ref PublicRouteTable
PrivateSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PrivateSubnet
RouteTableId: !Ref PrivateRouteTable
NAT:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Properties:
AllocationId:
"Fn::GetAtt":
- EIP
- AllocationId
SubnetId: !Ref PublicSubnet
EIP:
Type: "AWS::EC2::EIP"
Properties:
Domain: vpc
Route:
Type: "AWS::EC2::Route"
Properties:
RouteTableId:
Ref: PrivateRouteTable
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT
PrivateSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable2:
Type: "AWS::EC2::RouteTable"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PrivateSubnet2
RouteTableId: !Ref PrivateRouteTable2
NAT2:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz2
Properties:
AllocationId:
"Fn::GetAtt":
- EIP2
- AllocationId
SubnetId: !Ref PublicSubnet2
EIP2:
Type: "AWS::EC2::EIP"
Condition: DoAz2
Properties:
Domain: vpc
Route2:
Type: "AWS::EC2::Route"
Condition: DoAz2
Properties:
RouteTableId:
Ref: PrivateRouteTable2
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT2
PrivateSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable3:
Type: "AWS::EC2::RouteTable"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation3:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz3
Properties:
SubnetId: !Ref PrivateSubnet3
RouteTableId: !Ref PrivateRouteTable3
NAT3:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz3
Properties:
AllocationId:
"Fn::GetAtt":
- EIP3
- AllocationId
SubnetId: !Ref PublicSubnet3
EIP3:
Type: "AWS::EC2::EIP"
Condition: DoAz3
Properties:
Domain: vpc
Route3:
Type: "AWS::EC2::Route"
Condition: DoAz3
Properties:
RouteTableId:
Ref: PrivateRouteTable3
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT3
S3Endpoint:
Type: AWS::EC2::VPCEndpoint
Properties:
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal: '*'
Action:
- '*'
Resource:
- '*'
RouteTableIds:
- !Ref PublicRouteTable
- !Ref PrivateRouteTable
- !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
- !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
ServiceName: !Join
- ''
- - com.amazonaws.
- !Ref 'AWS::Region'
- .s3
VpcId: !Ref VPC
Outputs:
VpcId:
Description: ID of the new VPC.
Value: !Ref VPC
PublicSubnetIds:
Description: Subnet IDs of the public subnets.
Value:
!Join [
",",
[!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref
PublicSubnet3, !Ref "AWS::NoValue"]]
]
PrivateSubnetIds:
Description: Subnet IDs of the private subnets.
Value:
!Join [
",",
[!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref
PrivateSubnet3, !Ref "AWS::NoValue"]]
]
You can run the template multiple times within a single VPC.
NOTE
If you do not use the provided CloudFormation template to create your AWS
infrastructure, you must review the provided information and manually create the
infrastructure. If your cluster does not initialize correctly, you might have to contact Red
Hat support with your installation logs.
Prerequisites
Procedure
1. Obtain the Hosted Zone ID for the Route53 zone that you specified in the install-config.yaml
file for your cluster. You can obtain this ID from the AWS console or by running the following
command:
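$ aws route53 list-hosted-zones-by-name | jq --arg name "<route53_domain>." -r '.HostedZones | .[] | select(.Name=="\($name)") | .Id' 1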
1 For the <route53_domain>, specify the Route53 base domain that you used when you
generated the install-config.yaml file for the cluster.
2. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "ClusterName", 1
"ParameterValue": "mycluster" 2
},
{
"ParameterKey": "InfrastructureName", 3
"ParameterValue": "mycluster-<random_string>" 4
},
{
"ParameterKey": "HostedZoneId", 5
"ParameterValue": "<random_string>" 6
},
{
"ParameterKey": "HostedZoneName", 7
"ParameterValue": "example.com" 8
},
{
"ParameterKey": "PublicSubnets", 9
"ParameterValue": "subnet-<random_string>" 10
},
{
"ParameterKey": "PrivateSubnets", 11
"ParameterValue": "subnet-<random_string>" 12
},
{
"ParameterKey": "VpcId", 13
"ParameterValue": "vpc-<random_string>" 14
}
]
2 Specify the cluster name that you used when you generated the install-config.yaml file
for the cluster.
3 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
4 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
6 Specify the Route53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You
can obtain this value from the AWS console.
8 Specify the Route53 base domain that you used when you generated the install-
config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the
AWS console.
10 Specify the PublicSubnetIds value from the output of the CloudFormation template for
the VPC.
12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for
the VPC.
14 Specify the VpcId value from the output of the CloudFormation template for the VPC.
3. Copy the template from the CloudFormation template for the network and load balancers
section of this topic and save it as a YAML file on your computer. This template describes the
networking and load balancing objects that your cluster requires.
4. Launch the template:
IMPORTANT
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1
--template-body file://<template>.yaml 2
--parameters file://<parameters>.json 3
1 <name> is the name for the CloudFormation stack, such as cluster-dns. You need the
name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
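5. Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>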
After the StackStatus displays CREATE_COMPLETE, the output displays values for the
following parameters. You must provide these parameter values to the other CloudFormation
templates that you run to create your cluster:
RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers.
You can use the following CloudFormation template to deploy the networking objects and load
balancers that you need for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)
Parameters:
ClusterName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, representative cluster name to use for host names and other identifying
names.
Type: String
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or
used by the cluster.
Type: String
HostedZoneId:
Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
Type: String
HostedZoneName:
Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing
period.
Type: String
Default: "example.com"
PublicSubnets:
Description: The internet-facing subnets.
Type: List<AWS::EC2::Subnet::Id>
PrivateSubnets:
Description: The internal subnets.
Type: List<AWS::EC2::Subnet::Id>
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- ClusterName
- InfrastructureName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- PublicSubnets
- PrivateSubnets
- Label:
default: "DNS"
Parameters:
- HostedZoneName
- HostedZoneId
ParameterLabels:
ClusterName:
default: "Cluster Name"
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
PublicSubnets:
default: "Public Subnets"
PrivateSubnets:
default: "Private Subnets"
HostedZoneName:
default: "Public Hosted Zone Name"
HostedZoneId:
default: "Public Hosted Zone ID"
Resources:
ExtApiElb:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
IpAddressType: ipv4
Subnets: !Ref PublicSubnets
Type: network
IntApiElb:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: !Join ["-", [!Ref InfrastructureName, "int"]]
Scheme: internal
IpAddressType: ipv4
Subnets: !Ref PrivateSubnets
Type: network
IntDns:
Type: "AWS::Route53::HostedZone"
Properties:
HostedZoneConfig:
Comment: "Managed by CloudFormation"
Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
HostedZoneTags:
- Key: Name
Value: !Join ["-", [!Ref InfrastructureName, "int"]]
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "owned"
VPCs:
- VPCId: !Ref VpcId
VPCRegion: !Ref "AWS::Region"
ExternalApiServerRecord:
Type: AWS::Route53::RecordSetGroup
Properties:
Comment: Alias record for the API server
HostedZoneId: !Ref HostedZoneId
RecordSets:
- Name:
!Join [
".",
["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
DNSName: !GetAtt ExtApiElb.DNSName
InternalApiServerRecord:
Type: AWS::Route53::RecordSetGroup
Properties:
Comment: Alias record for the API server
HostedZoneId: !Ref IntDns
RecordSets:
- Name:
!Join [
".",
["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
DNSName: !GetAtt IntApiElb.DNSName
- Name:
!Join [
".",
["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
DNSName: !GetAtt IntApiElb.DNSName
ExternalApiListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: ExternalApiTargetGroup
LoadBalancerArn:
Ref: ExtApiElb
Port: 6443
Protocol: TCP
ExternalApiTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Port: 6443
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
InternalApiListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: InternalApiTargetGroup
LoadBalancerArn:
Ref: IntApiElb
Port: 6443
Protocol: TCP
InternalApiTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Port: 6443
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
InternalServiceInternalListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: InternalServiceTargetGroup
LoadBalancerArn:
Ref: IntApiElb
Port: 22623
Protocol: TCP
InternalServiceTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Port: 22623
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
RegisterTargetLambdaIamRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "lambda.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref InternalApiTargetGroup
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref InternalServiceTargetGroup
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref ExternalApiTargetGroup
RegisterNlbIpTargets:
Type: "AWS::Lambda::Function"
Properties:
Handler: "index.handler"
Role:
Fn::GetAtt:
- "RegisterTargetLambdaIamRole"
- "Arn"
Code:
ZipFile: |
import json
import boto3
import cfnresponse
def handler(event, context):
elb = boto3.client('elbv2')
if event['RequestType'] == 'Delete':
elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=
[{'Id': event['ResourceProperties']['TargetIp']}])
elif event['RequestType'] == 'Create':
elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id':
event['ResourceProperties']['TargetIp']}])
responseData = {}
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData,
event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
Runtime: "python3.7"
Timeout: 120
RegisterSubnetTagsLambdaIamRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "lambda.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
[
"ec2:DeleteTags",
"ec2:CreateTags"
]
Resource: "arn:aws:ec2:*:*:subnet/*"
- Effect: "Allow"
Action:
[
"ec2:DescribeSubnets",
"ec2:DescribeTags"
]
Resource: "*"
RegisterSubnetTags:
Type: "AWS::Lambda::Function"
Properties:
Handler: "index.handler"
Role:
Fn::GetAtt:
- "RegisterSubnetTagsLambdaIamRole"
- "Arn"
Code:
ZipFile: |
import json
import boto3
import cfnresponse
def handler(event, context):
ec2_client = boto3.client('ec2')
if event['RequestType'] == 'Delete':
for subnet_id in event['ResourceProperties']['Subnets']:
ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' +
event['ResourceProperties']['InfrastructureName']}]);
elif event['RequestType'] == 'Create':
for subnet_id in event['ResourceProperties']['Subnets']:
ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' +
event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]);
responseData = {}
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData,
event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
Runtime: "python3.7"
Timeout: 120
RegisterPublicSubnetTags:
Type: Custom::SubnetRegister
Properties:
ServiceToken: !GetAtt RegisterSubnetTags.Arn
InfrastructureName: !Ref InfrastructureName
Subnets: !Ref PublicSubnets
RegisterPrivateSubnetTags:
Type: Custom::SubnetRegister
Properties:
ServiceToken: !GetAtt RegisterSubnetTags.Arn
InfrastructureName: !Ref InfrastructureName
Subnets: !Ref PrivateSubnets
Outputs:
PrivateHostedZoneId:
Description: Hosted zone ID for the private DNS, which is required for private records.
Value: !Ref IntDns
ExternalApiLoadBalancerName:
Description: Full name of the External API load balancer created.
Value: !GetAtt ExtApiElb.LoadBalancerFullName
InternalApiLoadBalancerName:
Description: Full name of the Internal API load balancer created.
Value: !GetAtt IntApiElb.LoadBalancerFullName
ApiServerDnsName:
Description: Full hostname of the API server, which is required for the Ignition config files.
Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
RegisterNlbIpTargetsLambda:
Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
Value: !GetAtt RegisterNlbIpTargets.Arn
ExternalApiTargetGroupArn:
Description: ARN of External API target group.
Value: !Ref ExternalApiTargetGroup
InternalApiTargetGroupArn:
Description: ARN of Internal API target group.
Value: !Ref InternalApiTargetGroup
InternalServiceTargetGroupArn:
Description: ARN of internal service target group.
Value: !Ref InternalServiceTargetGroup
NOTE
If you do not use the provided CloudFormation template to create your AWS
infrastructure, you must review the provided information and manually create the
infrastructure. If your cluster does not initialize correctly, you might have to contact Red
Hat support with your installation logs.
Prerequisites
Procedure
1. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "InfrastructureName", 1
"ParameterValue": "mycluster-<random_string>" 2
},
{
"ParameterKey": "VpcCidr", 3
"ParameterValue": "10.0.0.0/16" 4
},
{
"ParameterKey": "PrivateSubnets", 5
"ParameterValue": "subnet-<random_string>" 6
},
{
"ParameterKey": "VpcId", 7
"ParameterValue": "vpc-<random_string>" 8
}
]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
4 Specify the CIDR block parameter that you used for the VPC that you defined in the form
x.x.x.x/16-24.
6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for
the VPC.
8 Specify the VpcId value from the output of the CloudFormation template for the VPC.
2. Copy the template from the CloudFormation template for security objects section of this
topic and save it as a YAML file on your computer. This template describes the security groups
and roles that your cluster requires.
3. Launch the template:
IMPORTANT
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1
--template-body file://<template>.yaml 2
--parameters file://<parameters>.json 3
1 <name> is the name for the CloudFormation stack, such as cluster-sec. You need the
name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
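4. Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>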
After the StackStatus displays CREATE_COMPLETE, the output displays values for the
following parameters. You must provide these parameter values to the other CloudFormation
templates that you run to create your cluster:
You can use the following CloudFormation template to deploy the security objects that you need for
your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or
used by the cluster.
Type: String
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4]
[0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
PrivateSubnets:
Description: The internal subnets.
Type: List<AWS::EC2::Subnet::Id>
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- VpcCidr
- PrivateSubnets
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
VpcCidr:
default: "VPC CIDR"
PrivateSubnets:
default: "Private Subnets"
Resources:
MasterSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Master Security Group
SecurityGroupIngress:
- IpProtocol: icmp
FromPort: 0
ToPort: 0
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
ToPort: 6443
FromPort: 6443
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22623
ToPort: 22623
CidrIp: !Ref VpcCidr
VpcId: !Ref VpcId
WorkerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Worker Security Group
SecurityGroupIngress:
- IpProtocol: icmp
FromPort: 0
ToPort: 0
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref VpcCidr
VpcId: !Ref VpcId
MasterIngressEtcd:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: etcd
FromPort: 2379
ToPort: 2380
IpProtocol: tcp
MasterIngressVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
MasterIngressWorkerVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
MasterIngressInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
MasterIngressWorkerInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
MasterIngressKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes kubelet, scheduler and controller manager
FromPort: 10250
ToPort: 10259
IpProtocol: tcp
MasterIngressWorkerKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes kubelet, scheduler and controller manager
FromPort: 10250
ToPort: 10259
IpProtocol: tcp
MasterIngressIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
MasterIngressWorkerIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
WorkerIngressVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
WorkerIngressWorkerVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
WorkerIngressInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
WorkerIngressWorkerInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
WorkerIngressKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes secure kubelet port
FromPort: 10250
ToPort: 10250
IpProtocol: tcp
WorkerIngressWorkerKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal Kubernetes communication
FromPort: 10250
ToPort: 10250
IpProtocol: tcp
WorkerIngressIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
WorkerIngressWorkerIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
MasterIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: "ec2:*"
Resource: "*"
- Effect: "Allow"
Action: "elasticloadbalancing:*"
Resource: "*"
- Effect: "Allow"
Action: "iam:PassRole"
Resource: "*"
- Effect: "Allow"
Action: "s3:GetObject"
Resource: "*"
MasterInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles:
- Ref: "MasterIamRole"
WorkerIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: "ec2:Describe*"
Resource: "*"
WorkerInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles:
- Ref: "WorkerIamRole"
Outputs:
MasterSecurityGroupId:
Description: Master Security Group ID
Value: !GetAtt MasterSecurityGroup.GroupId
WorkerSecurityGroupId:
Description: Worker Security Group ID
Value: !GetAtt WorkerSecurityGroup.GroupId
MasterInstanceProfile:
Description: Master IAM Instance Profile
Value: !Ref MasterInstanceProfile
WorkerInstanceProfile:
Description: Worker IAM Instance Profile
Value: !Ref WorkerInstanceProfile
ap-northeast-1 ami-05f59cf6db1d591fe
ap-northeast-2 ami-06a06d31eefbb25c4
ap-south-1 ami-0247a9f45f1917aaa
ap-southeast-1 ami-0b628e07d986a6c36
ap-southeast-2 ami-0bdd5c426d91caf8e
ca-central-1 ami-0c6c7ce738fe5112b
eu-central-1 ami-0a8b58b4be8846e83
eu-north-1 ami-04e659bd9575cea3d
eu-west-1 ami-0d2e5d86e80ef2bd4
eu-west-2 ami-0a27424b3eb592b4d
eu-west-3 ami-0a8cb038a6e583bfa
me-south-1 ami-0c9d86eb9d0acee5d
sa-east-1 ami-0d020f4ea19dbc7fa
us-east-1 ami-0543fbfb4749f3c3b
us-east-2 ami-070c6257b10036038
us-west-1 ami-02b6556210798d665
us-west-2 ami-0409b2cebfc3ac3d0
NOTE
If you do not use the provided CloudFormation template to create your bootstrap node,
you must review the provided information and manually create the infrastructure. If your
cluster does not initialize correctly, you might have to contact Red Hat support with your
installation logs.
Prerequisites
Procedure
1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is
located in your installation directory. One way to do this is to create an S3 bucket in your
cluster’s region and upload the Ignition config file to it.
IMPORTANT
The provided CloudFormation Template assumes that the Ignition config files
for your cluster are served from an S3 bucket. If you choose to serve the files
from another location, you must modify the templates.
NOTE
The bootstrap Ignition config file does contain secrets, like X.509 keys. The
following steps provide basic security for the S3 bucket. To provide additional
security, you can enable an S3 bucket policy to allow only certain users, such as
the OpenShift IAM user, to access objects that the bucket contains. You can
avoid S3 entirely and serve your bootstrap Ignition config file from any address
that the bootstrap machine can reach.
a. Create the bucket:
$ aws s3 mb s3://<cluster-name>-infra 1
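1 <cluster-name>-infra is the bucket name.
b. Upload the bootstrap.ign Ignition config file to the bucket:
$ aws s3 cp bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign
c. Verify that the file uploaded: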
$ aws s3 ls s3://<cluster-name>-infra/
2. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "InfrastructureName", 1
"ParameterValue": "mycluster-<random_string>" 2
},
{
"ParameterKey": "RhcosAmi", 3
"ParameterValue": "ami-<random_string>" 4
},
{
"ParameterKey": "AllowedBootstrapSshCidr", 5
"ParameterValue": "0.0.0.0/0" 6
},
{
"ParameterKey": "PublicSubnet", 7
"ParameterValue": "subnet-<random_string>" 8
},
{
"ParameterKey": "MasterSecurityGroupId", 9
"ParameterValue": "sg-<random_string>" 10
},
{
"ParameterKey": "VpcId", 11
"ParameterValue": "vpc-<random_string>" 12
},
{
"ParameterKey": "BootstrapIgnitionLocation", 13
"ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
},
{
"ParameterKey": "AutoRegisterELB", 15
"ParameterValue": "yes" 16
},
{
"ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
"ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:
<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
},
{
"ParameterKey": "ExternalApiTargetGroupArn", 19
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
},
{
"ParameterKey": "InternalApiTargetGroupArn", 21
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
},
{
"ParameterKey": "InternalServiceTargetGroupArn", 23
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
}
]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.
7 The public subnet that is associated with your VPC to launch the bootstrap node into.
8 Specify the PublicSubnetIds value from the output of the CloudFormation template for
the VPC.
12 Specify the VpcId value from the output of the CloudFormation template for the VPC.
16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name
(ARN) value.
3. Copy the template from the CloudFormation template for the bootstrap machine section of
this topic and save it as a YAML file on your computer. This template describes the bootstrap
machine that your cluster requires.
4. Launch the template:
IMPORTANT
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1
--template-body file://<template>.yaml 2
--parameters file://<parameters>.json 3
1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need
the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
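5. Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>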
After the StackStatus displays CREATE_COMPLETE, the output displays values for the
following parameters. You must provide these parameter values to the other CloudFormation
templates that you run to create your cluster:
You can use the following CloudFormation template to deploy the bootstrap machine that you need for
your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or
used by the cluster.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
Type: AWS::EC2::Image::Id
AllowedBootstrapSshCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4]
[0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
Default: 0.0.0.0/0
Description: CIDR block to allow SSH access to the bootstrap node.
Type: String
PublicSubnet:
Description: The public subnet to launch the bootstrap node into.
Type: AWS::EC2::Subnet::Id
MasterSecurityGroupId:
Description: The master security group ID for registering temporary rules.
Type: AWS::EC2::SecurityGroup::Id
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
BootstrapIgnitionLocation:
Default: s3://my-s3-bucket/bootstrap.ign
Description: Ignition config file location.
Type: String
AutoRegisterELB:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
Type: String
RegisterNlbIpTargetsLambdaArn:
Description: ARN for NLB IP target registration lambda.
Type: String
ExternalApiTargetGroupArn:
Description: ARN for external API load balancer target group.
Type: String
InternalApiTargetGroupArn:
Description: ARN for internal API load balancer target group.
Type: String
InternalServiceTargetGroupArn:
Description: ARN for internal service load balancer target group.
Type: String
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- RhcosAmi
- BootstrapIgnitionLocation
- MasterSecurityGroupId
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- AllowedBootstrapSshCidr
- PublicSubnet
- Label:
default: "Load Balancer Automation"
Parameters:
- AutoRegisterELB
- RegisterNlbIpTargetsLambdaArn
- ExternalApiTargetGroupArn
- InternalApiTargetGroupArn
- InternalServiceTargetGroupArn
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
AllowedBootstrapSshCidr:
default: "Allowed SSH Source"
PublicSubnet:
default: "Public Subnet"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
BootstrapIgnitionLocation:
default: "Bootstrap Ignition Source"
MasterSecurityGroupId:
default: "Master Security Group ID"
Conditions:
DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
Resources:
BootstrapIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: "ec2:Describe*"
Resource: "*"
- Effect: "Allow"
Action: "ec2:AttachVolume"
Resource: "*"
- Effect: "Allow"
Action: "ec2:DetachVolume"
Resource: "*"
- Effect: "Allow"
Action: "s3:GetObject"
Resource: "*"
BootstrapInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Path: "/"
Roles:
- Ref: "BootstrapIamRole"
BootstrapSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Bootstrap Security Group
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref AllowedBootstrapSshCidr
- IpProtocol: tcp
ToPort: 19531
FromPort: 19531
CidrIp: 0.0.0.0/0
VpcId: !Ref VpcId
BootstrapInstance:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
IamInstanceProfile: !Ref BootstrapInstanceProfile
InstanceType: "i3.large"
NetworkInterfaces:
- AssociatePublicIpAddress: "true"
DeviceIndex: "0"
GroupSet:
- !Ref "BootstrapSecurityGroup"
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "PublicSubnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"replace":{"source":"${S3Loc}","verification":{}}},"timeouts":
{},"version":"2.1.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
S3Loc: !Ref BootstrapIgnitionLocation
}
RegisterBootstrapApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
RegisterBootstrapInternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
RegisterBootstrapInternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
Outputs:
BootstrapInstanceId:
Description: Bootstrap Instance ID.
Value: !Ref BootstrapInstance
BootstrapPublicIp:
Description: The bootstrap node public IP address.
Value: !GetAtt BootstrapInstance.PublicIp
BootstrapPrivateIp:
Description: The bootstrap node private IP address.
Value: !GetAtt BootstrapInstance.PrivateIp
NOTE
If you do not use the provided CloudFormation template to create your control plane
nodes, you must review the provided information and manually create the infrastructure.
If your cluster does not initialize correctly, you might have to contact Red Hat support
with your installation logs.
Prerequisites
Procedure
1. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "InfrastructureName", 1
"ParameterValue": "mycluster-<random_string>" 2
},
{
"ParameterKey": "RhcosAmi", 3
"ParameterValue": "ami-<random_string>" 4
},
{
"ParameterKey": "AutoRegisterDNS", 5
"ParameterValue": "yes" 6
},
{
"ParameterKey": "PrivateHostedZoneId", 7
"ParameterValue": "<random_string>" 8
},
{
"ParameterKey": "PrivateHostedZoneName", 9
"ParameterValue": "mycluster.example.com" 10
},
{
"ParameterKey": "Master0Subnet", 11
"ParameterValue": "subnet-<random_string>" 12
},
{
"ParameterKey": "Master1Subnet", 13
"ParameterValue": "subnet-<random_string>" 14
},
{
"ParameterKey": "Master2Subnet", 15
"ParameterValue": "subnet-<random_string>" 16
},
{
"ParameterKey": "MasterSecurityGroupId", 17
"ParameterValue": "sg-<random_string>" 18
},
{
"ParameterKey": "IgnitionLocation", 19
"ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master"
20
},
{
"ParameterKey": "CertificateAuthorities", 21
"ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22
},
{
"ParameterKey": "MasterInstanceProfileName", 23
"ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24
},
{
"ParameterKey": "MasterInstanceType", 25
"ParameterValue": "m4.xlarge" 26
},
{
"ParameterKey": "AutoRegisterELB", 27
"ParameterValue": "yes" 28
},
{
"ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29
"ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:
<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30
},
{
"ParameterKey": "ExternalApiTargetGroupArn", 31
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32
},
{
"ParameterKey": "InternalApiTargetGroupArn", 33
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34
},
{
"ParameterKey": "InternalServiceTargetGroupArn", 35
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36
}
]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane
machines.
6 Specify yes or no. If you specify yes, you must provide Hosted Zone information.
8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template
for DNS and load balancing.
12 14 16 Specify a subnet from the PrivateSubnets value from the output of the
CloudFormation template for DNS and load balancing.
22 Specify the value from the master.ign file that is in the installation directory. This value is
the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==.
25 The type of AWS instance to use for the control plane machines.
26 Allowed values:
m4.xlarge
m4.2xlarge
m4.4xlarge
m4.8xlarge
m4.10xlarge
m4.16xlarge
c4.2xlarge
c4.4xlarge
c4.8xlarge
r4.xlarge
r4.2xlarge
r4.4xlarge
r4.8xlarge
r4.16xlarge
IMPORTANT
If m4 instance types are not available in your region, such as with eu-
west-3, specify an m5 type, such as m5.xlarge, instead.
28 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name
(ARN) value.
2. Copy the template from the CloudFormation template for control plane machines section of
this topic and save it as a YAML file on your computer. This template describes the control plane
machines that your cluster requires.
3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance
type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.
4. Launch the template:
IMPORTANT
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1
--template-body file://<template>.yaml 2
--parameters file://<parameters>.json 3
1 <name> is the name for the CloudFormation stack, such as cluster-control-plane. You
need the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
You can use the following CloudFormation template to deploy the control plane machines that you need
for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
Type: AWS::EC2::Image::Id
AutoRegisterDNS:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone
information?
Type: String
PrivateHostedZoneId:
Description: The Route53 private zone ID to register the etcd targets with, such as
Z21IXYZABCZ2A4.
Type: String
PrivateHostedZoneName:
Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the
trailing period.
Type: String
Master0Subnet:
Description: The subnets, recommend private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
Master1Subnet:
Description: The subnets, recommend private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
Master2Subnet:
Description: The subnets, recommend private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
MasterSecurityGroupId:
Description: The master security group ID to associate with master nodes.
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
Description: Ignition config file location.
Type: String
CertificateAuthorities:
Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
Description: Base64 encoded certificate authority string to use.
Type: String
MasterInstanceProfileName:
Description: IAM profile to associate with master nodes.
Type: String
MasterInstanceType:
Default: m4.xlarge
Type: String
AllowedValues:
- "m4.xlarge"
- "m4.2xlarge"
- "m4.4xlarge"
- "m4.8xlarge"
- "m4.10xlarge"
- "m4.16xlarge"
- "c4.2xlarge"
- "c4.4xlarge"
- "c4.8xlarge"
- "r4.xlarge"
- "r4.2xlarge"
- "r4.4xlarge"
- "r4.8xlarge"
- "r4.16xlarge"
AutoRegisterELB:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
Type: String
RegisterNlbIpTargetsLambdaArn:
Description: ARN for NLB IP target registration lambda. Supply the value from the cluster
infrastructure or select "no" for AutoRegisterELB.
Type: String
ExternalApiTargetGroupArn:
Description: ARN for external API load balancer target group. Supply the value from the cluster
infrastructure or select "no" for AutoRegisterELB.
Type: String
InternalApiTargetGroupArn:
Description: ARN for internal API load balancer target group. Supply the value from the cluster
infrastructure or select "no" for AutoRegisterELB.
Type: String
InternalServiceTargetGroupArn:
Description: ARN for internal service load balancer target group. Supply the value from the cluster
infrastructure or select "no" for AutoRegisterELB.
Type: String
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- MasterInstanceType
- RhcosAmi
- IgnitionLocation
- CertificateAuthorities
- MasterSecurityGroupId
- MasterInstanceProfileName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- AllowedBootstrapSshCidr
- Master0Subnet
- Master1Subnet
- Master2Subnet
- Label:
default: "DNS"
Parameters:
- AutoRegisterDNS
- PrivateHostedZoneName
- PrivateHostedZoneId
- Label:
default: "Load Balancer Automation"
Parameters:
- AutoRegisterELB
- RegisterNlbIpTargetsLambdaArn
- ExternalApiTargetGroupArn
- InternalApiTargetGroupArn
- InternalServiceTargetGroupArn
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
Master0Subnet:
default: "Master-0 Subnet"
Master1Subnet:
default: "Master-1 Subnet"
Master2Subnet:
default: "Master-2 Subnet"
MasterInstanceType:
default: "Master Instance Type"
MasterInstanceProfileName:
default: "Master Instance Profile Name"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
BootstrapIgnitionLocation:
default: "Master Ignition Source"
CertificateAuthorities:
default: "Ignition CA String"
MasterSecurityGroupId:
default: "Master Security Group ID"
AutoRegisterDNS:
default: "Use Provided DNS Automation"
AutoRegisterELB:
default: "Use Provided ELB Automation"
PrivateHostedZoneName:
default: "Private Hosted Zone Name"
PrivateHostedZoneId:
default: "Private Hosted Zone ID"
Conditions:
DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
DoDns: !Equals ["yes", !Ref AutoRegisterDNS]
Resources:
Master0:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master0Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":
{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":
{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster0:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
RegisterMaster0InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
RegisterMaster0InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
Master1:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master1Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":
{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":
{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster1:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
RegisterMaster1InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
RegisterMaster1InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
Master2:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master2Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":
{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":
{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster2:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
RegisterMaster2InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
RegisterMaster2InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
EtcdSrvRecords:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]],
]
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]],
]
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]],
]
TTL: 60
Type: SRV
Etcd0Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master0.PrivateIp
TTL: 60
Type: A
Etcd1Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master1.PrivateIp
TTL: 60
Type: A
Etcd2Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master2.PrivateIp
TTL: 60
Type: A
Outputs:
PrivateIPs:
Description: The control-plane node private IP addresses.
Value:
!Join [
",",
[!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
]
After you create all of the required infrastructure in Amazon Web Services (AWS), you can install the
cluster.
Prerequisites
If you plan to manually manage the worker machines, create the worker machines.
Procedure
1. Change to the directory that contains the installation program and run the following command:
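For example, the standard invocation is the following sketch, which assumes that the installer binary is in the current directory:
$ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ 1
--log-level=info 2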
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2 To view different installation details, specify warn, debug, or error instead of info.
If the command exits without a FATAL warning, your production control plane has initialized.
You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. The easiest way to
manually create these nodes is to modify the provided CloudFormation template.
IMPORTANT
The CloudFormation template creates a stack that represents one worker machine. You
must create a stack for each worker machine.
NOTE
If you do not use the provided CloudFormation template to create your worker nodes,
you must review the provided information and manually create the infrastructure. If your
cluster does not initialize correctly, you might have to contact Red Hat support with your
installation logs.
Prerequisites
Procedure
1. Create a JSON file that contains the parameter values that the CloudFormation template
requires:
[
{
"ParameterKey": "InfrastructureName", 1
"ParameterValue": "mycluster-<random_string>" 2
},
{
"ParameterKey": "RhcosAmi", 3
"ParameterValue": "ami-<random_string>" 4
},
{
"ParameterKey": "Subnet", 5
"ParameterValue": "subnet-<random_string>" 6
},
{
"ParameterKey": "WorkerSecurityGroupId", 7
"ParameterValue": "sg-<random_string>" 8
},
{
"ParameterKey": "IgnitionLocation", 9
"ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker"
10
},
{
"ParameterKey": "CertificateAuthorities", 11
"ParameterValue": "" 12
},
{
"ParameterKey": "WorkerInstanceProfileName", 13
"ParameterValue": "" 14
},
{
"ParameterKey": "WorkerInstanceType", 15
"ParameterValue": "m4.large" 16
}
]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
6 Specify a subnet from the PrivateSubnetIds value from the output of the CloudFormation
template for the VPC.
12 Specify the value from the worker.ign file that is in the installation directory. This value is
the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==.
15 The type of AWS instance to use for the compute machines.
16 Allowed values:
m4.large
m4.xlarge
m4.2xlarge
m4.4xlarge
m4.8xlarge
m4.10xlarge
m4.16xlarge
c4.large
c4.xlarge
c4.2xlarge
c4.4xlarge
c4.8xlarge
r4.large
r4.xlarge
r4.2xlarge
r4.4xlarge
r4.8xlarge
r4.16xlarge
IMPORTANT
If m4 instance types are not available in your region, such as with eu-
west-3, use m5 types instead.
2. Copy the template from the CloudFormation template for worker machines section of this
topic and save it as a YAML file on your computer. This template describes the compute
machines that your cluster requires.
3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance
type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.
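4. Launch the CloudFormation template to create a stack of AWS resources that represent one worker node. A typical AWS CLI invocation, shown as a sketch, looks like the following; enter it on a single line:
$ aws cloudformation create-stack --stack-name <name> 1
--template-body file://<template>.yaml 2
--parameters file://<parameters>.json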
IMPORTANT
1 <name> is the name for the CloudFormation stack, such as cluster-workers. You
need the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML
file that you saved.
5. Continue to create worker stacks until you have created enough worker machines for your
cluster.
IMPORTANT
You must create at least two worker machines, so you must create at least two
stacks that use this CloudFormation template.
You can use the following CloudFormation template to deploy the worker machines that you need for
your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the worker nodes.
Type: AWS::EC2::Image::Id
Subnet:
Description: The subnet, preferably private, to launch the worker nodes into.
Type: AWS::EC2::Subnet::Id
WorkerSecurityGroupId:
Description: The worker security group ID to associate with worker nodes.
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
Description: Ignition config file location.
Type: String
CertificateAuthorities:
Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
Description: Base64 encoded certificate authority string to use.
Type: String
WorkerInstanceProfileName:
Description: IAM profile to associate with worker nodes.
Type: String
WorkerInstanceType:
Default: m4.large
Type: String
AllowedValues:
- "m4.large"
- "m4.xlarge"
- "m4.2xlarge"
- "m4.4xlarge"
- "m4.8xlarge"
- "m4.10xlarge"
- "m4.16xlarge"
- "c4.large"
- "c4.xlarge"
- "c4.2xlarge"
- "c4.4xlarge"
- "c4.8xlarge"
- "r4.large"
- "r4.xlarge"
- "r4.2xlarge"
- "r4.4xlarge"
- "r4.8xlarge"
- "r4.16xlarge"
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- WorkerInstanceType
- RhcosAmi
- IgnitionLocation
- CertificateAuthorities
- WorkerSecurityGroupId
- WorkerInstanceProfileName
- Label:
default: "Network Configuration"
Parameters:
- Subnet
ParameterLabels:
Subnet:
default: "Subnet"
InfrastructureName:
default: "Infrastructure Name"
WorkerInstanceType:
default: "Worker Instance Type"
WorkerInstanceProfileName:
default: "Worker Instance Profile Name"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
IgnitionLocation:
default: "Worker Ignition Source"
CertificateAuthorities:
default: "Ignition CA String"
WorkerSecurityGroupId:
default: "Worker Security Group ID"
Resources:
Worker0:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref WorkerInstanceProfileName
InstanceType: !Ref WorkerInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "WorkerSecurityGroupId"
SubnetId: !Ref "Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":
{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":
{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
Outputs:
PrivateIP:
Description: The compute node private IP address.
Value: !GetAtt Worker0.PrivateIp
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.4. Download and install the new version of oc.
Procedure
1. From the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site, navigate
to the page for your installation type and click Download Command-line Tools.
2. Click the folder for your operating system and architecture and click the compressed file.
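3. Unpack the archive and place the oc binary in a directory that is on your PATH. For example, on Linux or macOS (a sketch; the archive name varies by release and platform):
$ tar xvzf <file>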
NOTE
$ oc <command>
Prerequisites
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
Prerequisites
Procedure
$ oc get nodes
2. Review the pending certificate signing requests (CSRs) and ensure that you see a client and
server request with Pending or Approved status for each machine that you added to the
cluster:
$ oc get csr
In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.
3. If the CSRs were not approved automatically, approve the CSRs for your cluster machines after
all of the pending CSRs for the machines that you added are in Pending status:
NOTE
Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After you approve the initial
CSRs, the subsequent node client CSRs are automatically approved by the
cluster kube-controller-manager. You must implement a method of
automatically approving the kubelet serving certificate requests.
To approve them individually, run the following command for each valid CSR:
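For example, assuming a CSR name taken from the oc get csr output:
$ oc adm certificate approve <csr_name>
To approve all pending CSRs at once, one option is:
$ oc get csr -o name | xargs oc adm certificate approve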
Prerequisites
Procedure
Amazon Web Services provides default storage, which means that the Image Registry Operator is
available after installation. However, if the Registry Operator cannot create an S3 bucket and
automatically configure storage, you must manually configure registry storage.
Instructions for both configuring a PersistentVolume, which is required for production clusters, and for
configuring an empty directory as the storage location, which is available for only non-production
clusters, are shown.
During installation, your cloud credentials are sufficient to create an S3 bucket and the Registry
Operator will automatically configure storage.
If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can
create an S3 bucket and configure storage with the following procedure.
Prerequisites
REGISTRY_STORAGE_S3_ACCESSKEY
REGISTRY_STORAGE_S3_SECRETKEY
Procedure
Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically
configure storage.
1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.
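One way to do this with the AWS CLI, shown as a sketch (the rule ID is an arbitrary name and the bucket name is your registry bucket):
$ aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> --lifecycle-configuration '{"Rules":[{"ID":"cleanup-incomplete-multipart","Status":"Enabled","Filter":{"Prefix":""},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":1}}]}'
2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster: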
$ oc edit configs.imageregistry.operator.openshift.io/cluster
storage:
s3:
bucket: <bucket-name>
region: <region-name>
WARNING
To secure your registry images in AWS, block public access to the S3 bucket.
You must configure storage for the Image Registry Operator. For non-production clusters, you can set
the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
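To set the image registry storage to an empty directory, the standard patch, shown as a sketch, is:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'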
WARNING
If you run this command before the Image Registry Operator initializes its components, the oc
patch command fails with the following error:
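Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found
In that case, wait a few minutes and run the command again.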
Prerequisites
Procedure
1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack :
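For example, with the AWS CLI (a sketch; <name> is the name of your bootstrap stack):
$ aws cloudformation delete-stack --stack-name <name>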
Prerequisites
You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that
uses infrastructure that you provisioned.
Download the AWS CLI and install it on your computer. See Install the AWS CLI Using the
Bundled Installer (Linux, macOS, or Unix).
Procedure
To create specific records, you must create a record for each route that your cluster uses, as
shown in the output of the following command:
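For example, the following sketch uses a JSONPath template to print every route host:
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes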
2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address
that it uses, which is shown in the EXTERNAL-IP column:
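For example:
$ oc -n openshift-ingress get service router-default
3. Locate the hosted zone ID for the load balancer. One way, shown as a sketch that assumes the
jq utility is installed, is:
$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID'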
1 For <external_ip>, specify the value of the external IP address of the Ingress Operator
load balancer that you obtained.
The output of this command is the load balancer hosted zone ID:
Z3AADJGX6KTTL2
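4. Locate the public hosted zone for your cluster's domain. For example, with the AWS CLI (a sketch):
$ aws route53 list-hosted-zones-by-name --dns-name "<domain_name>" --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' --output text 1 2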
/hostedzone/Z3URY6TWQ91KVV
1 2 For <domain_name>, specify the Route53 base domain for your OpenShift Container
Platform cluster.
The public hosted zone ID for your domain is shown in the command output. In this example, it is
Z3URY6TWQ91KVV.
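5. Add the alias records to your private zone. The following sketch shows one way with the AWS CLI; \\052 is the escaped form of the * wildcard, so the record covers *.apps:
$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{
> "Changes": [{ "Action": "CREATE", "ResourceRecordSet": {
> "Name": "\\052.apps.<cluster_domain>", "Type": "A",
> "AliasTarget": { "HostedZoneId": "<hosted_zone_id>", "DNSName": "<external_ip>.", "EvaluateTargetHealth": false } } }]
> }'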
1 For <private_hosted_zone_id>, specify the value from the output of the CloudFormation
template for DNS and load balancing.
2 For <cluster_domain>, specify the domain or subdomain that you use with your
OpenShift Container Platform cluster.
3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you
obtained.
4 For <external_ip>, specify the value of the external IP address of the Ingress Operator
load balancer. Ensure that you include the trailing period (.) in this parameter value.
6. Add the same alias records to the public zone for your cluster's domain, for example:
$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{
> "Changes": [{ "Action": "CREATE", "ResourceRecordSet": {
> "Name": "\\052.apps.<cluster_domain>", "Type": "A",
> "AliasTarget": { "HostedZoneId": "<hosted_zone_id>", "DNSName": "<external_ip>.", "EvaluateTargetHealth": false } } }]
> }'
1 For <public_hosted_zone_id>, specify the public hosted zone for your domain.
2 For <cluster_domain>, specify the domain or subdomain that you use with your
OpenShift Container Platform cluster.
3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you
obtained.
4 For <external_ip>, specify the value of the external IP address of the Ingress Operator
load balancer. Ensure that you include the trailing period (.) in this parameter value.
Prerequisites
Removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned
AWS infrastructure.
Procedure
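1. Complete the installation by monitoring for the install-complete target. The standard installer invocation, shown as a sketch, is:
$ ./openshift-install --dir=<installation_directory> wait-for install-complete 1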
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must keep the cluster running for 24
hours in a non-degraded state to ensure that the first certificate rotation has
finished.
Next steps
IMPORTANT
While you can install an OpenShift Container Platform cluster by using mirrored
installation release content, your cluster still requires internet access to use the AWS
APIs.
One way to create this infrastructure is to use the provided CloudFormation templates. You can modify
the templates to customize your infrastructure or use the information that they contain to create AWS
objects according to your company’s policies.
Prerequisites
Create a mirror registry on your bastion host and obtain the imageContentSources data for
your version of OpenShift Container Platform.
IMPORTANT
Because the installation media is on the bastion host, use that computer to
complete all installation steps.
Review details about the OpenShift Container Platform installation and update processes.
IMPORTANT
If you have an AWS profile stored on your computer, it must not use a temporary
session token that you generated while using a multi-factor authentication
device. The cluster continues to use your current AWS credentials to create AWS
resources for the entire life of the cluster, so you must use key-based, long-lived
credentials. To generate appropriate keys, see Managing Access Keys for IAM
Users in the AWS documentation. You can supply the keys when you run the
installation program.
Download the AWS CLI and install it on your computer. See Install the AWS CLI Using the
Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.
If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites
that your cluster requires access to.
NOTE
Be sure to also review this site list if you are configuring a proxy.
In OpenShift Container Platform 4.4, you can perform an installation that does not require an active
connection to the internet to obtain software components. You complete an installation in a restricted
network on only infrastructure that you provision, not infrastructure that the installation program
provisions, so your platform selection is limited.
If you choose to perform a restricted network installation on a cloud platform, you still require access to
its cloud APIs. Some cloud functions, like the Amazon Web Services IAM service, require internet access,
so you might still require internet access. Depending on your network, you might require less internet
access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the
OpenShift Container Platform registry and contains the installation media. You can create this mirror on
a bastion host, which can access both the internet and your closed network, or by using other methods
that meet your restrictions.
IMPORTANT
Clusters in restricted networks have the following additional limitations and restrictions:
By default, you cannot use the contents of the Developer Catalog because you cannot access
the required ImageStreamTags.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management and entitlement. If the cluster has internet access and you
do not disable Telemetry, that service automatically entitles your cluster. If the Telemetry
service cannot entitle your cluster, you must manually entitle it on the Cluster registration page.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
internet access. Before you update the cluster, you update the content of the mirror
registry.
For more information about the integration testing for different platforms, see the OpenShift Container
Platform 4.x Tested Integrations page.
You can use the provided CloudFormation templates to create this infrastructure, you can manually
create the components, or you can reuse existing infrastructure that meets the cluster requirements.
Review the CloudFormation templates for more details about how the components interrelate.
A bootstrap machine. This machine is required during installation, but you can remove it after
your cluster deploys.
At least three control plane machines. The control plane machines are not governed by a
MachineSet.
Compute machines. You must create at least two compute machines, which are also known as
worker machines, during installation. These machines are not governed by a MachineSet.
You can use the following instance types for the cluster machines with the provided CloudFormation
templates.
IMPORTANT
If m4 instance types are not available in your region, such as with eu-west-3, use m5
types instead.
Instance type | Bootstrap | Control plane | Compute
i3.large | x | |
m4.large or m5.large | | | x
m4.xlarge or m5.xlarge | | x | x
m4.2xlarge | | x | x
m4.4xlarge | | x | x
m4.8xlarge | | x | x
m4.10xlarge | | x | x
m4.16xlarge | | x | x
c4.large | | | x
c4.xlarge | | | x
c4.2xlarge | | x | x
c4.4xlarge | | x | x
c4.8xlarge | | x | x
r4.large | | | x
r4.xlarge | | x | x
r4.2xlarge | | x | x
r4.4xlarge | | x | x
r4.8xlarge | | x | x
r4.16xlarge | | x | x
You might be able to use other instance types that meet the specifications of these instance types.
Because your cluster has limited access to automatic machine management when you use infrastructure
that you provision, you must provide a mechanism for approving cluster certificate signing requests
(CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The
machine-approver cannot guarantee the validity of a serving certificate that is requested by using
kubelet credentials because it cannot confirm that the correct machine issued the request. You must
determine and implement a method of verifying the validity of the kubelet serving certificate requests
and approving them.
A VPC
DNS entries
Security groups
IAM roles
S3 buckets
AWS::EC2::NatGateway
AWS::EC2::EIP
Port | Purpose
80 | Inbound HTTP traffic
22 | Inbound SSH traffic
0 - 65535 | Outbound ephemeral traffic
The cluster also requires load balancers and listeners for port 6443, which is required for the
Kubernetes API and its extensions, and port 22623, which is required for the Ignition config files for
new machines. The targets are the master nodes. Port 6443 must be accessible to both clients
external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the
cluster.
Component | AWS type | Description
etcd record sets | AWS::Route53::RecordSet | The registration records for etcd for your control plane machines.
Public load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your public subnets.
External API server record | AWS::Route53::RecordSetGroup | Alias records for the external API server.
External listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the external load balancer.
External target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the external load balancer.
Private load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your private subnets.
Internal API server record | AWS::Route53::RecordSetGroup | Alias records for the internal API server.
Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 22623 for the internal load balancer.
Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer.
Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the internal load balancer.
Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer.
Security groups
The control plane and worker machines require access to the following ports:
tcp 6443
tcp 22623
Worker Ingress
The worker machines require the following Ingress groups. Each Ingress group is an
AWS::EC2::SecurityGroupIngress resource.
You must grant the machines permissions in AWS. The provided CloudFormation templates grant the
machines permissions through the following AWS::IAM::Role objects and provide an
AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the
machines the following broad permissions or the following individual permissions.
Effect | Action | Resource
Allow | elasticloadbalancing:* | *
Allow | iam:PassRole | *
Allow | s3:GetObject | *
Allow | ec2:AttachVolume | *
Allow | ec2:DetachVolume | *
When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web
Services (AWS), you grant that user all of the required permissions. To deploy all components of an
OpenShift Container Platform cluster, the IAM user requires the following permissions:
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances
ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute
NOTE
If you use an existing VPC, your account does not require these permissions for creating
network resources.
elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener
iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole
route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment
s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration
s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging
autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
tag:GetResources
ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation
NOTE
If you use an existing VPC, your account does not require these permissions to delete
network resources.
iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
iam:GetUserPolicy
iam:ListAccessKeys
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload
NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
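For example, a standard ssh-keygen invocation that sets an empty passphrase:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1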
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
IMPORTANT
If you create a new SSH key pair, avoid overwriting existing SSH keys.
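2. Start the ssh-agent process as a background task, for example:
$ eval "$(ssh-agent -s)"
3. Add your SSH private key to the ssh-agent: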
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program. If you install a cluster on infrastructure that you provision, you must provide this key to
your cluster’s machines.
Generate and customize the installation configuration file that the installation program needs to deploy
your cluster.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster. For a restricted network installation, these files are on your bastion host.
Procedure
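1. Create the install-config.yaml file. The standard installer invocation, shown as a sketch, is:
$ ./openshift-install create install-config --dir=<installation_directory> 1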
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
IMPORTANT
NOTE
iii. If you do not have an AWS profile stored on your computer, enter the AWS access key
ID and secret access key for the user that you configured to run the installation
program.
v. Select the base domain for the Route53 service that you configured for your cluster.
vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Edit the install-config.yaml file to set the number of compute replicas, which are also known as
worker replicas, to 0, as shown in the following compute stanza:
compute:
- hyperthreading: Enabled
name: worker
platform: {}
replicas: 0
3. Edit the install-config.yaml file to provide the additional information that is required for an
installation in a restricted network.
a. Update the pullSecret value to contain the authentication information for your registry:
For <bastion_host_name>, specify the registry domain name that you specified in the
certificate for your mirror registry, and for <credentials>, specify the base64-encoded user
name and password for your mirror registry.
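For example, the value takes a form similar to the following sketch (the email address is illustrative):
pullSecret: '{"auths":{"<bastion_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'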
b. Add the additionalTrustBundle parameter and value. The value must be the contents of
the certificate file that you used for your mirror registry, which can be an existing, trusted
certificate authority or the self-signed certificate that you generated for the mirror registry.
additionalTrustBundle: |
-----BEGIN CERTIFICATE-----
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-----END CERTIFICATE-----
imageContentSources:
- mirrors:
- <bastion_host_name>:5000/<repo_name>/release
source: quay.io/openshift-release-dev/ocp-release
- mirrors:
- <bastion_host_name>:5000/<repo_name>/release
source: registry.svc.ci.openshift.org/ocp/release
Use the imageContentSources values from the output of the command that you ran to mirror
the repository.
IMPORTANT
Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS
proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by
configuring the proxy settings in the install-config.yaml file.
Prerequisites
Review the sites that your cluster requires access to and determine whether any need to bypass
the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider
APIs. Add sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.
NOTE
Procedure
1. Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> 1
httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
noProxy: example.com 3
additionalTrustBundle: | 4
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme
must be http. If you use an MITM transparent proxy network that does not require
additional proxy configuration but requires additional CAs, you must not specify an
httpProxy value.
2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not
specified, then httpProxy is used for both HTTP and HTTPS connections. The URL
scheme must be http; https is currently not supported. If you use an MITM transparent
proxy network that does not require additional proxy configuration but requires additional
CAs, you must not specify an httpsProxy value.
NOTE
The installation program does not support the proxy readinessEndpoints field.
2. Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings
in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still
created, but it will have a nil spec.
NOTE
Only the Proxy object named cluster is supported, and no additional proxies can be
created.
Because you must modify some cluster definition files and manually start the cluster machines, you must
generate the Kubernetes manifest and Ignition config files that the cluster needs to configure its machines.
IMPORTANT
The Ignition config files that the installation program generates contain certificates that
expire after 24 hours. You must complete your cluster installation and keep the cluster
running for 24 hours in a non-degraded state to ensure that the first certificate rotation
has finished.
Prerequisites
Obtain the OpenShift Container Platform installation program. For a restricted network
installation, these files are on your bastion host.
Procedure
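1. Generate the Kubernetes manifests that define the cluster objects. The standard installer invocation, shown as a sketch, is:
$ ./openshift-install create manifests --dir=<installation_directory> 1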
1 For <installation_directory>, specify the installation directory that contains the install-
config.yaml file you created.
Because you create your own compute machines later in the installation process, you can safely
ignore this warning.
2. Remove the Kubernetes manifest files that define the control plane machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
By removing these files, you prevent the cluster from automatically generating control plane
machines.
3. Remove the Kubernetes manifest files that define the worker machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage the worker machines yourself, you do not need to initialize
these machines.
NOTE
5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove
the privateZone and publicZone sections from the
<installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
creationTimestamp: null
name: cluster
spec:
baseDomain: example.openshift.com
privateZone: 1
id: mycluster-100419-private-zone
publicZone: 2
id: example.openshift.com
status: {}
If you do so, you must add ingress DNS records manually in a later step.
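6. Create the Ignition config files. The standard installer invocation, shown as a sketch, is:
$ ./openshift-install create ignition-configs --dir=<installation_directory>
The following files are generated in the directory: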
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
To extract and view the infrastructure name from the Ignition config file metadata, run the
following command:
$ jq -r .infraID /<installation_directory>/metadata.json 1
openshift-vw9j6 2
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2 The output of this command is your cluster name and a random string.
NOTE
If you do not use the provided CloudFormation template to create your AWS
infrastructure, you must review the provided information and manually create the
infrastructure. If your cluster does not initialize correctly, you might have to contact Red
Hat support with your installation logs.
Prerequisites
Procedure
1. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "VpcCidr", 1
"ParameterValue": "10.0.0.0/16" 2
},
{
"ParameterKey": "AvailabilityZoneCount", 3
"ParameterValue": "1" 4
},
{
"ParameterKey": "SubnetBits", 5
"ParameterValue": "12" 6
}
]
2. Copy the template from the CloudFormation template for the VPC section of this topic and
save it as a YAML file on your computer. This template describes the VPC that your cluster
requires.
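3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC. A typical AWS CLI invocation, shown as a sketch, looks like the following; enter it on a single line:
$ aws cloudformation create-stack --stack-name <name> 1
--template-body file://<template>.yaml 2
--parameters file://<parameters>.json 3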
IMPORTANT
1 <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the
name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
After the StackStatus displays CREATE_COMPLETE, the output displays values for the
following parameters. You must provide these parameter values to the other CloudFormation
templates that you run to create your cluster:
You can use the following CloudFormation template to deploy the VPC that you need for your
OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs
Parameters:
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4]
[0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
AvailabilityZoneCount:
ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
MinValue: 1
MaxValue: 3
Default: 1
Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
Type: Number
SubnetBits:
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
MinValue: 5
MaxValue: 13
Default: 12
Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 =
/19)"
Type: Number
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Network Configuration"
Parameters:
- VpcCidr
- SubnetBits
- Label:
default: "Availability Zones"
Parameters:
- AvailabilityZoneCount
ParameterLabels:
AvailabilityZoneCount:
default: "Availability Zone Count"
VpcCidr:
default: "VPC CIDR"
SubnetBits:
default: "Bits Per Subnet"
Conditions:
DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]
Resources:
VPC:
Type: "AWS::EC2::VPC"
Properties:
EnableDnsSupport: "true"
EnableDnsHostnames: "true"
CidrBlock: !Ref VpcCidr
PublicSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
InternetGateway:
Type: "AWS::EC2::InternetGateway"
GatewayToInternet:
Type: "AWS::EC2::VPCGatewayAttachment"
Properties:
VpcId: !Ref VPC
InternetGatewayId: !Ref InternetGateway
PublicRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PublicRoute:
Type: "AWS::EC2::Route"
DependsOn: GatewayToInternet
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
PublicSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PublicSubnet2
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation3:
Condition: DoAz3
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet3
RouteTableId: !Ref PublicRouteTable
PrivateSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PrivateSubnet
RouteTableId: !Ref PrivateRouteTable
NAT:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Properties:
AllocationId:
"Fn::GetAtt":
- EIP
- AllocationId
SubnetId: !Ref PublicSubnet
EIP:
Type: "AWS::EC2::EIP"
Properties:
Domain: vpc
Route:
Type: "AWS::EC2::Route"
Properties:
RouteTableId:
Ref: PrivateRouteTable
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT
PrivateSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable2:
Type: "AWS::EC2::RouteTable"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PrivateSubnet2
RouteTableId: !Ref PrivateRouteTable2
NAT2:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz2
Properties:
AllocationId:
"Fn::GetAtt":
- EIP2
- AllocationId
SubnetId: !Ref PublicSubnet2
EIP2:
Type: "AWS::EC2::EIP"
Condition: DoAz2
Properties:
Domain: vpc
Route2:
Type: "AWS::EC2::Route"
Condition: DoAz2
Properties:
RouteTableId:
Ref: PrivateRouteTable2
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT2
PrivateSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable3:
Type: "AWS::EC2::RouteTable"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation3:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz3
Properties:
SubnetId: !Ref PrivateSubnet3
RouteTableId: !Ref PrivateRouteTable3
NAT3:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz3
Properties:
AllocationId:
"Fn::GetAtt":
- EIP3
- AllocationId
SubnetId: !Ref PublicSubnet3
EIP3:
Type: "AWS::EC2::EIP"
Condition: DoAz3
Properties:
Domain: vpc
Route3:
Type: "AWS::EC2::Route"
Condition: DoAz3
Properties:
RouteTableId:
Ref: PrivateRouteTable3
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT3
S3Endpoint:
Type: AWS::EC2::VPCEndpoint
Properties:
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal: '*'
Action:
- '*'
Resource:
- '*'
RouteTableIds:
- !Ref PublicRouteTable
- !Ref PrivateRouteTable
- !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
- !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
ServiceName: !Join
- ''
- - com.amazonaws.
- !Ref 'AWS::Region'
- .s3
VpcId: !Ref VPC
Outputs:
VpcId:
Description: ID of the new VPC.
Value: !Ref VPC
PublicSubnetIds:
Description: Subnet IDs of the public subnets.
Value:
!Join [
",",
[!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref
PublicSubnet3, !Ref "AWS::NoValue"]]
]
PrivateSubnetIds:
Description: Subnet IDs of the private subnets.
Value:
!Join [
",",
[!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref
PrivateSubnet3, !Ref "AWS::NoValue"]]
]
You can run the template multiple times within a single VPC.
NOTE
If you do not use the provided CloudFormation template to create your AWS
infrastructure, you must review the provided information and manually create the
infrastructure. If your cluster does not initialize correctly, you might have to contact Red
Hat support with your installation logs.
Prerequisites
Procedure
1. Obtain the Hosted Zone ID for the Route53 zone that you specified in the install-config.yaml
file for your cluster. You can obtain this ID from the AWS console or by running the following
command:
IMPORTANT
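One way to query the ID with the AWS CLI, shown as a sketch:
$ aws route53 list-hosted-zones-by-name --dns-name "<route53_domain>" --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<route53_domain>.`].Id' --output text 1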
1 For the <route53_domain>, specify the Route53 base domain that you used when you
generated the install-config.yaml file for the cluster.
2. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "ClusterName", 1
"ParameterValue": "mycluster" 2
},
{
"ParameterKey": "InfrastructureName", 3
"ParameterValue": "mycluster-<random_string>" 4
},
{
"ParameterKey": "HostedZoneId", 5
"ParameterValue": "<random_string>" 6
},
{
"ParameterKey": "HostedZoneName", 7
"ParameterValue": "example.com" 8
},
{
"ParameterKey": "PublicSubnets", 9
"ParameterValue": "subnet-<random_string>" 10
},
{
"ParameterKey": "PrivateSubnets", 11
"ParameterValue": "subnet-<random_string>" 12
},
{
"ParameterKey": "VpcId", 13
"ParameterValue": "vpc-<random_string>" 14
}
]
2 Specify the cluster name that you used when you generated the install-config.yaml file
for the cluster.
3 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
4 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
6 Specify the Route53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You
can obtain this value from the AWS console.
8 Specify the Route53 base domain that you used when you generated the install-
config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the
AWS console.
10 Specify the PublicSubnetIds value from the output of the CloudFormation template for
the VPC.
12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for
the VPC.
14 Specify the VpcId value from the output of the CloudFormation template for the VPC.
3. Copy the template from the CloudFormation template for the network and load balancers
section of this topic and save it as a YAML file on your computer. This template describes the
networking and load balancing objects that your cluster requires.
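4. Launch the CloudFormation template to create a stack of AWS resources. A typical AWS CLI invocation, shown as a sketch, looks like the following; enter it on a single line, and note that because the template creates IAM resources, a capability acknowledgement such as --capabilities CAPABILITY_NAMED_IAM is required:
$ aws cloudformation create-stack --stack-name <name> 1
--template-body file://<template>.yaml 2
--parameters file://<parameters>.json 3
--capabilities CAPABILITY_NAMED_IAM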
IMPORTANT
1 <name> is the name for the CloudFormation stack, such as cluster-dns. You need the
name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
After the StackStatus displays CREATE_COMPLETE, the output displays values for the
following parameters. You must provide these parameter values to the other CloudFormation
templates that you run to create your cluster:
RegisterNlbIpTargetsLambda | Lambda ARN useful to help register/deregister IP targets for these load balancers.
You can use the following CloudFormation template to deploy the networking objects and load
balancers that you need for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)
Parameters:
ClusterName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, representative cluster name to use for host names and other identifying
names.
Type: String
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or
used by the cluster.
Type: String
HostedZoneId:
Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
Type: String
HostedZoneName:
Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing
period.
Type: String
Default: "example.com"
PublicSubnets:
Description: The internet-facing subnets.
Type: List<AWS::EC2::Subnet::Id>
PrivateSubnets:
Description: The internal subnets.
Type: List<AWS::EC2::Subnet::Id>
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- ClusterName
- InfrastructureName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- PublicSubnets
- PrivateSubnets
- Label:
default: "DNS"
Parameters:
- HostedZoneName
- HostedZoneId
ParameterLabels:
ClusterName:
default: "Cluster Name"
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
PublicSubnets:
default: "Public Subnets"
PrivateSubnets:
default: "Private Subnets"
HostedZoneName:
default: "Public Hosted Zone Name"
HostedZoneId:
default: "Public Hosted Zone ID"
Resources:
ExtApiElb:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
IpAddressType: ipv4
Subnets: !Ref PublicSubnets
Type: network
IntApiElb:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: !Join ["-", [!Ref InfrastructureName, "int"]]
Scheme: internal
IpAddressType: ipv4
Subnets: !Ref PrivateSubnets
Type: network
IntDns:
Type: "AWS::Route53::HostedZone"
Properties:
HostedZoneConfig:
Comment: "Managed by CloudFormation"
Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
HostedZoneTags:
- Key: Name
Value: !Join ["-", [!Ref InfrastructureName, "int"]]
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "owned"
VPCs:
- VPCId: !Ref VpcId
VPCRegion: !Ref "AWS::Region"
ExternalApiServerRecord:
Type: AWS::Route53::RecordSetGroup
Properties:
Comment: Alias record for the API server
HostedZoneId: !Ref HostedZoneId
RecordSets:
- Name:
!Join [
".",
["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
DNSName: !GetAtt ExtApiElb.DNSName
InternalApiServerRecord:
Type: AWS::Route53::RecordSetGroup
Properties:
Comment: Alias record for the API server
HostedZoneId: !Ref IntDns
RecordSets:
- Name:
!Join [
".",
["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
DNSName: !GetAtt IntApiElb.DNSName
- Name:
!Join [
".",
["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
DNSName: !GetAtt IntApiElb.DNSName
ExternalApiListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: ExternalApiTargetGroup
LoadBalancerArn:
Ref: ExtApiElb
Port: 6443
Protocol: TCP
ExternalApiTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Port: 6443
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
InternalApiListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: InternalApiTargetGroup
LoadBalancerArn:
Ref: IntApiElb
Port: 6443
Protocol: TCP
InternalApiTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Port: 6443
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
InternalServiceInternalListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: InternalServiceTargetGroup
LoadBalancerArn:
Ref: IntApiElb
Port: 22623
Protocol: TCP
InternalServiceTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Port: 22623
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
RegisterTargetLambdaIamRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "lambda.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref InternalApiTargetGroup
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref InternalServiceTargetGroup
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref ExternalApiTargetGroup
RegisterNlbIpTargets:
Type: "AWS::Lambda::Function"
Properties:
Handler: "index.handler"
Role:
Fn::GetAtt:
- "RegisterTargetLambdaIamRole"
- "Arn"
Code:
ZipFile: |
  import json
  import boto3
  import cfnresponse
  def handler(event, context):
    elb = boto3.client('elbv2')
    if event['RequestType'] == 'Delete':
      elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
    elif event['RequestType'] == 'Create':
      elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
    responseData = {}
    cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
Runtime: "python3.7"
Timeout: 120
RegisterSubnetTagsLambdaIamRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "lambda.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
[
"ec2:DeleteTags",
"ec2:CreateTags"
]
Resource: "arn:aws:ec2:*:*:subnet/*"
- Effect: "Allow"
Action:
[
"ec2:DescribeSubnets",
"ec2:DescribeTags"
]
Resource: "*"
RegisterSubnetTags:
Type: "AWS::Lambda::Function"
Properties:
Handler: "index.handler"
Role:
Fn::GetAtt:
- "RegisterSubnetTagsLambdaIamRole"
- "Arn"
Code:
ZipFile: |
  import json
  import boto3
  import cfnresponse
  def handler(event, context):
    ec2_client = boto3.client('ec2')
    if event['RequestType'] == 'Delete':
      for subnet_id in event['ResourceProperties']['Subnets']:
        ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}])
    elif event['RequestType'] == 'Create':
      for subnet_id in event['ResourceProperties']['Subnets']:
        # Tag each subnet as shared for the cluster, mirroring the Delete branch above
        ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}])
    responseData = {}
    cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName'])
Runtime: "python3.7"
Timeout: 120
RegisterPublicSubnetTags:
Type: Custom::SubnetRegister
Properties:
ServiceToken: !GetAtt RegisterSubnetTags.Arn
InfrastructureName: !Ref InfrastructureName
Subnets: !Ref PublicSubnets
RegisterPrivateSubnetTags:
Type: Custom::SubnetRegister
Properties:
ServiceToken: !GetAtt RegisterSubnetTags.Arn
InfrastructureName: !Ref InfrastructureName
Subnets: !Ref PrivateSubnets
Outputs:
PrivateHostedZoneId:
Description: Hosted zone ID for the private DNS, which is required for private records.
Value: !Ref IntDns
ExternalApiLoadBalancerName:
Description: Full name of the External API load balancer created.
Value: !GetAtt ExtApiElb.LoadBalancerFullName
InternalApiLoadBalancerName:
Description: Full name of the Internal API load balancer created.
Value: !GetAtt IntApiElb.LoadBalancerFullName
ApiServerDnsName:
Description: Full hostname of the API server, which is required for the Ignition config files.
Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
RegisterNlbIpTargetsLambda:
Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
Value: !GetAtt RegisterNlbIpTargets.Arn
ExternalApiTargetGroupArn:
Description: ARN of External API target group.
Value: !Ref ExternalApiTargetGroup
InternalApiTargetGroupArn:
Description: ARN of Internal API target group.
Value: !Ref InternalApiTargetGroup
InternalServiceTargetGroupArn:
Description: ARN of internal service target group.
Value: !Ref InternalServiceTargetGroup
NOTE
If you do not use the provided CloudFormation template to create your AWS
infrastructure, you must review the provided information and manually create the
infrastructure. If your cluster does not initialize correctly, you might have to contact Red
Hat support with your installation logs.
Prerequisites
Procedure
1. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "InfrastructureName", 1
"ParameterValue": "mycluster-<random_string>" 2
},
{
"ParameterKey": "VpcCidr", 3
"ParameterValue": "10.0.0.0/16" 4
},
{
"ParameterKey": "PrivateSubnets", 5
"ParameterValue": "subnet-<random_string>" 6
},
{
"ParameterKey": "VpcId", 7
"ParameterValue": "vpc-<random_string>" 8
}
]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
4 Specify the CIDR block parameter that you used for the VPC that you defined in the form
x.x.x.x/16-24.
6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for
the VPC.
8 Specify the VpcId value from the output of the CloudFormation template for the VPC.
2. Copy the template from the CloudFormation template for security objects section of this
topic and save it as a YAML file on your computer. This template describes the security groups
and roles that your cluster requires.
IMPORTANT
1 <name> is the name for the CloudFormation stack, such as cluster-sec. You need the
name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
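As with the previous stack, a launch command of the following form works; CAPABILITY_NAMED_IAM is included because the template creates IAM roles:
$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json \ 3
     --capabilities CAPABILITY_NAMED_IAM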
After the StackStatus displays CREATE_COMPLETE, the output displays values for the
following parameters. You must provide these parameter values to the other CloudFormation
templates that you run to create your cluster:
You can use the following CloudFormation template to deploy the security objects that you need for
your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or
used by the cluster.
Type: String
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4]
[0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
PrivateSubnets:
Description: The internal subnets.
Type: List<AWS::EC2::Subnet::Id>
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- VpcCidr
- PrivateSubnets
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
VpcCidr:
default: "VPC CIDR"
PrivateSubnets:
default: "Private Subnets"
Resources:
MasterSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Master Security Group
SecurityGroupIngress:
- IpProtocol: icmp
FromPort: 0
ToPort: 0
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
ToPort: 6443
FromPort: 6443
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22623
ToPort: 22623
CidrIp: !Ref VpcCidr
VpcId: !Ref VpcId
WorkerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Worker Security Group
SecurityGroupIngress:
- IpProtocol: icmp
FromPort: 0
ToPort: 0
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref VpcCidr
VpcId: !Ref VpcId
MasterIngressEtcd:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: etcd
FromPort: 2379
ToPort: 2380
IpProtocol: tcp
MasterIngressVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
MasterIngressWorkerVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
MasterIngressInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
MasterIngressWorkerInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
MasterIngressKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes kubelet, scheduler and controller manager
FromPort: 10250
ToPort: 10259
IpProtocol: tcp
MasterIngressWorkerKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes kubelet, scheduler and controller manager
FromPort: 10250
ToPort: 10259
IpProtocol: tcp
MasterIngressIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
MasterIngressWorkerIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
WorkerIngressVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
WorkerIngressWorkerVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
WorkerIngressInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
WorkerIngressWorkerInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
WorkerIngressKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes secure kubelet port
FromPort: 10250
ToPort: 10250
IpProtocol: tcp
WorkerIngressWorkerKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal Kubernetes communication
FromPort: 10250
ToPort: 10250
IpProtocol: tcp
WorkerIngressIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
WorkerIngressWorkerIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
MasterIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: "ec2:*"
Resource: "*"
- Effect: "Allow"
Action: "elasticloadbalancing:*"
Resource: "*"
- Effect: "Allow"
Action: "iam:PassRole"
Resource: "*"
- Effect: "Allow"
Action: "s3:GetObject"
Resource: "*"
MasterInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles:
- Ref: "MasterIamRole"
WorkerIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: "ec2:Describe*"
Resource: "*"
WorkerInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles:
- Ref: "WorkerIamRole"
Outputs:
MasterSecurityGroupId:
Description: Master Security Group ID
Value: !GetAtt MasterSecurityGroup.GroupId
WorkerSecurityGroupId:
Description: Worker Security Group ID
Value: !GetAtt WorkerSecurityGroup.GroupId
MasterInstanceProfile:
Description: Master IAM Instance Profile
Value: !Ref MasterInstanceProfile
WorkerInstanceProfile:
Description: Worker IAM Instance Profile
Value: !Ref WorkerInstanceProfile
Valid Red Hat Enterprise Linux CoreOS (RHCOS) AMIs exist for each supported AWS region. Use the AMI for your region:

AWS region        AWS AMI
ap-northeast-1    ami-05f59cf6db1d591fe
ap-northeast-2    ami-06a06d31eefbb25c4
ap-south-1        ami-0247a9f45f1917aaa
ap-southeast-1    ami-0b628e07d986a6c36
ap-southeast-2    ami-0bdd5c426d91caf8e
ca-central-1      ami-0c6c7ce738fe5112b
eu-central-1      ami-0a8b58b4be8846e83
eu-north-1        ami-04e659bd9575cea3d
eu-west-1         ami-0d2e5d86e80ef2bd4
eu-west-2         ami-0a27424b3eb592b4d
eu-west-3         ami-0a8cb038a6e583bfa
me-south-1        ami-0c9d86eb9d0acee5d
sa-east-1         ami-0d020f4ea19dbc7fa
us-east-1         ami-0543fbfb4749f3c3b
us-east-2         ami-070c6257b10036038
us-west-1         ami-02b6556210798d665
us-west-2         ami-0409b2cebfc3ac3d0
NOTE
If you do not use the provided CloudFormation template to create your bootstrap node,
you must review the provided information and manually create the infrastructure. If your
cluster does not initialize correctly, you might have to contact Red Hat support with your
installation logs.
Prerequisites
Procedure
1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is
located in your installation directory. One way to do this is to create an S3 bucket in your
cluster’s region and upload the Ignition config file to it.
IMPORTANT
The provided CloudFormation template assumes that the Ignition config files
for your cluster are served from an S3 bucket. If you choose to serve the files
from another location, you must modify the templates.
NOTE
The bootstrap Ignition config file contains secrets, such as X.509 keys. The
following steps provide basic security for the S3 bucket. To provide additional
security, you can enable an S3 bucket policy to allow only certain users, such as
the OpenShift IAM user, to access objects that the bucket contains. You can
avoid S3 entirely and serve your bootstrap Ignition config file from any address
that the bootstrap machine can reach.
$ aws s3 mb s3://<cluster-name>-infra 1
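To upload the bootstrap.ign file from your installation directory to the bucket, a command of the following form works; the listing command that follows verifies the upload:
$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign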
$ aws s3 ls s3://<cluster-name>-infra/
2. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "InfrastructureName", 1
"ParameterValue": "mycluster-<random_string>" 2
},
{
"ParameterKey": "RhcosAmi", 3
"ParameterValue": "ami-<random_string>" 4
},
{
"ParameterKey": "AllowedBootstrapSshCidr", 5
"ParameterValue": "0.0.0.0/0" 6
},
{
"ParameterKey": "PublicSubnet", 7
"ParameterValue": "subnet-<random_string>" 8
},
{
"ParameterKey": "MasterSecurityGroupId", 9
"ParameterValue": "sg-<random_string>" 10
},
{
"ParameterKey": "VpcId", 11
"ParameterValue": "vpc-<random_string>" 12
},
{
"ParameterKey": "BootstrapIgnitionLocation", 13
"ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
},
{
"ParameterKey": "AutoRegisterELB", 15
"ParameterValue": "yes" 16
},
{
"ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
"ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:
<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
},
{
"ParameterKey": "ExternalApiTargetGroupArn", 19
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
},
{
"ParameterKey": "InternalApiTargetGroupArn", 21
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
},
{
"ParameterKey": "InternalServiceTargetGroupArn", 23
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
}
]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.
7 The public subnet that is associated with your VPC to launch the bootstrap node into.
8 Specify the PublicSubnetIds value from the output of the CloudFormation template for
the VPC.
12 Specify the VpcId value from the output of the CloudFormation template for the VPC.
16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name
(ARN) value.
3. Copy the template from the CloudFormation template for the bootstrap machine section of
this topic and save it as a YAML file on your computer. This template describes the bootstrap
machine that your cluster requires.
IMPORTANT
1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need
the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
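A launch command of the following form works here as well; the bootstrap template creates an IAM role, so the capabilities flag is required:
$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json \ 3
     --capabilities CAPABILITY_NAMED_IAM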
After the StackStatus displays CREATE_COMPLETE, the output displays values for the
following parameters. You must provide these parameter values to the other CloudFormation
templates that you run to create your cluster:
You can use the following CloudFormation template to deploy the bootstrap machine that you need for
your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or
used by the cluster.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
Type: AWS::EC2::Image::Id
AllowedBootstrapSshCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4]
[0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
Default: 0.0.0.0/0
Description: CIDR block to allow SSH access to the bootstrap node.
Type: String
PublicSubnet:
Description: The public subnet to launch the bootstrap node into.
Type: AWS::EC2::Subnet::Id
MasterSecurityGroupId:
Description: The master security group ID for registering temporary rules.
Type: AWS::EC2::SecurityGroup::Id
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
BootstrapIgnitionLocation:
Default: s3://my-s3-bucket/bootstrap.ign
Description: Ignition config file location.
Type: String
AutoRegisterELB:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
Type: String
RegisterNlbIpTargetsLambdaArn:
Description: ARN for NLB IP target registration lambda.
Type: String
ExternalApiTargetGroupArn:
Description: ARN for external API load balancer target group.
Type: String
InternalApiTargetGroupArn:
Description: ARN for internal API load balancer target group.
Type: String
InternalServiceTargetGroupArn:
Description: ARN for internal service load balancer target group.
Type: String
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- RhcosAmi
- BootstrapIgnitionLocation
- MasterSecurityGroupId
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- AllowedBootstrapSshCidr
- PublicSubnet
- Label:
default: "Load Balancer Automation"
Parameters:
- AutoRegisterELB
- RegisterNlbIpTargetsLambdaArn
- ExternalApiTargetGroupArn
- InternalApiTargetGroupArn
- InternalServiceTargetGroupArn
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
AllowedBootstrapSshCidr:
default: "Allowed SSH Source"
PublicSubnet:
default: "Public Subnet"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
BootstrapIgnitionLocation:
default: "Bootstrap Ignition Source"
Conditions:
DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
Resources:
BootstrapIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: "ec2:Describe*"
Resource: "*"
- Effect: "Allow"
Action: "ec2:AttachVolume"
Resource: "*"
- Effect: "Allow"
Action: "ec2:DetachVolume"
Resource: "*"
- Effect: "Allow"
Action: "s3:GetObject"
Resource: "*"
BootstrapInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Path: "/"
Roles:
- Ref: "BootstrapIamRole"
BootstrapSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Bootstrap Security Group
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref AllowedBootstrapSshCidr
- IpProtocol: tcp
ToPort: 19531
FromPort: 19531
CidrIp: 0.0.0.0/0
VpcId: !Ref VpcId
BootstrapInstance:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
IamInstanceProfile: !Ref BootstrapInstanceProfile
InstanceType: "i3.large"
NetworkInterfaces:
- AssociatePublicIpAddress: "true"
DeviceIndex: "0"
GroupSet:
- !Ref "BootstrapSecurityGroup"
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "PublicSubnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"replace":{"source":"${S3Loc}","verification":{}}},"timeouts":
{},"version":"2.1.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
S3Loc: !Ref BootstrapIgnitionLocation
}
RegisterBootstrapApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
RegisterBootstrapInternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
RegisterBootstrapInternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
Outputs:
BootstrapInstanceId:
Description: Bootstrap Instance ID.
Value: !Ref BootstrapInstance
BootstrapPublicIp:
Description: The bootstrap node public IP address.
Value: !GetAtt BootstrapInstance.PublicIp
BootstrapPrivateIp:
Description: The bootstrap node private IP address.
Value: !GetAtt BootstrapInstance.PrivateIp
NOTE
If you do not use the provided CloudFormation template to create your control plane
nodes, you must review the provided information and manually create the infrastructure.
If your cluster does not initialize correctly, you might have to contact Red Hat support
with your installation logs.
Prerequisites
Procedure
1. Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "InfrastructureName", 1
"ParameterValue": "mycluster-<random_string>" 2
},
{
"ParameterKey": "RhcosAmi", 3
"ParameterValue": "ami-<random_string>" 4
},
{
"ParameterKey": "AutoRegisterDNS", 5
"ParameterValue": "yes" 6
},
{
"ParameterKey": "PrivateHostedZoneId", 7
"ParameterValue": "<random_string>" 8
},
{
"ParameterKey": "PrivateHostedZoneName", 9
"ParameterValue": "mycluster.example.com" 10
},
{
"ParameterKey": "Master0Subnet", 11
"ParameterValue": "subnet-<random_string>" 12
},
{
"ParameterKey": "Master1Subnet", 13
"ParameterValue": "subnet-<random_string>" 14
},
{
"ParameterKey": "Master2Subnet", 15
"ParameterValue": "subnet-<random_string>" 16
},
{
"ParameterKey": "MasterSecurityGroupId", 17
"ParameterValue": "sg-<random_string>" 18
},
{
"ParameterKey": "IgnitionLocation", 19
"ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master"
20
},
{
"ParameterKey": "CertificateAuthorities", 21
"ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22
},
{
"ParameterKey": "MasterInstanceProfileName", 23
"ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24
},
{
"ParameterKey": "MasterInstanceType", 25
"ParameterValue": "m4.xlarge" 26
},
{
"ParameterKey": "AutoRegisterELB", 27
"ParameterValue": "yes" 28
},
{
"ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29
"ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:
<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30
},
{
"ParameterKey": "ExternalApiTargetGroupArn", 31
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32
},
{
"ParameterKey": "InternalApiTargetGroupArn", 33
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34
},
{
"ParameterKey": "InternalServiceTargetGroupArn", 35
"ParameterValue": "arn:aws:elasticloadbalancing:<region>:
<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36
}
]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane
machines.
6 Specify yes or no. If you specify yes, you must provide Hosted Zone information.
8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template
for DNS and load balancing.
12 14 16 Specify a subnet from the PrivateSubnets value from the output of the
CloudFormation template for DNS and load balancing.
22 Specify the value from the master.ign file that is in the installation directory. This value is
the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==.
25 The type of AWS instance to use for the control plane machines.
26 Allowed values:
m4.xlarge
m4.2xlarge
m4.4xlarge
m4.8xlarge
m4.10xlarge
m4.16xlarge
c4.2xlarge
c4.4xlarge
c4.8xlarge
r4.xlarge
r4.2xlarge
r4.4xlarge
r4.8xlarge
r4.16xlarge
IMPORTANT
If m4 instance types are not available in your region, such as with eu-
west-3, specify an m5 type, such as m5.xlarge, instead.
28 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name
(ARN) value.
2. Copy the template from the CloudFormation template for control plane machines section of
this topic and save it as a YAML file on your computer. This template describes the control plane
machines that your cluster requires.
3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance
type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.
IMPORTANT
1 <name> is the name for the CloudFormation stack, such as cluster-control-plane. You
need the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file
that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON
file.
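A launch command of the following form works for this stack; no capabilities flag is needed because the template creates no IAM resources:
$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json 3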
You can use the following CloudFormation template to deploy the control plane machines that you need
for your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the control plane machines.
Type: AWS::EC2::Image::Id
AutoRegisterDNS:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone
information?
Type: String
PrivateHostedZoneId:
Description: The Route53 private zone ID to register the etcd targets with, such as
Z21IXYZABCZ2A4.
Type: String
PrivateHostedZoneName:
Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the
trailing period.
Type: String
Master0Subnet:
Description: The subnets, recommend private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
Master1Subnet:
Description: The subnets, recommend private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
Master2Subnet:
Description: The subnets, recommend private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
MasterSecurityGroupId:
Description: The master security group ID to associate with master nodes.
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
Description: Ignition config file location.
Type: String
CertificateAuthorities:
Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
Description: Base64 encoded certificate authority string to use.
Type: String
MasterInstanceProfileName:
Description: IAM profile to associate with master nodes.
Type: String
MasterInstanceType:
Default: m4.xlarge
Type: String
AllowedValues:
- "m4.xlarge"
- "m4.2xlarge"
- "m4.4xlarge"
- "m4.8xlarge"
- "m4.10xlarge"
- "m4.16xlarge"
- "c4.2xlarge"
- "c4.4xlarge"
- "c4.8xlarge"
- "r4.xlarge"
- "r4.2xlarge"
- "r4.4xlarge"
- "r4.8xlarge"
- "r4.16xlarge"
AutoRegisterELB:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
Type: String
RegisterNlbIpTargetsLambdaArn:
Description: ARN for NLB IP target registration lambda. Supply the value from the cluster
infrastructure or select "no" for AutoRegisterELB.
Type: String
ExternalApiTargetGroupArn:
Description: ARN for external API load balancer target group. Supply the value from the cluster
infrastructure or select "no" for AutoRegisterELB.
Type: String
InternalApiTargetGroupArn:
Description: ARN for internal API load balancer target group. Supply the value from the cluster
infrastructure or select "no" for AutoRegisterELB.
Type: String
InternalServiceTargetGroupArn:
Description: ARN for internal service load balancer target group. Supply the value from the cluster
infrastructure or select "no" for AutoRegisterELB.
Type: String
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- MasterInstanceType
- RhcosAmi
- IgnitionLocation
- CertificateAuthorities
- MasterSecurityGroupId
- MasterInstanceProfileName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- AllowedBootstrapSshCidr
- Master0Subnet
- Master1Subnet
- Master2Subnet
- Label:
default: "DNS"
Parameters:
- AutoRegisterDNS
- PrivateHostedZoneName
- PrivateHostedZoneId
- Label:
default: "Load Balancer Automation"
Parameters:
- AutoRegisterELB
- RegisterNlbIpTargetsLambdaArn
- ExternalApiTargetGroupArn
- InternalApiTargetGroupArn
- InternalServiceTargetGroupArn
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
Master0Subnet:
default: "Master-0 Subnet"
Master1Subnet:
default: "Master-1 Subnet"
Master2Subnet:
default: "Master-2 Subnet"
MasterInstanceType:
default: "Master Instance Type"
MasterInstanceProfileName:
default: "Master Instance Profile Name"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
BootstrapIgnitionLocation:
default: "Master Ignition Source"
CertificateAuthorities:
default: "Ignition CA String"
MasterSecurityGroupId:
default: "Master Security Group ID"
AutoRegisterDNS:
default: "Use Provided DNS Automation"
AutoRegisterELB:
default: "Use Provided ELB Automation"
PrivateHostedZoneName:
default: "Private Hosted Zone Name"
PrivateHostedZoneId:
default: "Private Hosted Zone ID"
Conditions:
DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
DoDns: !Equals ["yes", !Ref AutoRegisterDNS]
Resources:
Master0:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master0Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":
{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":
{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster0:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
RegisterMaster0InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
RegisterMaster0InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
Master1:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master1Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":
{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":
{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster1:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
RegisterMaster1InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
RegisterMaster1InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
Master2:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master2Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":
{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":
{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster2:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
RegisterMaster2InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
RegisterMaster2InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
EtcdSrvRecords:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]],
]
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]],
]
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]],
]
TTL: 60
Type: SRV
Etcd0Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master0.PrivateIp
TTL: 60
Type: A
Etcd1Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master1.PrivateIp
TTL: 60
Type: A
Etcd2Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master2.PrivateIp
TTL: 60
Type: A
Outputs:
PrivateIPs:
Description: The control-plane node private IP addresses.
Value:
!Join [
",",
[!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
]
After you create all of the required infrastructure in Amazon Web Services (AWS), you can install the
cluster.
Prerequisites
If you plan to manually manage the worker machines, create the worker machines.
Procedure
1. Change to the directory that contains the installation program and run the following command:
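A likely form of the command, assuming the openshift-install binary that you obtained earlier; the callout numbers match the descriptions below:
$ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ 1
    --log-level=info 2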
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2 To view different installation details, specify warn, debug, or error instead of info.
If the command exits without a FATAL warning, your production control plane has initialized.
You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. The easiest way to
manually create these nodes is to modify the provided CloudFormation template.
IMPORTANT
The CloudFormation template creates a stack that represents one worker machine. You
must create a stack for each worker machine.
NOTE
If you do not use the provided CloudFormation template to create your worker nodes,
you must review the provided information and manually create the infrastructure. If your
cluster does not initialize correctly, you might have to contact Red Hat support with your
installation logs.
Prerequisites
Procedure
1. Create a JSON file that contains the parameter values that the CloudFormation template
requires:
[
{
"ParameterKey": "InfrastructureName", 1
"ParameterValue": "mycluster-<random_string>" 2
},
{
"ParameterKey": "RhcosAmi", 3
"ParameterValue": "ami-<random_string>" 4
},
{
"ParameterKey": "Subnet", 5
"ParameterValue": "subnet-<random_string>" 6
},
{
"ParameterKey": "WorkerSecurityGroupId", 7
"ParameterValue": "sg-<random_string>" 8
},
{
"ParameterKey": "IgnitionLocation", 9
"ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker"
10
},
{
"ParameterKey": "CertificateAuthorities", 11
"ParameterValue": "" 12
},
{
"ParameterKey": "WorkerInstanceProfileName", 13
"ParameterValue": "" 14
},
{
"ParameterKey": "WorkerInstanceType", 15
"ParameterValue": "m4.large" 16
}
]
1 The name for your cluster infrastructure that is encoded in your Ignition config files for the
cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata,
which has the format <cluster-name>-<random-string>.
3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation
template for DNS and load balancing.
12 Specify the value from the worker.ign file that is in the installation directory. This value is
the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==.
15 The type of AWS instance to use for the compute machines.
16 Allowed values:
m4.large
m4.xlarge
m4.2xlarge
m4.4xlarge
m4.8xlarge
m4.10xlarge
m4.16xlarge
c4.large
c4.xlarge
c4.2xlarge
c4.4xlarge
c4.8xlarge
r4.large
r4.xlarge
r4.2xlarge
r4.4xlarge
r4.8xlarge
r4.16xlarge
IMPORTANT
If m4 instance types are not available in your region, such as with eu-
west-3, use m5 types instead.
2. Copy the template from the CloudFormation template for worker machines section of this
topic and save it as a YAML file on your computer. This template describes the worker
machines that your cluster requires.
3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance
type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.
IMPORTANT
1 <name> is the name for the CloudFormation stack, such as cluster-workers. You
need the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML
file that you saved.
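A launch command of the following form works for each worker stack:
$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json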
5. Continue to create worker stacks until you have created enough worker machines for your
cluster.
IMPORTANT
You must create at least two worker machines, so you must create at least two
stacks that use this CloudFormation template.
You can use the following CloudFormation template to deploy the worker machines that you need for
your OpenShift Container Platform cluster.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a
maximum of 27 characters.
Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the worker nodes.
Type: AWS::EC2::Image::Id
Subnet:
Description: The subnet, preferably private, to launch the worker nodes into.
Type: AWS::EC2::Subnet::Id
WorkerSecurityGroupId:
Description: The worker security group ID to associate with worker nodes.
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
Description: Ignition config file location.
Type: String
CertificateAuthorities:
Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
Description: Base64 encoded certificate authority string to use.
Type: String
WorkerInstanceProfileName:
Description: IAM profile to associate with worker nodes.
Type: String
WorkerInstanceType:
Default: m4.large
Type: String
AllowedValues:
- "m4.large"
- "m4.xlarge"
- "m4.2xlarge"
- "m4.4xlarge"
- "m4.8xlarge"
- "m4.10xlarge"
- "m4.16xlarge"
- "c4.large"
- "c4.xlarge"
- "c4.2xlarge"
- "c4.4xlarge"
- "c4.8xlarge"
- "r4.large"
- "r4.xlarge"
- "r4.2xlarge"
- "r4.4xlarge"
- "r4.8xlarge"
- "r4.16xlarge"
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- WorkerInstanceType
- RhcosAmi
- IgnitionLocation
- CertificateAuthorities
- WorkerSecurityGroupId
- WorkerInstanceProfileName
- Label:
default: "Network Configuration"
Parameters:
- Subnet
ParameterLabels:
Subnet:
default: "Subnet"
InfrastructureName:
default: "Infrastructure Name"
WorkerInstanceType:
default: "Worker Instance Type"
WorkerInstanceProfileName:
default: "Worker Instance Profile Name"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
IgnitionLocation:
default: "Worker Ignition Source"
CertificateAuthorities:
default: "Ignition CA String"
WorkerSecurityGroupId:
default: "Worker Security Group ID"
Resources:
Worker0:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref WorkerInstanceProfileName
InstanceType: !Ref WorkerInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "WorkerSecurityGroupId"
SubnetId: !Ref "Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"append":[{"source":"${SOURCE}","verification":{}}]},"security":{"tls":
{"certificateAuthorities":[{"source":"${CA_BUNDLE}","verification":{}}]}},"timeouts":
{},"version":"2.2.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
Outputs:
PrivateIP:
Description: The compute node private IP address.
Value: !GetAtt Worker0.PrivateIp
Prerequisites
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
Prerequisites
Procedure
$ oc get nodes
2. Review the pending certificate signing requests (CSRs) and ensure that you see a client and
server request with Pending or Approved status for each machine that you added to the
cluster:
$ oc get csr
In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.
3. If the CSRs were not approved, then after all of the pending CSRs for the machines that you added are in Pending status, approve the CSRs for your cluster machines:
NOTE
Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After you approve the initial
CSRs, the subsequent node client CSRs are automatically approved by the
cluster kube-controller-manager. You must implement a method of
automatically approving the kubelet serving certificate requests.
To approve them individually, run the following command for each valid CSR:
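For example, where <csr_name> is the name of a CSR from the oc get csr output:
$ oc adm certificate approve <csr_name>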
Prerequisites
Procedure
Amazon Web Services provides default storage, which means that the Image Registry Operator is available after installation. However, if the Image Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.
Instructions are shown for configuring a persistent volume, which is required for production clusters, and for configuring an empty directory as the storage location, which is available only for non-production clusters.
During installation, your cloud credentials are sufficient to create an S3 bucket, and the Image Registry Operator automatically configures storage.
If the Image Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.
Prerequisites
REGISTRY_STORAGE_S3_ACCESSKEY
REGISTRY_STORAGE_S3_SECRETKEY
Procedure
Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically
configure storage.
1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.
$ oc edit configs.imageregistry.operator.openshift.io/cluster
storage:
s3:
bucket: <bucket-name>
region: <region-name>
WARNING
To secure your registry images in AWS, block public access to the S3 bucket.
You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
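To set the registry storage to an empty directory, a patch of the following form works; this stores images ephemerally, so it is suitable only for non-production clusters:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
    --patch '{"spec":{"storage":{"emptyDir":{}}}}'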
WARNING
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with a NotFound error.
Prerequisites
Procedure
1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:
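For example, assuming the stack name that you chose when you created the bootstrap stack, such as cluster-bootstrap:
$ aws cloudformation delete-stack --stack-name <name>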
Prerequisites
You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that
uses infrastructure that you provisioned.
Download the AWS CLI and install it on your computer. See Install the AWS CLI Using the
Bundled Installer (Linux, macOS, or Unix).
Procedure
To create specific records, you must create a record for each route that your cluster uses, as
shown in the output of the following command:
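For example, one way to list the route hosts is:
$ oc get routes --all-namespaces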
2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address
that it uses, which is shown in the EXTERNAL-IP column:
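For example, assuming the default router-default service in the openshift-ingress namespace:
$ oc -n openshift-ingress get service router-default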
3. Locate the hosted zone ID for the load balancer:
$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1
Z3AADJGX6KTTL2
1 For <external_ip>, specify the value of the external IP address of the Ingress Operator
load balancer that you obtained.
The output of this command is the load balancer hosted zone ID.
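4. Locate the public hosted zone for your cluster's domain. A query of roughly the following form returns it; the exact query expression here is an assumption:
$ aws route53 list-hosted-zones-by-name --dns-name "<domain_name>" \ 1
     --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 \
     --output text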
/hostedzone/Z3URY6TWQ91KVV
1 2 For <domain_name>, specify the Route53 base domain for your OpenShift Container
Platform cluster.
The public hosted zone ID for your domain is shown in the command output. In this example, it is
Z3URY6TWQ91KVV.
1 For <private_hosted_zone_id>, specify the value from the output of the CloudFormation
template for DNS and load balancing.
2 For <cluster_domain>, specify the domain or subdomain that you use with your
OpenShift Container Platform cluster.
3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you
obtained.
4 For <external_ip>, specify the value of the external IP address of the Ingress Operator
load balancer. Ensure that you include the trailing period (.) in this parameter value.
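The record-creation commands that these callouts annotate are aws route53 change-resource-record-sets calls. A sketch for the private zone, assuming a wildcard *.apps alias record (\\052 is the escaped form of *); the same shape, with <public_hosted_zone_id>, applies to the public zone record that the next set of callouts describes:
$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "\\052.apps.<cluster_domain>.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<hosted_zone_id>",
        "DNSName": "<external_ip>.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'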
1 For <public_hosted_zone_id>, specify the public hosted zone for your domain.
2 For <cluster_domain>, specify the domain or subdomain that you use with your
OpenShift Container Platform cluster.
3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you
obtained.
4 For <external_ip>, specify the value of the external IP address of the Ingress Operator
load balancer. Ensure that you include the trailing period (.) in this parameter value.
Prerequisites
Removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned
AWS infrastructure.
Procedure
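1. Monitor cluster completion from the computer that you used to install the cluster; a likely form of the command:
$ ./openshift-install --dir=<installation_directory> wait-for install-complete 1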
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours. You must keep the cluster running for 24
hours in a non-degraded state to ensure that the first certificate rotation has
finished.
Next steps
Prerequisites
Have a copy of the installation program that you used to deploy the cluster.
Have the files that the installation program generated when you created your cluster.
Procedure
1. From the computer that you used to install the cluster, run the following command:
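A likely form of the command:
$ ./openshift-install destroy cluster --dir=<installation_directory> --log-level=info 1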
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
NOTE
You must specify the directory that contains the cluster definition files for your
cluster. The installation program requires the metadata.json file in this directory
to delete the cluster.
2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform
installation program.