12 - Kubespray
(CKA)
Behrad Eslamifar
b.eslamifar@gmail.com
Installing Kubernetes
with Kubespray
Kubespray
● https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
● https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md
Creating a Cluster
1/5 Requirements
● Ansible v2.9 and python-netaddr are installed on the machine that will run
Ansible commands (see the sketch after this list)
● Jinja 2.11 (or newer) is required to run the Ansible playbooks
● The target servers must have access to the Internet in order to pull Docker
images. Otherwise, additional configuration is required
● The target servers are configured to allow IPv4 forwarding
● Your SSH key must be copied to all the servers that are part of your inventory
● Firewalls are not managed by Kubespray; you will need to implement your own
rules as you normally would
● If Kubespray is run from a non-root user account, a suitable privilege escalation
method must be configured on the target servers, and the ansible_become flag
or the --become / -b command parameters must be specified
● https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
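● A minimal sketch of preparing the Ansible control machine, assuming you clone Kubespray and install its pinned Python requirements; the node IPs are the ones used later in this inventory:
$ git clone https://github.com/kubernetes-sigs/kubespray.git
$ cd kubespray
$ pip3 install -r requirements.txt            # installs Ansible, Jinja2, netaddr
$ ssh-copy-id root@192.168.43.202             # repeat for every server in the inventory
$ ssh root@192.168.43.202 'sysctl net.ipv4.ip_forward'   # should report 1 (IPv4 forwarding enabled)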
2/5 Inventory
$ cp -r inventory/sample inventory/cka
$ CONFIG_FILE=./inventory/cka/hosts.yaml \
python3 contrib/inventory_builder/inventory.py \
master-1,192.168.43.202 worker-1,192.168.43.203 worker-2,192.168.43.204
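● The inventory builder has its own Python dependencies; a hedged example of the same generation step using an IP array, in the style shown in the Kubespray README:
$ pip3 install -r contrib/inventory_builder/requirements.txt
$ declare -a IPS=(192.168.43.202 192.168.43.203 192.168.43.204)
$ CONFIG_FILE=inventory/cka/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py ${IPS[@]}
● With bare IPs the builder assigns generic node names (node1, node2, …); passing name,ip pairs as in the command above keeps the master-1/worker-N naming.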
2/5 Inventory (Review Inventory)
● kube-master
○ Only master nodes (1, 3, 5, … nodes)
● kube-node
○ Only worker nodes
● etcd
○ List master nodes, or separate hosts for etcd
$ vi inventory/cka/hosts.yaml
all:
  hosts:
    master-1:
      ansible_host: 192.168.43.202
      ip: 192.168.43.202
      access_ip: 192.168.43.202
    worker-1:
      ansible_host: 192.168.43.203
      ip: 192.168.43.203
      access_ip: 192.168.43.203
  children:
    kube-master:
      hosts:
        master-1:
    kube-node:
      hosts:
        worker-1:
        worker-2:
    etcd:
      hosts:
        master-1:
...
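● Before going further, it is worth confirming that Ansible can reach every host in the generated inventory; a quick check, assuming root SSH access as configured above:
$ ansible -i inventory/cka/hosts.yaml all -m ping -u root
$ ansible -i inventory/cka/hosts.yaml kube-master --list-hosts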
3/5 Plan your cluster deployment
● https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
3/5 Choose Deployment Mode
$ vi inventory/sample/group_vars/all/all.yml
## Experimental kubeadm etcd deployment mode. Available
## only for new deployment
etcd_kubeadm_enabled: false

## External LB example config
# loadbalancer_apiserver:
#   address: 1.2.3.4
#   port: 1234

## Internal loadbalancers for apiservers
# loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
# loadbalancer_apiserver_type: nginx

## Local loadbalancer should use this port
## And must be set port 6443
loadbalancer_apiserver_port: 6443

## Refer to roles/kubespray-defaults/defaults/main.yml
## before modifying no_proxy
# no_proxy: ""

$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
## Settings for containerized control plane (kubelet/secrets)
kubelet_deployment_type: host
helm_deployment_type: host

# Enable kubeadm experimental control plane
kubeadm_control_plane: false
kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"

# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
...
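● If no external load balancer is available, the commented loadbalancer_apiserver_localhost options above can be enabled instead; a sketch of overriding them in the copied inventory (inventory/cka is assumed, adjust to your own copy):
$ cat >> inventory/cka/group_vars/all/all.yml <<'EOF'
# run a local nginx proxy on every node that forwards to all apiservers
loadbalancer_apiserver_localhost: true
loadbalancer_apiserver_type: nginx
EOF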
3/5 CNI (Networking) Plugin
$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
# Choose network plugin (cilium, calico, contiv, weave or
# flannel. Use cni for generic cni plugin) Can also be set
# to 'cloud', which lets the cloud provider setup appropriate
# routing
kube_network_plugin: calico

# Setting multi_networking to true will install Multus:
# https://github.com/intel/multus-cni
kube_network_plugin_multus: false

# Kubernetes internal network for services, unused block
# of space.
kube_service_addresses: 10.233.0.0/18

$ vi inventory/sample/group_vars/k8s-cluster/k8s-net-calico.yml
...
# Choose data store type for calico: "etcd" or
# "kdd" (kubernetes datastore)
calico_datastore: "kdd"

# IP in IP and VXLAN is mutualy exclusive modes.
# set IP in IP encapsulation mode: "Always",
# "CrossSubnet", "Never"
calico_ipip_mode: 'CrossSubnet'

# set VXLAN encapsulation mode: "Always",
# "CrossSubnet", "Never"
# calico_vxlan_mode: 'Never'
...
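● Changes like the CNI choice belong in your own inventory copy rather than in inventory/sample; a hedged example of pinning Calico there (paths assumed to follow the copy made earlier):
$ grep -n 'kube_network_plugin:' inventory/cka/group_vars/k8s-cluster/k8s-cluster.yml
$ sed -i 's/^kube_network_plugin:.*/kube_network_plugin: calico/' \
    inventory/cka/group_vars/k8s-cluster/k8s-cluster.yml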
$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local

# Subdomains of DNS domain to be resolved via
# /etc/resolv.conf for hostnet pods
ndots: 2

# Can be coredns, coredns_dual, manual or none
dns_mode: coredns

# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x

# Enable nodelocal dns cache
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254

$ vi inventory/sample/group_vars/all/all.yml
...
## Upstream dns servers
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.4.4
...
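● Once the cluster is deployed, the DNS settings above can be exercised with a throwaway pod from any machine with kubectl access; a sketch (busybox chosen here only for its nslookup):
$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
    nslookup kubernetes.default.svc.cluster.local
$ kubectl -n kube-system get pods | grep -E 'coredns|nodelocaldns'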
$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
## Settings for containerized control plane (kubelet/secrets)
kubelet_deployment_type: host
helm_deployment_type: host

# Enable kubeadm experimental control plane
kubeadm_control_plane: false
kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"

# kubernetes image repo define
kube_image_repo: "k8s.gcr.io"
...

$ vi inventory/sample/group_vars/etcd.yml
...
## Set level of detail for etcd exported metrics,
## specify 'extensive' to include histogram metrics.
# etcd_metrics: basic

## Settings for etcd deployment type (host or docker)
etcd_deployment_type: docker
...

$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
## Container runtime
## docker for docker, crio for cri-o and containerd
## for containerd.
container_manager: docker

# Additional container runtimes
kata_containers_enabled: false
...

$ vi inventory/sample/group_vars/all/docker.yml
...
## Used to set docker daemon iptables options to true
docker_iptables_enabled: "false"

# Docker log options
# Rotate container stderr/stdout logs at 50m and keep last 5
docker_log_opts: "--log-opt max-size=50m --log-opt max-file=5"

## Add other registry,example China registry mirror.
docker_registry_mirrors:
  - https://mirror.gcr.io

## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
# docker_options: ""
...
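● With etcd_deployment_type: docker and container_manager: docker, both etcd and the workloads run under the Docker engine on the nodes; a quick spot check after deployment (the etcd container name filter is an assumption):
$ ssh root@192.168.43.202 'docker ps --filter name=etcd --format "table {{.Names}}\t{{.Status}}"'
$ ssh root@192.168.43.203 'docker info --format "{{.ServerVersion}}"'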
3/5 Addons
$ vi inventory/sample/group_vars/k8s-cluster/addons.yml
...
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true

# Metrics Server deployment
metrics_server_enabled: false
# metrics_server_kubelet_insecure_tls: true
# metrics_server_metric_resolution: 60s
# metrics_server_kubelet_preferred_address_types: "InternalIP"

# Cert manager deployment
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"

# MetalLB deployment
metallb_enabled: false
# metallb_ip_range:
#   - "10.5.0.50-10.5.0.99"
# metallb_version: v0.9.3
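● A hedged example of turning on one of these addons in the copied inventory; the MetalLB address range below is made up for the 192.168.43.0/24 lab network, and MetalLB additionally expects strict ARP on kube-proxy:
$ cat >> inventory/cka/group_vars/k8s-cluster/addons.yml <<'EOF'
metallb_enabled: true
metallb_ip_range:
  - "192.168.43.240-192.168.43.250"
EOF
$ echo 'kube_proxy_strict_arp: true' >> inventory/cka/group_vars/k8s-cluster/k8s-cluster.yml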
$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
# A comma separated list of levels of node allocatable
# enforcement to be enforced by kubelet.
# Acceptable options are 'pods', 'system-reserved',
# 'kube-reserved' and ''. Default is "".
# kubelet_enforce_node_allocatable: pods
4/5 Deploy the Cluster
$ ansible-playbook -i inventory/cka/hosts.yaml \
-u root cluster.yml
● https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md
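● If the nodes are reached through a non-root user instead, the privilege escalation flags from the requirements apply; an alternative invocation, assuming an ubuntu user with sudo rights:
$ ansible-playbook -i inventory/cka/hosts.yaml \
    -u ubuntu --become --become-user=root cluster.yml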
5/5 Verify the Deployment
● https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md
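● Basic verification after the playbook finishes (run on master-1, where Kubespray places the admin kubeconfig), plus the netchecker endpoint described in the netcheck doc; the 31081 NodePort is its documented default, so treat it as an assumption for your install:
$ kubectl get nodes -o wide
$ kubectl get pods --all-namespaces
$ curl http://localhost:31081/api/v1/connectivity_check   # on any node, if netchecker is deployed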
Adding/replacing a node
● Reset installation
● https://github.com/kubernetes-sigs/kubespray/blob/master/docs/nodes.md
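● The nodes doc above relies on the dedicated playbooks that ship with Kubespray; a sketch of the usual flow (node names are examples):
$ # add worker-3 to inventory/cka/hosts.yaml first, then:
$ ansible-playbook -i inventory/cka/hosts.yaml -u root scale.yml --limit=worker-3
$ # drain and remove a node:
$ ansible-playbook -i inventory/cka/hosts.yaml -u root remove-node.yml -e node=worker-2
$ # wipe the whole installation:
$ ansible-playbook -i inventory/cka/hosts.yaml -u root reset.yml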