12 - Kubespray

This document provides a comprehensive guide on installing Kubernetes using Kubespray, which utilizes Ansible playbooks for cluster management. It outlines the requirements for setting up a cluster, including necessary software, inventory configuration, deployment planning, and verification of the installation. Additionally, it covers the process for adding or replacing nodes within the cluster.


Certified Kubernetes Administrator (CKA)
Behrad Eslamifar
b.eslamifar@gmail.com

Installing Kubernetes with Kubespray
Kubespray

● Kubespray is a composition of Ansible playbooks, inventory, and
provisioning tools
● A highly available cluster
● Composable attributes
● Support for most popular Linux distributions
○ Ubuntu 16.04, 18.04, 20.04, CentOS/RHEL/Oracle Linux 7, 8,
Debian Buster, Jessie, Stretch, Wheezy, Fedora 31, 32, Fedora
CoreOS, openSUSE Leap 15, Flatcar Container Linux by Kinvolk
● Continuous integration tests


https://kubernetes.io/docs/setup/production-environment/tools/kubespray/

https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md
Creating a Cluster
1/5 Requirements

● Ansible v2.9 and python-netaddr must be installed on the machine that will
run Ansible commands
● Jinja 2.11 (or newer) is required to run the Ansible playbooks
● The target servers must have access to the Internet in order to pull Docker
images; otherwise, additional configuration is required
● The target servers must be configured to allow IPv4 forwarding
● Your SSH key must be copied to all servers that are part of your inventory
● Firewalls are not managed; you will need to implement your own rules as
you usually do
● If Kubespray is run from a non-root user account, a correct privilege
escalation method must be configured on the target servers, and the
ansible_become flag or the command parameters --become or -b must be
specified


https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
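Before running any playbook, it can save time to sanity-check the control machine against the list above. The script below is an illustrative sketch, not part of Kubespray: it checks the installed Ansible version and the netaddr module using only the standard library.

```python
import re
import shutil
import subprocess

def version_tuple(s):
    """Extract up to three numeric components, e.g. 'ansible 2.9.27' -> (2, 9, 27)."""
    return tuple(int(x) for x in re.findall(r"\d+", s)[:3])

def meets(minimum, found):
    """True if the version string found is at least the required minimum."""
    return version_tuple(found) >= version_tuple(minimum)

if shutil.which("ansible"):
    # The first line of `ansible --version` carries the version number.
    first_line = subprocess.run(
        ["ansible", "--version"], capture_output=True, text=True
    ).stdout.splitlines()[0]
    print("ansible OK" if meets("2.9", first_line) else "ansible too old")
else:
    print("ansible not found; install it first")

try:
    import netaddr  # required by the inventory tooling
    print("python-netaddr OK")
except ImportError:
    print("python-netaddr missing; pip3 install netaddr")
```

IPv4 forwarding and SSH key distribution still have to be verified on the target servers themselves.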
2/5 Inventory

$ git clone https://github.com/kubernetes-sigs/kubespray.git
$ cd kubespray
$ sudo apt-get install -y python3-pip
$ sudo pip3 install -r requirements.txt

$ ssh-copy-id user@192.168.43.202
$ ssh-copy-id user@192.168.43.203
$ ssh-copy-id user@192.168.43.204
...

$ python3 contrib/inventory_builder/inventory.py help
Usage: inventory.py ip1 [ip2 ...]
Examples: inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
...
Configurable env vars:
DEBUG          Enable debug printing. Default: True
CONFIG_FILE    File to write config to. Default: ./inventory/sample/hosts.yaml
HOST_PREFIX    Host prefix for generated hosts. Default: node

$ cp -r inventory/sample inventory/anisa
$ CONFIG_FILE=./inventory/anisa/hosts.yaml \
python3 contrib/inventory_builder/inventory.py \
master-1,192.168.43.202 worker-1,192.168.43.203 worker-2,192.168.43.204
2/5 Inventory (Review Inventory)

● kube-master
○ Only master nodes: 1, 3, 5, ...
● kube-node
○ Only worker nodes
● etcd
○ List master nodes, or separate hosts for etcd

$ vi inventory/anisa/hosts.yaml
all:
  hosts:
    master-1:
      ansible_host: 192.168.43.202
      ip: 192.168.43.202
      access_ip: 192.168.43.202
    worker-1:
      ansible_host: 192.168.43.203
      ip: 192.168.43.203
      access_ip: 192.168.43.203
  children:
    kube-master:
      hosts:
        master-1:
    kube-node:
      hosts:
        worker-1:
        worker-2:
    etcd:
      hosts:
        master-1:
...
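Because etcd relies on quorum, the etcd group should contain an odd number of hosts (1, 3, 5, ...): an even member count tolerates no more failures than the next odd count down. A tiny illustrative check, using the hypothetical host names from the sample inventory:

```python
# Role assignments mirroring a generated hosts.yaml (hypothetical names)
groups = {
    "kube-master": ["master-1"],
    "kube-node": ["worker-1", "worker-2"],
    "etcd": ["master-1"],
}

# etcd needs a majority of members to elect a leader, so keep the count odd.
assert len(groups["etcd"]) % 2 == 1, "use an odd number of etcd members"
print("role sizes:", {k: len(v) for k, v in groups.items()})
```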
3/5 Plan your cluster deployment

● Choice of deployment mode: kubeadm or non-kubeadm
● CNI (networking) plugins
● DNS configuration
● Choice of control plane: native/binary or containerized
● Component versions
● Component runtime options
○ Docker
○ Containerd
○ CRI-O
● Choice of addons
● Reserved resources


https://kubernetes.io/docs/setup/production-environment/tools/kubespray/
3/5 Choice of Deployment Mode

$ vi inventory/sample/group_vars/all/all.yml
## Experimental kubeadm etcd deployment mode. Available
## only for new deployment
etcd_kubeadm_enabled: false

## External LB example config
# loadbalancer_apiserver:
#   address: 1.2.3.4
#   port: 1234

## Internal loadbalancers for apiservers
# loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
# loadbalancer_apiserver_type: nginx

## Local loadbalancer should use this port
## And must be set port 6443
loadbalancer_apiserver_port: 6443

## Set these proxy values in order to update package
## manager and docker daemon to use proxies
# http_proxy: ""
# https_proxy: ""

## Refer to roles/kubespray-defaults/defaults/main.yml
## before modifying no_proxy
# no_proxy: ""

$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
## Settings for containerized control plane (kubelet/secrets)
kubelet_deployment_type: host
helm_deployment_type: host

# Enable kubeadm experimental control plane
kubeadm_control_plane: false
kubeadm_certificate_key: "{{ lookup('password',
credentials_dir + '/kubeadm_certificate_key.creds
length=64 chars=hexdigits') | lower }}"

# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
...
3/5 CNI (Networking) Plugin

$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
# Choose network plugin (cilium, calico, contiv, weave or
# flannel. Use cni for generic cni plugin) Can also be set
# to 'cloud', which lets the cloud provider setup appropriate
# routing
kube_network_plugin: calico

# Setting multi_networking to true will install Multus:
# https://github.com/intel/multus-cni
kube_network_plugin_multus: false

# Kubernetes internal network for services, unused block
# of space.
kube_service_addresses: 10.233.0.0/18

# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.233.64.0/18

# internal network node size allocation (optional). This is
# the size allocated to each node on your network. With
# these defaults you should have room for 4096 nodes with 254
# pods per node.
kube_network_node_prefix: 24
...

$ vi inventory/sample/group_vars/k8s-cluster/k8s-net-calico.yml
...
# Choose data store type for calico: "etcd" or
# "kdd" (kubernetes datastore)
calico_datastore: "kdd"

# IP in IP and VXLAN are mutually exclusive modes.
# set IP in IP encapsulation mode: "Always",
# "CrossSubnet", "Never"
calico_ipip_mode: 'CrossSubnet'

# set VXLAN encapsulation mode: "Always",
# "CrossSubnet", "Never"
# calico_vxlan_mode: 'Never'
...
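The service and pod networks must not overlap, and the node prefix determines how many nodes (and pods per node) fit into the pod network. A quick way to check these relationships for your own values, sketched with the standard-library ipaddress module (values copied from the defaults above; note that a /18 pod subnet carved into /24 blocks yields 64 nodes, so a larger pod subnet is needed for bigger clusters):

```python
import ipaddress

service_net = ipaddress.ip_network("10.233.0.0/18")   # kube_service_addresses
pods_net = ipaddress.ip_network("10.233.64.0/18")     # kube_pods_subnet
node_prefix = 24                                      # kube_network_node_prefix

# The two internal ranges must not overlap each other
# (or anything else in your infrastructure).
assert not service_net.overlaps(pods_net)

# Each node is allocated one /24 block out of the pod network.
max_nodes = 2 ** (node_prefix - pods_net.prefixlen)   # 2**(24-18) = 64
pods_per_node = 2 ** (32 - node_prefix) - 2           # 254 usable addresses
print(f"room for {max_nodes} nodes with {pods_per_node} pods per node")
```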
3/5 DNS Configuration

$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via
# /etc/resolv.conf for hostnet pods
ndots: 2
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
enable_nodelocaldns: true
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254

# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
# Deploy netchecker app to verify DNS resolve as
# an HTTP service
deploy_netchecker: false
...

$ vi inventory/sample/group_vars/all/all.yml
...
## Upstream dns servers
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.4.4
...
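The ndots setting follows standard resolv.conf semantics: a name containing fewer than ndots dots is tried against the search domains before being queried as-is. A small sketch of that ordering (the search list shown is the usual in-cluster one for a pod in the default namespace):

```python
def candidate_names(name, search_domains, ndots=2):
    """Order of lookups a resolver tries for `name`, per resolv.conf semantics:
    names with fewer than `ndots` dots go through the search list first."""
    if name.endswith("."):
        return [name]  # fully qualified: looked up as-is only
    expanded = [f"{name}.{d}" for d in search_domains]
    if name.count(".") >= ndots:
        return [name] + expanded
    return expanded + [name]

# Typical search list for a pod in the "default" namespace
search = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
print(candidate_names("kubernetes", search))
```

So a bare service name like `kubernetes` resolves via `kubernetes.default.svc.cluster.local` first, which is why low ndots values reduce needless upstream queries for external names.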
3/5 Control Plane: native/binary or containerized

$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
## Settings for containerized control plane (kubelet/secrets)
kubelet_deployment_type: host
helm_deployment_type: host

# Enable kubeadm experimental control plane
kubeadm_control_plane: false
kubeadm_certificate_key: "{{ lookup('password',
credentials_dir + '/kubeadm_certificate_key.creds
length=64 chars=hexdigits') | lower }}"

# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
...

$ vi inventory/sample/group_vars/etcd.yml
...
## Set level of detail for etcd exported metrics,
## specify 'extensive' to include histogram metrics.
# etcd_metrics: basic

## Settings for etcd deployment type (host or docker)
etcd_deployment_type: docker
...

$ vi inventory/sample/group_vars/all/all.yml
...
## Experimental kubeadm etcd deployment mode.
## Available only for new deployment
etcd_kubeadm_enabled: false
...
3/5 Component Versions and Container Runtime

$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
## Change this to use another Kubernetes version,
## e.g. a current beta release
kube_version: v1.18.9

# kubernetes image repo define
kube_image_repo: "k8s.gcr.io"
...

$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
## Container runtime
## docker for docker, crio for cri-o and containerd
## for containerd.
container_manager: docker

# Additional container runtimes
kata_containers_enabled: false
...

$ vi inventory/sample/group_vars/all/docker.yml
...
## Uncomment this if you want to force overlay/overlay2
## as docker storage driver
docker_storage_options: -s overlay2

## Used to set docker daemon iptables options to true
docker_iptables_enabled: "false"

# Docker log options
# Rotate container stderr/stdout logs at 50m and keep last 5
docker_log_opts: "--log-opt max-size=50m --log-opt max-file=5"

## Add other registry, for example a China registry mirror.
docker_registry_mirrors:
  - https://mirror.gcr.io

## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
# docker_options: ""
...
3/5 Addons

$ vi inventory/sample/group_vars/k8s-cluster/addons.yml
...
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true

# Metrics Server deployment
metrics_server_enabled: false
# metrics_server_kubelet_insecure_tls: true
# metrics_server_metric_resolution: 60s
# metrics_server_kubelet_preferred_address_types: "InternalIP"

# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
ingress_publish_status_address: ""
# ingress_nginx_nodeselector:
#   kubernetes.io/os: "linux"
# ingress_nginx_tolerations:
#   - key: "node-role.kubernetes.io/master"
#     operator: "Equal"
#     value: ""
#     effect: "NoSchedule"
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443

# Cert manager deployment
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"

# MetalLB deployment
metallb_enabled: false
# metallb_ip_range:
#   - "10.5.0.50-10.5.0.99"
# metallb_version: v0.9.3
...
3/5 Reserved Resources

$ vi inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
...
# A comma separated list of levels of node allocatable
# enforcement to be enforced by kubelet.
# Acceptable options are 'pods', 'system-reserved',
# 'kube-reserved' and ''. Default is "".
# kubelet_enforce_node_allocatable: pods

## Optionally reserve resources for OS system daemons.
# system_reserved: true
## Uncomment to override default values
# system_memory_reserved: 512M
# system_cpu_reserved: 500m
## Reservation for master hosts
# system_master_memory_reserved: 256M
# system_master_cpu_reserved: 250m
...
4/5 Deploy a Cluster

● Ensure proxy settings for apt/yum
● Revert any previous Docker configuration to prevent conflicts with
Kubespray
● Cluster deployment is done with ansible-playbook
● Large deployments (100+ nodes) may require specific adjustments
for best results

$ ansible-playbook -i inventory/anisa/hosts.yaml \
-u user --become --become-user=root cluster.yml

$ ansible-playbook -i inventory/anisa/hosts.yaml \
-u root cluster.yml


https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md
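Once the playbook finishes, a common first check is `kubectl get nodes` on a master (or after copying the generated admin kubeconfig to your workstation). The sketch below does not talk to a cluster; it only parses sample output in the `--no-headers` format, and the node names shown are hypothetical:

```python
def not_ready(kubectl_output):
    """Return names of nodes whose STATUS column is anything but 'Ready'."""
    bad = []
    for line in kubectl_output.strip().splitlines():
        name, status = line.split()[:2]
        if status != "Ready":
            bad.append(name)
    return bad

# Example output of `kubectl get nodes --no-headers` (hypothetical)
sample = """\
master-1   Ready      master   5m    v1.18.9
worker-1   Ready      <none>   4m    v1.18.9
worker-2   NotReady   <none>   4m    v1.18.9
"""
print(not_ready(sample))  # ['worker-2']
```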
5/5 Verify the Deployment

● Kubespray provides a way to verify inter-pod connectivity and DNS
resolution with Netchecker. Netchecker ensures that the
netchecker-agent pods can resolve DNS requests and ping each
other within the default namespace.


https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md
Adding/replacing a node

● Limitation: Removal of first kube-master and etcd-master
● Adding/replacing a worker node
● Adding/replacing a master node
● Adding an etcd node
● Removing an etcd node
● Reset installation

$ ansible-playbook -i inventory/anisa/hosts.yaml \
--become --become-user=root scale.yml

$ ansible-playbook upgrade-cluster.yml --become \
--become-user=root -i inventory/anisa/hosts.yaml \
-e kube_version=v1.18.9

$ ansible-playbook -i inventory/anisa/hosts.yaml \
--become --become-user=root reset.yml


https://github.com/kubernetes-sigs/kubespray/blob/master/docs/nodes.md
