Debugging DNS Resolution
Your cluster must be configured to use the CoreDNS addon or its precursor, kube-dns.
Your Kubernetes server must be at or later than version v1.6. To check the version,
enter kubectl version.
admin/dns/dnsutils.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
Note: This example creates a pod in the default namespace. DNS name resolution for
services depends on the namespace of the pod. For more information, review DNS for
Services and Pods.
Use that manifest to create a Pod:
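kubectl create -f https://k8s.io/examples/admin/dns/dnsutils.yaml

pod/dnsutils created

Verify its status:

kubectl get pods dnsutils

NAME       READY     STATUS    RESTARTS   AGE
dnsutils   1/1       Running   0          <some-time>

Once that Pod is running, you can exec nslookup in that environment:

kubectl exec -i -t dnsutils -- nslookup kubernetes.default

If the nslookup command fails, check the following: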
Take a look inside the resolv.conf file. (See Customizing DNS Service and Known
issues below for more information)
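kubectl exec -ti dnsutils -- cat /etc/resolv.conf

Verify that the search path and name server are set up like the following (note that the search path may vary for different cloud providers):

search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal
nameserver 10.0.0.10
options ndots:5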
Use the kubectl get pods command to verify that the DNS pod is running.
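kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

The output should be similar to:

NAME                       READY     STATUS    RESTARTS   AGE
...
coredns-7b96bf9f76-5hsxb   1/1       Running   0          1h
coredns-7b96bf9f76-mvmmt   1/1       Running   0          1h
...

Note: The value for the k8s-app label is kube-dns for both CoreDNS and kube-dns deployments. If you see that no CoreDNS Pod is running, or that the Pod has failed or completed, the DNS add-on may not be deployed by default in your current environment and you will have to deploy it manually.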
Use the kubectl logs command to see logs for the DNS containers.
For CoreDNS:
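kubectl logs --namespace=kube-system -l k8s-app=kube-dns

A healthy CoreDNS log looks similar to: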
.:53
2018/08/15 14:37:17 [INFO] CoreDNS-1.2.2
2018/08/15 14:37:17 [INFO] linux/amd64, go1.10.3, 2e322f6
CoreDNS-1.2.2
linux/amd64, go1.10.3, 2e322f6
2018/08/15 14:37:17 [INFO] plugin/reload: Running configuration MD5 = 24e6c59e83ce706f07bcc82c31b1ea1c
See if there are any suspicious or unexpected messages in the logs.
Verify that the DNS service is up by using the kubectl get service command.
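kubectl get svc --namespace=kube-system

The output should list the DNS service, which is named kube-dns for both CoreDNS and kube-dns deployments:

NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
...
kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP   1h
...

If the service is missing, see debugging Services for more information.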
You can verify that DNS endpoints are exposed by using the kubectl get
endpoints command.
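kubectl get endpoints kube-dns --namespace=kube-system

The output should be similar to:

NAME       ENDPOINTS                       AGE
kube-dns   10.180.3.17:53,10.180.3.17:53   1h

If you do not see the endpoints, see the Endpoints section in the debugging Services documentation.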
For additional Kubernetes DNS examples, see the cluster-dns examples in the
Kubernetes GitHub repository.
You can verify if queries are being received by CoreDNS by adding the log plugin to
the CoreDNS configuration (aka Corefile). The CoreDNS Corefile is held in
a ConfigMap named coredns. To edit it, use the command:
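kubectl -n kube-system edit configmap coredns

Then add log in the Corefile section, as in the example below: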
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
After saving the changes, it may take a minute or two for Kubernetes to
propagate these changes to the CoreDNS pods.
Next, make some queries and view the logs per the sections above in this document.
If CoreDNS pods are receiving the queries, you should see them in the logs.
.:53
2018/08/15 14:37:15 [INFO] CoreDNS-1.2.0
2018/08/15 14:37:15 [INFO] linux/amd64, go1.10.3, 2e322f6
CoreDNS-1.2.0
linux/amd64, go1.10.3, 2e322f6
2018/09/07 15:29:04 [INFO] plugin/reload: Running configuration MD5 = 162475cdf272d8aa601e6fe67a6ad42f
2018/09/07 15:29:04 [INFO] Reloading complete
172.17.0.18:41675 - [07/Sep/2018:15:29:11 +0000] 59925 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd,ra 106 0.000066649s
CoreDNS must be able to list service and endpoint related resources to properly
resolve service names.
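You can check the permissions of the ClusterRole used by CoreDNS (system:coredns in a typical installation):

kubectl describe clusterrole system:coredns

The expected output should include: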
PolicyRule:
  Resources                        Non-Resource URLs  Resource Names  Verbs
  ---------                        -----------------  --------------  -----
  nodes                            []                 []              [get]
  endpoints                        []                 []              [list watch]
  namespaces                       []                 []              [list watch]
  pods                             []                 []              [list watch]
  services                         []                 []              [list watch]
  endpointslices.discovery.k8s.io  []                 []              [list watch]
If any permissions are missing, edit the ClusterRole to add them:
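kubectl edit clusterrole system:coredns

Example insertion of EndpointSlices permissions: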
...
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
...
DNS queries that don't specify a namespace are limited to the pod's namespace.
If the namespace of the pod and service differ, the DNS query must include the
namespace of the service.
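For example, using the dnsutils Pod created earlier and a placeholder service name:

This query is limited to the Pod's namespace:

kubectl exec -i -t dnsutils -- nslookup <service-name>

This query specifies the namespace:

kubectl exec -i -t dnsutils -- nslookup <service-name>.<namespace>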
Known issues
Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved).
systemd-resolved moves and replaces /etc/resolv.conf with a stub file that
can cause a fatal forwarding loop when resolving names in upstream servers. This can
be fixed manually by using kubelet's --resolv-conf flag to point to the
correct resolv.conf (with systemd-resolved, this is /run/systemd/resolve/resolv.conf).
kubeadm automatically detects systemd-resolved, and adjusts the kubelet flags
accordingly.
Kubernetes installs do not configure the nodes' resolv.conf files to use the cluster DNS
by default, because that process is inherently distribution-specific. This should
probably be implemented eventually.
Linux's libc (a.k.a. glibc) limits the number of DNS nameserver records to 3 by default,
and Kubernetes needs to consume one of those nameserver records. This means that if a local
installation already uses 3 nameservers, some of those entries will be lost. To work
around this limit, the node can run dnsmasq, which will provide more nameserver entries.
You can also use kubelet's --resolv-conf flag.
If you are using Alpine version 3.3 or earlier as your base image, DNS may not work
properly due to a known issue with Alpine. Kubernetes issue 30215 details more
information on this.