Enabling node-local caching DNS on k8s 1.18

Posted by abc on 2021-1-12 18:26:37
To get this set up, I first downloaded the node local DNS deployment manifest:

$ curl -o nodelocaldns.yaml -L https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml

The manifest contains a service account, service, daemonset and config map. The container that is spun up on each node runs CoreDNS, but in caching mode. The caching feature is enabled with the following configuration block, which is part of the config map that was installed above:

cache {
        success  9984 30
        denial   9984 5
        prefetch 500  5
}

The cache block tells CoreDNS how many queries to cache, as well as how long to keep them (TTL). You can also configure CoreDNS to prefetch frequently queried items before they expire! Next, we need to replace three PILLAR variables in the manifest:

$ export localdns="169.254.20.10"

$ export domain="cluster.local"

$ export kubedns="10.96.0.10"

$ sed "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml > nodedns.yml
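If you want to sanity-check the substitution before touching the real manifest, you can run the same sed expression against a throwaway fragment. The fragment below is invented purely for illustration; the real manifest's lines look different:

```shell
# Create a made-up fragment containing the three placeholder variables
cat > /tmp/pillar-demo.txt <<'EOF'
local_ip: __PILLAR__LOCAL__DNS__
domain: __PILLAR__DNS__DOMAIN__
upstream: __PILLAR__DNS__SERVER__
EOF

localdns="169.254.20.10"
domain="cluster.local"
kubedns="10.96.0.10"

# The same expression used against the real manifest
sed "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" /tmp/pillar-demo.txt
```

Each placeholder should come back replaced with the value you exported; if one survives unchanged, the pattern (double underscores on both sides) was mistyped.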

The localdns variable contains the IP address you want your caching CoreDNS instance to listen for queries on. The documentation uses a link-local address, but you can use anything you want; it just can't overlap with existing IPs. Domain contains the Kubernetes domain you set "clusterDomain" to. And finally, kubedns is the service IP that sits in front of your primary CoreDNS pods. Once the manifest is applied:

$ kubectl apply -f nodedns.yml

You will see a new daemonset, and one caching DNS pod per host:

$ kubectl get ds -n kube-system node-local-dns

NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-local-dns   4         4         4       4            4           <none>          3h43m

$ kubectl get po -o wide -n kube-system -l k8s-app=node-local-dns

NAME                   READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
node-local-dns-24knq   1/1     Running   0          3h40m   172.18.0.4   test-worker          <none>           <none>
node-local-dns-fl2zf   1/1     Running   0          3h40m   172.18.0.3   test-worker2         <none>           <none>
node-local-dns-gvqrv   1/1     Running   0          3h40m   172.18.0.5   test-control-plane   <none>           <none>
node-local-dns-v9hlv   1/1     Running   0          3h40m   172.18.0.2   test-worker3         <none>           <none>

One thing I found interesting is how DNS queries get routed to the caching DNS pods. Given a pod with a ClusterFirst policy, the nameserver value in /etc/resolv.conf will get populated with the service IP that sits in front of your in-cluster CoreDNS pods:

$ kubectl get svc kube-dns -o wide -n kube-system

NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   5d18h   k8s-app=kube-dns

$ kubectl exec -it nginx-f89759699-2c8zw -- cat /etc/resolv.conf

search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
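For reference, a pod gets that resolv.conf because its dnsPolicy is ClusterFirst, which is the default when you don't set one. A minimal sketch of such a pod (the names and image here are arbitrary, chosen only for illustration):

```yaml
# Minimal pod with the default ClusterFirst DNS policy (names are arbitrary)
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  dnsPolicy: ClusterFirst   # default; resolv.conf points at the kube-dns service IP
  containers:
  - name: sh
    image: busybox
    command: ["sleep", "3600"]
```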

But under the covers, iptables has an OUTPUT chain to route DNS requests destined for your CoreDNS cluster service IP to the IP assigned to the localdns variable. We can view that with the iptables command:

$ iptables -L OUTPUT

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  10.96.0.10           anywhere             udp spt:53
ACCEPT     tcp  --  10.96.0.10           anywhere             tcp spt:53
ACCEPT     udp  --  169.254.20.10        anywhere             udp spt:53
ACCEPT     tcp  --  169.254.20.10        anywhere             tcp spt:53
KUBE-SERVICES  all  --  anywhere         anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere         anywhere
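node-local-dns also adds NOTRACK rules in the raw table so that DNS packets to the cache skip connection tracking, which is a big part of why this setup avoids conntrack-related DNS flakiness. You can look for those rules on the node itself (this must run on the node, not inside a pod; the exact rules vary by node-local-dns version):

```shell
# On the node: list raw-table OUTPUT rules and pick out the two DNS IPs
iptables -t raw -L OUTPUT -n | grep -E '169\.254\.20\.10|10\.96\.0\.10'
```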

Pretty neat! Now to test this out. If we exec into a pod:

$ kubectl exec -it nginx-f89759699-59ss2 -- sh

And query the local caching instance:

$ dig +short @169.254.20.10 prefetch.net

67.205.141.207

$ dig +short @169.254.20.10 prefetch.net

67.205.141.207

We get the same results. But if you check the logs, the first request will hit the local caching server and then be forwarded to your primary CoreDNS service IP. When the second query comes in, the cached entry is returned to the requester, reducing load on your primary CoreDNS servers. And if your pods are configured to point at the upstream CoreDNS servers, iptables will ensure the query hits the local DNS cache. Pretty sweet! And this all happens through the magic of CoreDNS, iptables and some awesome developers! This feature rocks!
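If you would rather not dig through logs, dig's own timing gives the same signal. Run the query twice from a pod; the first answer is fetched upstream, the second should come back from the cache with a much smaller query time (the numbers will vary in your environment):

```shell
# First query: forwarded upstream, nonzero query time
dig @169.254.20.10 prefetch.net | grep 'Query time'
# Second query: served from the local cache, near-zero query time
dig @169.254.20.10 prefetch.net | grep 'Query time'
```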
https://prefetch.net/blog/2020/05/15/using-node-local-caching-on-your-kubernetes-nodes-to-reduce-coredns-traffic/
