Deploying a Kubernetes 1.14.0 HA cluster with kubeadm

Overview
  This deployment uses a dedicated etcd cluster instead of co-locating etcd with the master nodes. Decoupling etcd from the masters lowers cluster risk and improves robustness: the loss of a single master or etcd node has very little impact on the cluster.

  The deployment environment is CentOS 7.6, whose default 3.10 kernel lacks the ip_vs_fo.ko module, which prevents kube-proxy from enabling IPVS. The latest stable kernel at the time of writing is 5.0, which Kubernetes 1.14 also supports, so the kernel will be upgraded to the latest stable version.

  The deployment architecture is as follows: three master nodes, three worker nodes, and a three-node etcd cluster.


Node information

The load balancer VIP is 172.22.35.30, and none of the nodes has a swap partition (see the note after the table if swap needs to be disabled).
Host        IP               Hostname               Hardware env    Role
master01    172.22.35.20     master01.s4lm0x.com    4Core 8G        master
master02    172.22.35.21     master02.s4lm0x.com    4Core 8G        master
master03    172.22.35.22     master03.s4lm0x.com    4Core 8G        master
lb01        172.22.35.23     lb01.s4lm0x.com        2Core 4G        LB
lb02        172.22.35.24     lb02.s4lm0x.com        2Core 4G        LB
worker01    172.22.35.25     worker01.s4lm0x.com    2Core 4G        worker
worker02    172.22.35.26     worker02.s4lm0x.com    2Core 4G        worker
etcd01      172.22.35.27     etcd01.s4lm0x.com      4Core 8G        etcd
etcd02      172.22.35.28     etcd02.s4lm0x.com      4Core 8G        etcd
etcd03      172.22.35.29     etcd03.s4lm0x.com      4Core 8G        etcd
node01      172.22.35.180    node01.s4lm0x.com      4Core 8G        worker, chrony
VIP         172.22.35.30     floats between lb01 and lb02
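If any node does turn out to have swap enabled, kubelet will refuse to start by default; a minimal sketch for disabling it on the affected node:
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab    # comment out the swap entry so it stays off after reboot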
For convenience, all operations in this procedure are performed as root. Most commands are run on master01; steps that must be executed on every node are called out explicitly.
Base environment configuration
Disable unnecessary services on all nodes
systemctl disable --now firewalld
setenforce 0
sed -i 's@SELINUX=enforcing@SELINUX=disabled@' /etc/selinux/config
systemctl disable --now NetworkManager
systemctl disable --now dnsmasq
Configure the yum repositories

For the Kubernetes repository, using a domestic mirror site is also recommended.
curl https://mirrors.aliyun.com/repo/epel-7.repo -o /etc/yum.repos.d/epel-7.repo
curl https://mirrors.aliyun.com/repo/Centos-7.repo -o /etc/yum.repos.d/Centos-7.repo

cat << EOF | tee /etc/yum.repos.d/Kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

cat << 'EOF' | tee /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF

cat << EOF | tee /etc/yum.repos.d/crio.repo
[crio-311-candidate]
name=added from: https://cbs.centos.org/repos/paas7-crio-311-candidate/x86_64/os/
baseurl=https://cbs.centos.org/repos/paas7-crio-311-candidate/x86_64/os/
enabled=1
gpgcheck=0
EOF

for HOST in master01 master02 master03 lb01 lb02 worker01 worker02 etcd01 etcd02 etcd03 node01; do scp /etc/yum.repos.d/*.repo $HOST:/etc/yum.repos.d/; done
Configure time synchronization: install chrony on all nodes, with node01 acting as the time server
# On all nodes: install chrony and point clients at node01
yum install -y chrony
sed -i 's@server 0.centos.pool.ntp.org iburst@server 172.22.35.180 iburst@' /etc/chrony.conf
sed -ri '/server [0-9].centos.pool.ntp.org iburst/d' /etc/chrony.conf
for HOST in master01 master02 master03 lb01 lb02 worker01 worker02 etcd01 etcd02 etcd03 node01; do scp /etc/chrony.conf $HOST:/etc/; done
# On node01 (the time server): serve local time and allow clients from the subnet
sed -i 's@#local stratum 10@local stratum 10@' /etc/chrony.conf
sed -i 's@#allow 192.168.0.0/16@allow 172.22.35.0/24@' /etc/chrony.conf
for HOST in master01 master02 master03 lb01 lb02 worker01 worker02 etcd01 etcd02 etcd03 node01; do ssh $HOST "systemctl restart chronyd"; done
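To confirm the nodes are syncing from node01, chrony can be queried on any node (a quick check, not part of the original procedure):
chronyc sources -v    # 172.22.35.180 should be listed, marked with '*' once synchronized
chronyc tracking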
Hostname resolution: since the number of hosts is small, no dedicated DNS is set up; the hosts file is used for name resolution
cat << EOF | tee /etc/hosts
127.0.0.1    localhost    localhost.localdomain
172.22.35.20    master01    master01.s4lm0x.com
172.22.35.21    master02    master02.s4lm0x.com
172.22.35.22    master03    master03.s4lm0x.com
172.22.35.23    lb01    lb01.s4lm0x.com
172.22.35.24    lb02    lb02.s4lm0x.com
172.22.35.25    worker01    worker01.s4lm0x.com
172.22.35.26    worker02    worker02.s4lm0x.com
172.22.35.27    etcd01    etcd01.s4lm0x.com
172.22.35.28    etcd02    etcd02.s4lm0x.com
172.22.35.29    etcd03    etcd03.s4lm0x.com
172.22.35.180    node01    node01.s4lm0x.com
EOF
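The hostname-based loops above are only run from master01, but if the other nodes should also be able to resolve these names, the same file can be pushed out with the pattern already used for the repo files (an optional step, not in the original write-up):
for HOST in master01 master02 master03 lb01 lb02 worker01 worker02 etcd01 etcd02 etcd03 node01; do scp /etc/hosts $HOST:/etc/; done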
The following settings apply only to the master and worker nodes.
Install the required packages
yum install -y jq psmisc socat yum-utils device-mapper-persistent-data lvm2 cri-o ipvsadm ipset sysstat conntrack libseccomp
yum update -y --exclude=kernel*
Upgrade the kernel and make the new kernel the default boot entry
export Kernel_Version=5.0.5-1
wget http://mirror.rc.usf.edu/compute ... 6_64/RPMS/kernel-ml{,-devel}-${Kernel_Version}.el7.elrepo.x86_64.rpm
yum localinstall -y kernel-ml*
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
reboot
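After the node comes back up, confirm that the new kernel is active (a quick check):
uname -r    # expect 5.0.5-1.el7.elrepo.x86_64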
Load the IPVS-related kernel modules
cat << EOF | tee /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
   if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
   fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
Set kernel parameters
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_watches = 89100
fs.may_detach_mounts = 1
fs.file-max = 52706963
fs.nr_open = 52706963
net.bridge.bridge-nf-call-arptables = 1
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom=0
net.ipv4.tcp_fastopen = 3
EOF

# The net.bridge.* keys require the br_netfilter module to be loaded first
modprobe br_netfilter
sysctl --system
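To have br_netfilter loaded automatically on every boot as well (not covered in the original steps), a systemd modules-load drop-in can be used; a minimal sketch:
cat << EOF | tee /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF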
Configure etcd
Install cfssl
wget -O /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /bin/cfssl-certinfo  https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
for cfssl in `ls /bin/cfssl*`;do chmod +x $cfssl;done;
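A quick way to confirm the binaries are installed and executable:
cfssl version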
Generate the etcd certificates. They can be created on any one node and then distributed to the three etcd nodes.
mkdir -pv $HOME/ssl && cd $HOME/ssl

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

cat << EOF | tee etcd-ca-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

cat << EOF | tee etcd-csr.json
{
    "CN": "etcd",
    "hosts": [
      "127.0.0.1",
      "172.22.35.27",
      "172.22.35.28",
      "172.22.35.29"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "etcd",
            "OU": "Etcd Security"
        }
    ]
}
EOF

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
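To verify that the server certificate carries the expected SANs before distributing it (a quick check using openssl):
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'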
Distribute the certificates to the three etcd nodes
mkdir -pv /etc/{etcd/ssl,kubernetes/pki/etcd}
cp etcd*.pem /etc/etcd/ssl
cp etcd*.pem /etc/kubernetes/pki/etcd

for HOST in etcd01 etcd02 etcd03; do scp -r /etc/etcd $HOST:/etc/; done
Install etcd on etcd01, etcd02, and etcd03, then configure each node
yum install -y etcd
Configure etcd01
cat << EOF | tee /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.22.35.27:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://172.22.35.27:2379"
ETCD_NAME="etcd1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.22.35.27:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://172.22.35.27:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.22.35.27:2380,etcd2=https://172.22.35.28:2380,etcd3=https://172.22.35.29:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
EOF

chown -R etcd.etcd /etc/etcd
systemctl restart etcd
systemctl enable etcd
Configure etcd02
cat << EOF | tee /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.22.35.28:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://172.22.35.28:2379"
ETCD_NAME="etcd2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.22.35.28:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://172.22.35.28:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.22.35.27:2380,etcd2=https://172.22.35.28:2380,etcd3=https://172.22.35.29:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
EOF

chown -R etcd.etcd /etc/etcd
systemctl restart etcd
systemctl enable etcd
Configure etcd03
cat << EOF | tee /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.22.35.29:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://172.22.35.29:2379"
ETCD_NAME="etcd3"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.22.35.29:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://172.22.35.29:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.22.35.27:2380,etcd2=https://172.22.35.28:2380,etcd3=https://172.22.35.29:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
EOF

chown -R etcd.etcd /etc/etcd
systemctl restart etcd
systemctl enable etcd
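Once all three members are up, cluster health can be checked from any etcd node with etcdctl (a sketch; the endpoints and certificate paths match the configuration above):
ETCDCTL_API=3 etcdctl \
  --endpoints="https://172.22.35.27:2379,https://172.22.35.28:2379,https://172.22.35.29:2379" \
  --cacert=/etc/etcd/ssl/etcd-ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health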
Configure keepalived and haproxy
Configure keepalived and haproxy on lb01 and lb02; lb01 is the MASTER and lb02 the BACKUP.

yum install -y keepalived haproxy
Configure keepalived on lb01
cat << EOF | tee /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
      root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id kh01
   vrrp_mcast_group4 224.0.100.100
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 0d25bedd3b13081c5ba5
    }
    virtual_ipaddress {
        172.22.35.30/16
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived
Configure keepalived on lb02
cat << EOF | tee /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
      root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id kh02
   vrrp_mcast_group4 224.0.100.100
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 0d25bedd3b13081c5ba5
    }
    virtual_ipaddress {
        172.22.35.30/16
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived
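With keepalived running on both nodes, the VIP should be held by lb01 (a quick check on lb01; the interface name matches the configuration above):
ip addr show ens192 | grep 172.22.35.30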
Configure haproxy on lb01
cat << EOF | tee /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master  172.22.35.20:6443 check maxconn 2000
    server master2 172.22.35.21:6443 check maxconn 2000
    server master3 172.22.35.22:6443 check maxconn 2000
EOF

systemctl enable haproxy
systemctl start haproxy
Configure haproxy on lb02 (identical to lb01)
cat << EOF | tee /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master  172.22.35.20:6443 check maxconn 2000
    server master2 172.22.35.21:6443 check maxconn 2000
    server master3 172.22.35.22:6443 check maxconn 2000
EOF

systemctl enable haproxy
systemctl start haproxy
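Confirm on both lb nodes that haproxy is listening on the API server port (a quick check):
ss -tnlp | grep :6443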
Install kubeadm, kubelet, kubectl, and docker
On the master nodes, install docker, kubelet, kubeadm, and kubectl, then configure docker.

kubectl is only a client-side tool and is not strictly required.
yum install -y docker-ce kubelet kubeadm kubectl

cat << EOF | tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://2n14cd7b.mirror.aliyuncs.com"],
  "live-restore": true,
  "default-shm-size": "128M",
  "bridge": "none",
  "max-concurrent-downloads": 10,
  "oom-score-adjust": -1000,
  "debug": false,
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

# Allow forwarded traffic (docker sets the iptables FORWARD policy to DROP by default)
sed -i '/^ExecStart/a \ExecStartPost=/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
# Optional: pull images through a local proxy (useful here because imageRepository is k8s.gcr.io); adjust or skip if you have direct access
sed -i '/^Restart=always/a \Environment="HTTPS_PROXY=http://127.0.0.1:8118"\nEnvironment="HTTP_PROXY=http://127.0.0.1:8118"\nEnvironment="NO_PROXY=172.29.0.0/16,127.0.0.0/8"' /usr/lib/systemd/system/docker.service

systemctl daemon-reload
systemctl enable docker kubelet
systemctl restart docker
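After the restart, confirm that docker actually picked up the systemd cgroup driver expected by kubelet (a quick check):
docker info 2>/dev/null | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd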
On the worker nodes, install docker, kubelet, and kubeadm, then configure docker
yum install -y kubelet kubeadm docker-ce
scp master01:/etc/docker/daemon.json /etc/docker/
scp master01:/usr/lib/systemd/system/docker.service /usr/lib/systemd/system/

systemctl daemon-reload
systemctl enable docker kubelet
systemctl restart docker
Deploy the master nodes
Run the initialization on master01
mkdir $HOME/manifests && cd $HOME/manifests
cat << EOF | tee kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
controlPlaneEndpoint: 172.22.35.30:6443
apiServer:
  certSANs:
  - master01
  - master02
  - master03
  - 172.22.35.20
  - 172.22.35.21
  - 172.22.35.22
  - 172.22.35.23
  - 172.22.35.24
  - 172.22.35.180
  - 172.22.35.30
etcd:
  external:
    endpoints:
    - "https://172.22.35.27:2379"
    - "https://172.22.35.28:2379"
    - "https://172.22.35.29:2379"
    caFile: /etc/kubernetes/pki/etcd/etcd-ca.pem
    certFile: /etc/kubernetes/pki/etcd/etcd.pem
    keyFile: /etc/kubernetes/pki/etcd/etcd-key.pem
networking:
  podSubnet: 10.244.0.0/16
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
imageRepository: k8s.gcr.io
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
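Before running the init, the control-plane images can optionally be pre-pulled so that registry or proxy problems surface early (kubeadm supports this directly):
kubeadm config images pull --config kubeadm-init.yaml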

kubeadm init --config kubeadm-init.yaml
After the initialization on master01 completes, output like the following is shown
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/conce ... inistration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.22.35.30:6443 --token reg6r7.98apd811ll8duznl \
    --discovery-token-ca-cert-hash sha256:edba87db4979469e002d3afcbe2ef3c39041f5391fc1f32d37a0095e22e8adce \
    --experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.22.35.30:6443 --token reg6r7.98apd811ll8duznl \
    --discovery-token-ca-cert-hash sha256:edba87db4979469e002d3afcbe2ef3c39041f5391fc1f32d37a0095e22e8adce
Follow the hints in that output step by step: run the kubectl setup commands on master01, copy the control-plane join command to the other two master nodes and run it there, and copy the worker join command to every worker node and run it there.
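One caveat: this init does not use --experimental-upload-certs, so before master02 and master03 can join as control-plane nodes, the cluster certificates generated on master01 and the external etcd client certificates must be present on them. A minimal sketch of copying them from master01 (file names follow kubeadm's defaults and the etcd certificate layout used above):
for HOST in master02 master03; do
  ssh $HOST "mkdir -p /etc/kubernetes/pki/etcd"
  # cluster CA, service-account keys and front-proxy CA created by kubeadm on master01
  scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* /etc/kubernetes/pki/front-proxy-ca.* $HOST:/etc/kubernetes/pki/
  # etcd client certificates referenced by the external etcd section of kubeadm-init.yaml
  scp /etc/kubernetes/pki/etcd/etcd-ca.pem /etc/kubernetes/pki/etcd/etcd.pem /etc/kubernetes/pki/etcd/etcd-key.pem $HOST:/etc/kubernetes/pki/etcd/
done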
On master02 and master03:
kubeadm join 172.22.35.30:6443 --token reg6r7.98apd811ll8duznl \
    --discovery-token-ca-cert-hash sha256:edba87db4979469e002d3afcbe2ef3c39041f5391fc1f32d37a0095e22e8adce \
    --experimental-control-plane
Join the worker nodes to the cluster
Run the worker join command printed by the master01 initialization on each worker node.
kubeadm join 172.22.35.30:6443 --token reg6r7.98apd811ll8duznl \
    --discovery-token-ca-cert-hash sha256:edba87db4979469e002d3afcbe2ef3c39041f5391fc1f32d37a0095e22e8adce
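After all nodes have joined, they should be listed from master01; they will stay NotReady until the network plugin is deployed in the next step:
kubectl get nodes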
Deploy the network plugin
Flannel is used as the network plugin, with DirectRouting enabled.
curl -O https://raw.githubusercontent.co ... on/kube-flannel.yml
sed -i '/Backend/a\        "DirectRouting": true,' kube-flannel.yml
kubectl apply -f kube-flannel.yml
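Once the flannel DaemonSet pods are running, the nodes should move to Ready (a quick check):
kubectl -n kube-system get pods -o wide | grep flannel
kubectl get nodes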
Test
At this point the HA Kubernetes cluster is fully deployed. It can be exercised by, for example, deploying an nginx and accessing it from outside the cluster.

kubectl create deployment nginx --image=nginx:1.14-alpine
kubectl create service nodeport nginx --tcp=80:80
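The NodePort is assigned dynamically, so look it up first and then access nginx from outside the cluster via any worker node IP (a sketch; 172.22.35.25 is worker01 from the table above):
kubectl get svc nginx    # note the port mapped to 80, e.g. 80:3xxxx/TCP
curl http://172.22.35.25:<nodeport>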
https://www.s4lm0x.com/archives/86.html
