Binary Kubernetes (k8s) Installation and Deployment


Environment Preparation

CentOS 7. Run all operations as root. For high availability an odd number of masters (three or more) is recommended; we use three masters here.
Download locations for the Kubernetes components of each version:
https://github.com/kubernetes/kubernetes/tree/v1.14.3
kubernetes:
wget https://storage.googleapis.com/kubernetes-release/release/v1.14.3/kubernetes-node-linux-amd64.tar.gz
wget https://storage.googleapis.com/kubernetes-release/release/v1.14.3/kubernetes-client-linux-amd64.tar.gz
wget https://storage.googleapis.com/kubernetes-release/release/v1.14.3/kubernetes-server-linux-amd64.tar.gz
wget https://storage.googleapis.com/kubernetes-release/release/v1.14.3/kubernetes.tar.gz
etcd:
wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
flannel:
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
cni-plugins:
wget https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz
docker:
wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz
cfssl:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
heapster:
wget https://github.com/kubernetes-retired/heapster/archive/v1.5.4.tar.gz

Environment Overview

master: kube-apiserver, kube-controller-manager, kube-scheduler, flanneld
node: kubelet, kube-proxy, flanneld
Service_CIDR: 10.254.0.0/16 — service network; unroutable before deployment, reachable inside the cluster as IP:Port afterwards
Cluster_CIDR: 172.30.0.0/16 — pod network; unroutable before deployment, routable afterwards (handled by flanneld)

hostname IP deployed software
k8s-master1 192.168.1.31 etcd+keepalived+haproxy+master
k8s-master2 192.168.1.32 etcd+keepalived+haproxy+master
k8s-master3 192.168.1.33 etcd+keepalived+haproxy+master
k8s-worker1 192.168.1.35 docker+node
VIP 192.168.1.10 VIP

Host Environment Configuration

Add the following entries to /etc/hosts and set up passwordless SSH between the masters (see the sketch below)

192.168.1.31 k8s-master1
192.168.1.32 k8s-master2
192.168.1.33 k8s-master3
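
One way to set up the passwordless login, as a minimal sketch (run on k8s-master1; each root password is entered once):

    ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa   # skip if a key pair already exists
    for host in k8s-master1 k8s-master2 k8s-master3; do
      ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
    done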

Pre-installation Configuration

  • Disable SELinux, the firewall and the swap partition

    setenforce 0
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    systemctl stop firewalld
    systemctl disable firewalld
    swapoff -a
  • Install required packages

    yum -y install ntpdate gcc git vim wget 
  • Keep time synchronized with a cron job

    */5 * * * * /usr/sbin/ntpdate ntp.api.bz >/dev/null 2>&1
  • Raise the file-descriptor and process limits

    cat /etc/security/limits.conf
    * soft nofile 65536
    * hard nofile 65536
    * soft nproc 65536
    * hard nproc 65536
    * soft  memlock  unlimited
    * hard memlock  unlimited
  • Install the IPVS packages
    yum install ipvsadm ipset sysstat conntrack libseccomp -y  
  • Load the required kernel modules and make them load at boot

    cat /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
    lsmod | grep -e ip_vs -e nf_conntrack_ipv4
  • Tune kernel parameters

    cat /etc/sysctl.d/k8s.conf
    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_keepalive_intvl = 30
    net.ipv4.tcp_keepalive_probes = 10
    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1
    net.ipv4.neigh.default.gc_stale_time = 120
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.rp_filter = 0
    net.ipv4.conf.default.arp_announce = 2
    net.ipv4.conf.lo.arp_announce = 2
    net.ipv4.conf.all.arp_announce = 2
    net.ipv4.ip_forward = 1
    net.ipv4.tcp_max_tw_buckets = 5000
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 1024
    net.ipv4.tcp_synack_retries = 2
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.netfilter.nf_conntrack_max = 2310720
    fs.inotify.max_user_watches=89100
    fs.may_detach_mounts = 1
    fs.file-max = 52706963
    fs.nr_open = 52706963
    net.bridge.bridge-nf-call-arptables = 1
    vm.swappiness = 0
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    
    sysctl --system
  • Reserve some memory (e.g. 100 MB, more if resources allow) so that SSH stays reachable when the host runs out of memory

    echo 'vm.min_free_kbytes=100000' >> /etc/sysctl.conf
    sysctl -p

Deploying Docker

  1. Install the yum repository management tools
    yum install -y yum-utils device-mapper-persistent-data lvm2
  2. Download the official docker-ce yum repository file
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  3. Install the desired docker-ce version
    yum -y install docker-ce.x86_64
  4. Configure the daemon. The kubelet started later must use the same cgroup driver as Docker (a quick check follows this list)
    mkdir -p /etc/docker
    cat /etc/docker/daemon.json
    {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
     "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "storage-opts": [
     "overlay2.override_kernel_check=true"
    ],
    "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
    }
  5. Enable the service at boot
    systemctl restart docker && systemctl enable docker && systemctl status docker
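
    An optional quick check that Docker actually picked up the systemd cgroup driver configured in step 4:
    docker info 2>/dev/null | grep -i 'cgroup driver'
    # expected: Cgroup Driver: systemd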

Deploying etcd

etcd is the key/value store that holds the entire cluster state. It is commonly used for service discovery, shared configuration and concurrency control (leader election, distributed locks and so on). Kubernetes stores all of its runtime data in etcd.

All Kubernetes components read and write resource state in etcd through the API server. If resources allow, etcd can run on dedicated machines; in that case the apiserver must be pointed at that external etcd cluster.

Installing cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*

Installing and configuring etcd

  1. Prepare the CA and etcd certificate requests

    mkdir /root/ssl && cd /root/ssl
    cat ca-config.json
    {
    "signing": {
    "default": {
     "expiry": "8760h"
    },
    "profiles": {
     "kubernetes": {
       "usages": [
           "signing",
           "key encipherment",
           "server auth",
           "client auth"
       ],
       "expiry": "8760h"
     }
    }
    }
    }
    
    cat ca-csr.json 
    {
    "CN": "kubernetes",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
     "C": "CN",
     "ST": "ShangHai",
     "L": "ShangHai",
     "O": "k8s",
     "OU": "System"
    }
    ]
    }
    
    cat etcd-csr.json 
    {
     "CN": "etcd",
     "hosts": [
       "127.0.0.1",
       "192.168.1.31",
       "192.168.1.32",
       "192.168.1.33"
     ],
     "key": {
       "algo": "rsa",
       "size": 2048
     },
     "names": [
       {
         "C": "CN",
         "ST": "ShangHai",
         "L": "ShangHai",
         "O": "k8s",
         "OU": "System"
       }
     ]
    }
  2. Generate the etcd certificates

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

    After generation the directory contains the following 9 files:

    ca-config.json
    ca.csr
    ca-csr.json
    ca-key.pem
    ca.pem
    etcd.csr
    etcd-csr.json
    etcd-key.pem
    etcd.pem

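    Optionally inspect the generated certificate with cfssl-certinfo before distributing it, to confirm the SANs and the expiry (a sketch, assuming the files above are in the current directory):
    cfssl-certinfo -cert etcd.pem
    # "sans" should list 127.0.0.1 and the three etcd node IPs;
    # "not_after" should be one year out (8760h, as set in ca-config.json)
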
  3. Copy the generated etcd.pem, etcd-key.pem and ca.pem to every etcd machine

    mkdir -p /etc/kubernetes/ssl && cp *.pem /etc/kubernetes/ssl/
    ssh -n 192.168.1.32 "mkdir -p /etc/kubernetes/ssl && exit"
    ssh -n 192.168.1.33 "mkdir -p /etc/kubernetes/ssl && exit"
    
    scp -r /etc/kubernetes/ssl/*.pem 192.168.1.32:/etc/kubernetes/ssl/
    scp -r /etc/kubernetes/ssl/*.pem 192.168.1.33:/etc/kubernetes/ssl/
  4. Install the etcd binaries

    wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
    tar -zxvf etcd-v3.3.13-linux-amd64.tar.gz
    cp etcd-v3.3.13-linux-amd64/etcd* /usr/local/bin
    scp etcd-v3.3.13-linux-amd64/etcd* 192.168.1.32:/usr/local/bin
    scp etcd-v3.3.13-linux-amd64/etcd* 192.168.1.33:/usr/local/bin
  5. Create the systemd unit files (the configuration differs on each of the three machines)

  • k8s-master1:

    cat /etc/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    Documentation=https://github.com/coreos
    
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/etcd/
    ExecStart=/usr/local/bin/etcd \
      --name k8s-master1 \
      --cert-file=/etc/kubernetes/ssl/etcd.pem \
      --key-file=/etc/kubernetes/ssl/etcd-key.pem \
      --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
      --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
      --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
      --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
      --initial-advertise-peer-urls https://192.168.1.31:2380 \
      --listen-peer-urls https://192.168.1.31:2380 \
      --listen-client-urls https://192.168.1.31:2379,http://127.0.0.1:2379 \
      --advertise-client-urls https://192.168.1.31:2379 \
      --initial-cluster-token etcd-cluster-0 \
      --initial-cluster k8s-master1=https://192.168.1.31:2380,k8s-master2=https://192.168.1.32:2380,k8s-master3=https://192.168.1.33:2380 \
      --initial-cluster-state new \
      --data-dir=/var/lib/etcd
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    
    #start the etcd service
    mkdir /var/lib/etcd
    systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service && systemctl status etcd
  • k8s-master2:

    cat /etc/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    Documentation=https://github.com/coreos
    
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/etcd/
    ExecStart=/usr/local/bin/etcd \
      --name k8s-master2 \
      --cert-file=/etc/kubernetes/ssl/etcd.pem \
      --key-file=/etc/kubernetes/ssl/etcd-key.pem \
      --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
      --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
      --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
      --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
      --initial-advertise-peer-urls https://192.168.1.32:2380 \
      --listen-peer-urls https://192.168.1.32:2380 \
      --listen-client-urls https://192.168.1.32:2379,http://127.0.0.1:2379 \
      --advertise-client-urls https://192.168.1.32:2379 \
      --initial-cluster-token etcd-cluster-0 \
      --initial-cluster k8s-master1=https://192.168.1.31:2380,k8s-master2=https://192.168.1.32:2380,k8s-master3=https://192.168.1.33:2380 \
      --initial-cluster-state new \
      --data-dir=/var/lib/etcd
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    
    #start the etcd service
    mkdir /var/lib/etcd
    systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service && systemctl status etcd
  • k8s-master3:

    cat /etc/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    Documentation=https://github.com/coreos
    
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/etcd/
    ExecStart=/usr/local/bin/etcd \
      --name k8s-master3 \
      --cert-file=/etc/kubernetes/ssl/etcd.pem \
      --key-file=/etc/kubernetes/ssl/etcd-key.pem \
      --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
      --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
      --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
      --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
      --initial-advertise-peer-urls https://192.168.1.33:2380 \
      --listen-peer-urls https://192.168.1.33:2380 \
      --listen-client-urls https://192.168.1.33:2379,http://127.0.0.1:2379 \
      --advertise-client-urls https://192.168.1.33:2379 \
      --initial-cluster-token etcd-cluster-0 \
      --initial-cluster k8s-master1=https://192.168.1.31:2380,k8s-master2=https://192.168.1.32:2380,k8s-master3=https://192.168.1.33:2380 \
      --initial-cluster-state new \
      --data-dir=/var/lib/etcd
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    
    #start the etcd service
    mkdir /var/lib/etcd
    systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service && systemctl status etcd
  6. Verify the cluster
    etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem cluster-health
    A healthy cluster returns:

    member 22a9d61e6821c4d is healthy: got healthy result from https://192.168.1.32:2379
    member 68afffba56612fd is healthy: got healthy result from https://192.168.1.31:2379
    member ff1f72bab5edb59f is healthy: got healthy result from https://192.168.1.33:2379
    cluster is healthy
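
    Since kube-apiserver will talk to etcd over the v3 API, it can also be worth checking health through the v3 endpoint (a sketch; note that the v3 etcdctl uses --cacert/--cert/--key instead of the v2 flag names):
    ETCDCTL_API=3 etcdctl \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/kubernetes/ssl/etcd.pem \
    --key=/etc/kubernetes/ssl/etcd-key.pem \
    --endpoints=https://192.168.1.31:2379,https://192.168.1.32:2379,https://192.168.1.33:2379 \
    endpoint health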

Deploying flannel

flannel must be installed on every node. Its main purpose is to let Docker containers on different hosts communicate with each other; it is the network foundation of the Kubernetes cluster.

  1. Generate a TLS certificate request; flanneld uses the resulting certificate as a client certificate when talking to etcd (it only needs to be generated once)

    cd /root/ssl
    cat flanneld-csr.json 
    {
    "CN": "flanneld",
    "hosts": [],
    "key": {
     "algo": "rsa",
     "size": 2048
    },
    "names": [
     {
       "C": "CN",
       "ST": "ShangHai",
       "L": "ShangHai",
       "O": "k8s",
       "OU": "System"
     }
    ]
    }
  2. Generate the certificate and private key

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
    
    #the following files are produced
    flanneld.csr
    flanneld-csr.json
    flanneld-key.pem
    flanneld.pem
    #then copy the certificates to every node
    cp flanneld*.pem /etc/kubernetes/ssl
    scp flanneld*.pem 192.168.1.32:/etc/kubernetes/ssl
    scp flanneld*.pem 192.168.1.33:/etc/kubernetes/ssl
  3. Install flannel

    wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
    tar -zvxf flannel-v0.11.0-linux-amd64.tar.gz
    cp flanneld mk-docker-opts.sh /usr/local/bin
    scp flanneld mk-docker-opts.sh 192.168.1.32:/usr/local/bin
    scp flanneld mk-docker-opts.sh 192.168.1.33:/usr/local/bin
  4. Write the cluster Pod network configuration into etcd; run this once on any member of the etcd cluster

    etcdctl \
    --endpoints=https://192.168.1.31:2379,https://192.168.1.32:2379,https://192.168.1.33:2379 \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/flanneld.pem \
    --key-file=/etc/kubernetes/ssl/flanneld-key.pem \
    mk /kubernetes/network/config '{"Network":"172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'
    #returned value
    {"Network":"172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}
    
    #verify
    #list the keys stored in etcd
    etcdctl \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/flanneld.pem \
    --key-file=/etc/kubernetes/ssl/flanneld-key.pem ls --recursive
    
    #查看键值存储
    etcdctl \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/flanneld.pem \
    --key-file=/etc/kubernetes/ssl/flanneld-key.pem get /kubernetes/network/config
    
    #list the allocated pod subnets (nothing is allocated yet; check again after flanneld has started)
    etcdctl \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/flanneld.pem \
    --key-file=/etc/kubernetes/ssl/flanneld-key.pem ls  /kubernetes/network/subnets
  5. Create the flannel.service unit file

    cat /etc/systemd/system/flannel.service
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network.target
    After=network-online.target
    Wants=network-online.target
    After=etcd.service
    Before=docker.service
    
    [Service]
    Type=notify
    ExecStart=/usr/local/bin/flanneld \
     -etcd-cafile=/etc/kubernetes/ssl/ca.pem \
     -etcd-certfile=/etc/kubernetes/ssl/flanneld.pem \
     -etcd-keyfile=/etc/kubernetes/ssl/flanneld-key.pem \
     -etcd-endpoints=https://192.168.1.31:2379,https://192.168.1.32:2379,https://192.168.1.33:2379 \
     -etcd-prefix=/kubernetes/network
    ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    RequiredBy=docker.service
  6. Start the flannel service

    systemctl daemon-reload && systemctl enable flannel && systemctl start flannel && systemctl status flannel
  7. Verify the flannel service

    cat /run/flannel/docker
    #/run/flannel/docker holds the subnet options flannel generated for docker, for example:
    DOCKER_OPT_BIP="--bip=172.30.10.1/24"
    DOCKER_OPT_IPMASQ="--ip-masq=true"
    DOCKER_OPT_MTU="--mtu=1450"
    DOCKER_NETWORK_OPTIONS=" --bip=172.30.10.1/24 --ip-masq=true --mtu=1450"
    
    ip add | grep flannel 
    4: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default 
       inet 172.30.10.0/32 scope global flannel.1
    
    cat /run/flannel/subnet.env
    FLANNEL_NETWORK=172.30.0.0/16
    FLANNEL_SUBNET=172.30.10.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=false
  8. Configure docker to use the flannel network

    vim /etc/systemd/system/multi-user.target.wants/docker.service
    EnvironmentFile=/run/flannel/docker
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS
  9. Restart docker; the allocated pod subnet now appears on the docker0 bridge

    systemctl daemon-reload && systemctl restart docker && systemctl status docker
    
    ip add | grep docker
    3: docker0:  mtu 1500 qdisc noqueue state DOWN group default 
       inet 172.30.10.1/24 brd 172.30.10.255 scope global docker0
  10. Install the CNI plugins for flannel

    wget https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz
    mkdir /opt/cni
    tar -zxvf cni-plugins-linux-amd64-v0.8.1.tgz -C /opt/cni
    mkdir -p /etc/cni/net.d
    cat /etc/cni/net.d/10-default.conf 
    {
       "name": "flannel",
       "type": "flannel",
       "delegate": {
           "bridge": "docker0",
           "isDefaultGateway": true,
           "mtu": 1400
       }
    }
    
    cp /opt/cni/* /usr/local/bin
    scp /opt/cni/* 192.168.1.32:/usr/local/bin
    scp /opt/cni/* 192.168.1.33:/usr/local/bin
    ssh -n 192.168.1.32 "mkdir -p /etc/cni/net.d && exit"
    ssh -n 192.168.1.33 "mkdir -p /etc/cni/net.d && exit"
    scp /etc/cni/net.d/10-default.conf 192.168.1.32:/etc/cni/net.d/
    scp /etc/cni/net.d/10-default.conf 192.168.1.33:/etc/cni/net.d/

Deploying keepalived + haproxy

keepalived provides the VIP through which kube-apiserver is reached; haproxy listens on the VIP and load-balances across all kube-apiserver instances with health checking.

This document reuses the three master machines. The port haproxy listens on (8443) must differ from the kube-apiserver port (6443) to avoid a conflict.

While running, keepalived periodically checks the local haproxy process; if haproxy is detected as unhealthy, a new master is elected and the VIP floats to it, keeping the VIP highly available.
All clients (kubectl, apiserver, controller-manager, scheduler and so on) access kube-apiserver through the VIP on the haproxy port 8443.

Deploying haproxy

  1. Install and configure haproxy

    yum install -y haproxy
    
    cat /etc/haproxy/haproxy.cfg
    global
       log         127.0.0.1 local2
       chroot      /var/lib/haproxy
       pidfile     /var/run/haproxy.pid
       maxconn     4000
       user        haproxy
       group       haproxy
       daemon
    
    defaults
       mode                    tcp
       log                     global
       retries                 3
       timeout connect         10s
       timeout client          1m
       timeout server          1m
    
    listen  admin_stats
       bind 0.0.0.0:9090
       mode http
       log 127.0.0.1 local0 err
       stats refresh 30s
       stats uri /status
       stats realm welcome login\ Haproxy
       stats auth admin:123456
       stats hide-version
       stats admin if TRUE
    
    frontend kubernetes
       bind *:8443
       mode tcp
       default_backend kubernetes-master
    
    backend kubernetes-master
       balance roundrobin
       server k8s-master1 192.168.1.31:6443 check maxconn 2000
       server k8s-master2 192.168.1.32:6443 check maxconn 2000
       server k8s-master3 192.168.1.33:6443 check maxconn 2000
  2. Start haproxy

    systemctl enable haproxy && systemctl start haproxy && systemctl status haproxy
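
    A quick sanity check that haproxy is listening (the backends stay DOWN until kube-apiserver is deployed later); the stats page is at http://<node-ip>:9090/status with the admin/123456 credentials configured above:
    ss -tnlp | grep -E ':8443|:9090'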

Deploying keepalived

  1. Install keepalived
    yum install -y keepalived
  2. keepalived configuration files. Note that the network interface name may differ between hosts; the VIP is 192.168.1.10
  • k8s-master1:

    cat  /etc/keepalived/keepalived.conf
    global_defs {
       router_id LVS_k8s
    }
    
    vrrp_script CheckK8sMaster {
        script "curl -k https://192.168.1.10:8443"
        interval 3
        timeout 9
        fall 2
        rise 2
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens160
        virtual_router_id 100
        priority 100
        advert_int 1
        mcast_src_ip 192.168.1.31
        nopreempt
        authentication {
            auth_type PASS
            auth_pass fana123
        }
        unicast_peer {
            192.168.1.32
            192.168.1.33
        }
        virtual_ipaddress {
            192.168.1.10/24
        }
        track_script {
            CheckK8sMaster
        }
    }
  • k8s-master2:

    cat /etc/keepalived/keepalived.conf
    global_defs {
       router_id LVS_k8s
    }
    
    vrrp_script CheckK8sMaster {
        script "curl -k https://192.168.1.10:8443"
        interval 3
        timeout 9
        fall 2
        rise 2
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface ens32
        virtual_router_id 100
        priority 90
        advert_int 1
        mcast_src_ip 192.168.1.32
        nopreempt
        authentication {
            auth_type PASS
            auth_pass fana123
        }
        unicast_peer {
            192.168.1.31
            192.168.1.33
        }
        virtual_ipaddress {
            192.168.1.10/24
        }
        track_script {
            CheckK8sMaster
        }
    }
  • k8s-master3:

    cat  /etc/keepalived/keepalived.conf
    global_defs {
       router_id LVS_k8s
    }
    
    vrrp_script CheckK8sMaster {
        script "curl -k https://192.168.1.10:8443"
        interval 3
        timeout 9
        fall 2
        rise 2
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface ens160
        virtual_router_id 100
        priority 80
        advert_int 1
        mcast_src_ip 192.168.1.33
        nopreempt
        authentication {
            auth_type PASS
            auth_pass fana123
        }
        unicast_peer {
            192.168.1.31
            192.168.1.32
        }
        virtual_ipaddress {
            192.168.1.10/24
        }
        track_script {
            CheckK8sMaster
        }
    }
  3. Start keepalived
    systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived
  4. Check the VIP on the three machines (only one of them holds it); a failover test is sketched below
    ip addr |grep 1.10
     inet 192.168.1.10/24 scope global secondary ens160
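
    A minimal failover test sketch: stop keepalived on the machine that currently holds the VIP and watch it move to the backup with the next-highest priority; whether it moves back after a restart depends on the preemption settings above.
    # on the current VIP holder (e.g. k8s-master1)
    systemctl stop keepalived
    # on k8s-master2 the VIP should appear within a few seconds
    ip addr | grep 192.168.1.10
    # restore the stopped instance afterwards
    systemctl start keepalived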

Deploying the master components

kube-scheduler, kube-controller-manager and kube-apiserver are tightly coupled. Only one instance of kube-scheduler and of kube-controller-manager may be active at a time; when several instances run, a leader is chosen by election.

Deploying the kubectl command-line tool

  1. Download and distribute the binaries

    wget https://storage.googleapis.com/kubernetes-release/release/v1.14.3/kubernetes-server-linux-amd64.tar.gz
    tar -zxvf kubernetes-server-linux-amd64.tar.gz
    cd kubernetes/server/bin
    cp kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler /usr/local/bin
    scp -r kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler k8s-master2:/usr/local/bin
    scp -r kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler k8s-master3:/usr/local/bin
  2. Create the admin certificate request

    cd /root/ssl
    cat admin-csr.json 
    {
    "CN": "admin",
    "hosts": [],
    "key": {
     "algo": "rsa",
     "size": 2048
    },
    "names": [
     {
       "C": "CN",
       "ST": "ShangHai",
       "L": "ShangHai",
       "O": "system:masters",
       "OU": "System"
     }
    ]
    }
  3. Generate the certificate and private key

    cfssl gencert -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes admin-csr.json | cfssljson -bare admin
  4. Create the $HOME/.kube/config file: set the cluster parameters

    kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://192.168.1.10:8443 \
    --kubeconfig=kubectl.kubeconfig
  5. Set the client credentials

    kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=kubectl.kubeconfig
  6. Set the context

    kubectl config set-context kubernetes \
    --cluster=kubernetes \
    --user=admin \
    --kubeconfig=kubectl.kubeconfig
  7. Use the context as default

    kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
  8. Distribute the kubectl.kubeconfig file and the admin certificates

    cp kubectl.kubeconfig ~/.kube/config
    ssh -n 192.168.1.32 "mkdir -p /root/.kube && exit"
    ssh -n 192.168.1.33 "mkdir -p /root/.kube && exit"
    scp kubectl.kubeconfig 192.168.1.32:/root/.kube/config
    scp kubectl.kubeconfig 192.168.1.33:/root/.kube/config
    
    cp admin*.pem /etc/kubernetes/ssl/
    scp admin*.pem 192.168.1.32:/etc/kubernetes/ssl/
    scp admin*.pem 192.168.1.33:/etc/kubernetes/ssl/
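
    The generated kubeconfig can be inspected offline before the apiserver exists (an optional check):
    kubectl config view --kubeconfig=kubectl.kubeconfig
    # it should show the cluster pointing at https://192.168.1.10:8443, the admin user,
    # and "kubernetes" as the current context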

Deploying kube-apiserver

  1. Create the certificate request. The hosts field lists the IPs and domain names authorized to use the certificate: the VIP, the apiserver node IPs, and the kubernetes service IP and DNS names

    cd /root/ssl
    cat kubernetes-csr.json
    {
    "CN": "kubernetes",
    "hosts": [
     "127.0.0.1",
     "192.168.1.31",
     "192.168.1.32",
     "192.168.1.33",
     "192.168.1.10",
     "10.254.0.1",
     "kubernetes",
     "kubernetes.default",
     "kubernetes.default.svc",
     "kubernetes.default.svc.cluster",
     "kubernetes.default.svc.cluster.local"
    ],
    "key": {
     "algo": "rsa",
     "size": 2048
    },
    "names": [
     {
       "C": "CN",
       "ST": "ShangHai",
       "L": "ShangHai",
       "O": "k8s",
       "OU": "System"
     }
    ]
    }
  2. Generate the certificate and private key

    cfssl gencert -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
  3. Copy the certificates to the other master nodes

    cp kubernetes*.pem /etc/kubernetes/ssl/
    scp kubernetes*.pem 192.168.1.32:/etc/kubernetes/ssl/
    scp kubernetes*.pem 192.168.1.33:/etc/kubernetes/ssl/
  4. Create the encryption configuration and the bootstrap token file used by kube-apiserver

    cat encryption-config.yaml
    kind: EncryptionConfig
    apiVersion: v1
    resources:
     - resources:
         - secrets
       providers:
         - aescbc:
             keys:
               - name: key1
                 secret: $(head -c 32 /dev/urandom | base64)
         - identity: {}
    
    cat  bootstrap-token.csv
    $(head -c 32 /dev/urandom | base64),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  5. Copy both files to the other master nodes

    cp encryption-config.yaml bootstrap-token.csv /etc/kubernetes/ssl
    scp encryption-config.yaml bootstrap-token.csv 192.168.1.32:/etc/kubernetes/ssl
    scp encryption-config.yaml bootstrap-token.csv 192.168.1.33:/etc/kubernetes/ssl
  6. Create the kube-apiserver.service unit file

    cat /etc/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
     --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
     --anonymous-auth=false \
     --experimental-encryption-provider-config=/etc/kubernetes/ssl/encryption-config.yaml \
     --advertise-address=0.0.0.0 \
     --bind-address=0.0.0.0 \
     --insecure-bind-address=127.0.0.1 \
     --secure-port=6443 \
     --insecure-port=0 \
     --authorization-mode=Node,RBAC \
     --runtime-config=api/all \
     --enable-bootstrap-token-auth \
     --service-cluster-ip-range=10.254.0.0/16 \
     --service-node-port-range=30000-32700 \
     --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
     --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
     --client-ca-file=/etc/kubernetes/ssl/ca.pem \
     --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
     --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
     --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
     --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
     --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
     --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
     --etcd-servers=https://192.168.1.31:2379,https://192.168.1.32:2379,https://192.168.1.33:2379 \
     --enable-swagger-ui=true \
     --allow-privileged=true \
     --apiserver-count=3 \
     --audit-log-maxage=30 \
     --audit-log-maxbackup=3 \
     --audit-log-maxsize=100 \
     --audit-log-path=/var/log/kubernetes/kube-apiserver-audit.log \
     --event-ttl=1h \
     --alsologtostderr=true \
     --logtostderr=false \
     --log-dir=/var/log/kubernetes \
     --v=2
    Restart=on-failure
    RestartSec=5
    Type=notify
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    
    mkdir -p /var/log/kubernetes
    ssh -n 192.168.1.32 "mkdir -p /var/log/kubernetes && exit"
    ssh -n 192.168.1.33 "mkdir -p /var/log/kubernetes && exit"
    scp /etc/systemd/system/kube-apiserver.service 192.168.1.32:/etc/systemd/system/
    scp /etc/systemd/system/kube-apiserver.service 192.168.1.33:/etc/systemd/system/
    # give --bind-address and --insecure-bind-address fixed IPv4 addresses, otherwise the server binds on IPv6 and controller-manager keeps reporting errors
  7. Start the service

    systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl status kube-apiserver
  8. Grant the kubernetes certificate access to the kubelet API. When kubectl exec, run, logs and similar commands are executed, the apiserver forwards the request to the kubelet, so an RBAC rule is needed that authorizes the apiserver to call the kubelet API.

    kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
    #the predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs
    kubectl describe clusterrole system:kubelet-api-admin
  9. Check the apiserver and the cluster state (a check of the encryption-at-rest configuration follows)

    netstat -tnlp|grep 6443
    tcp6       0      0 :::6443                 :::*                    LISTEN      23462/kube-apiserve 
    
    kubectl cluster-info
    Kubernetes master is running at https://192.168.1.10:8443
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    
    kubectl get all --all-namespaces
    NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    default     service/kubernetes   ClusterIP   10.254.0.1           443/TCP   6m44s
    
    kubectl get componentstatuses
    NAME                 STATUS      MESSAGE                                                                                     ERROR
    scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
    controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
    etcd-0               Healthy     {"health":"true"}                                                                           
    etcd-1               Healthy     {"health":"true"}                                                                           
    etcd-2               Healthy     {"health":"true"}
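
    Optionally verify that the encryption configuration from step 4 is in effect by creating a secret and reading it back straight from etcd; the stored value should begin with the k8s:enc:aescbc:v1:key1 prefix instead of readable JSON (a sketch using the v3 etcdctl API; the secret name enc-test is arbitrary):
    kubectl create secret generic enc-test -n default --from-literal=foo=bar
    ETCDCTL_API=3 etcdctl \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/kubernetes/ssl/etcd.pem \
    --key=/etc/kubernetes/ssl/etcd-key.pem \
    --endpoints=https://192.168.1.31:2379 \
    get /registry/secrets/default/enc-test | hexdump -C | head
    kubectl delete secret enc-test -n default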

Deploying kube-controller-manager

The cluster runs three instances. After start-up one of them is elected leader and the others block; when the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available.

  1. Create the certificate request

    cd /root/ssl
    cat kube-controller-manager-csr.json 
    {
     "CN": "system:kube-controller-manager",
     "key": {
         "algo": "rsa",
         "size": 2048
     },
     "hosts": [
       "127.0.0.1",
       "192.168.1.31",
       "192.168.1.32",
       "192.168.1.33"
     ],
     "names": [
       {
         "C": "CN",
         "ST": "ShangHai",
         "L": "ShangHai",
         "O": "system:kube-controller-manager",
         "OU": "System"
       }
     ]
    }
  2. Generate the certificate

    cfssl gencert -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
  3. Copy the certificates to the other master nodes

    cp kube-controller-manager*.pem /etc/kubernetes/ssl/
    scp kube-controller-manager*.pem 192.168.1.32:/etc/kubernetes/ssl/
    scp kube-controller-manager*.pem 192.168.1.33:/etc/kubernetes/ssl/
  4. Create the kubeconfig file

    kubectl config set-cluster kubernetes \
     --certificate-authority=ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.10:8443 \
     --kubeconfig=kube-controller-manager.kubeconfig
    
    kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=kube-controller-manager.pem \
     --client-key=kube-controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=kube-controller-manager.kubeconfig
    
    kubectl config set-context system:kube-controller-manager \
     --cluster=kubernetes \
     --user=system:kube-controller-manager \
     --kubeconfig=kube-controller-manager.kubeconfig
    
    kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
  5. Copy kube-controller-manager.kubeconfig to the other master nodes

    cp kube-controller-manager.kubeconfig /etc/kubernetes/ssl/
    scp kube-controller-manager.kubeconfig 192.168.1.32:/etc/kubernetes/ssl/
    scp kube-controller-manager.kubeconfig 192.168.1.33:/etc/kubernetes/ssl/
  6. Create the kube-controller-manager.service unit file

    cat /etc/systemd/system/kube-controller-manager.service 
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    
    [Service]
    ExecStart=/usr/local/bin/kube-controller-manager \
     --address=127.0.0.1 \
     --master=https://192.168.1.10:8443 \
     --kubeconfig=/etc/kubernetes/ssl/kube-controller-manager.kubeconfig \
     --allocate-node-cidrs=true \
     --authentication-kubeconfig=/etc/kubernetes/ssl/kube-controller-manager.kubeconfig \
     --service-cluster-ip-range=10.254.0.0/16 \
     --cluster-cidr=172.30.0.0/16 \
     --cluster-name=kubernetes \
     --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
     --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
     --experimental-cluster-signing-duration=8760h \
     --leader-elect=true \
     --feature-gates=RotateKubeletServerCertificate=true \
     --controllers=*,bootstrapsigner,tokencleaner \
     --horizontal-pod-autoscaler-use-rest-clients=true \
     --horizontal-pod-autoscaler-sync-period=10s \
     --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
     --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
     --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
     --root-ca-file=/etc/kubernetes/ssl/ca.pem \
     --use-service-account-credentials=true \
     --alsologtostderr=true \
     --logtostderr=false \
     --log-dir=/var/log/kubernetes \
     --v=2
    Restart=on-failure
    RestartSec=5
    
    [Install]
    WantedBy=multi-user.target
  7. Copy the unit file to the other master nodes, then start the service

    scp /etc/systemd/system/kube-controller-manager.service 192.168.1.32:/etc/systemd/system/
    scp /etc/systemd/system/kube-controller-manager.service 192.168.1.33:/etc/systemd/system/
    systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager
  8. Check the service

    netstat -tnlp|grep kube-controll
    tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      24125/kube-controll 
    tcp6       0      0 :::10257                :::*                    LISTEN      24125/kube-controll
    kubectl get cs
    NAME                 STATUS      MESSAGE                                                                                     ERROR
    scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
    controller-manager   Healthy     ok                                                                                          
    etcd-0               Healthy     {"health":"true"}                                                                           
    etcd-1               Healthy     {"health":"true"}                                                                           
    etcd-2               Healthy     {"health":"true"}
    
    Check which machine holds the leader lease; here k8s-master1 has been elected leader
    kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
     annotations:
       control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master1_836ddc44-c586-11e9-94e1-000c29178d85","leaseDurationSeconds":15,"acquireTime":"2019-08-23T09:15:13Z","renewTime":"2019-08-23T09:16:49Z","leaderTransitions":0}'
     creationTimestamp: "2019-08-23T09:15:13Z"
     name: kube-controller-manager
     namespace: kube-system
     resourceVersion: "654"
     selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
     uid: 8371cd23-c586-11e9-b643-000c29327412

Deploying kube-scheduler

The cluster runs three instances. After start-up one of them is elected leader and the others block; when the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available.

  1. Create the certificate request

    cd /root/ssl
    cat kube-scheduler-csr.json 
    {
     "CN": "system:kube-scheduler",
     "hosts": [
       "127.0.0.1",
       "192.168.1.31",
       "192.168.1.32",
       "192.168.1.33"
     ],
     "key": {
         "algo": "rsa",
         "size": 2048
     },
     "names": [
       {
         "C": "CN",
         "ST": "ShangHai",
         "L": "ShangHai",
         "O": "system:kube-scheduler",
         "OU": "System"
       }
     ]
    }
  2. Generate the certificate

    cfssl gencert -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
  3. Create the kube-scheduler.kubeconfig file

    kubectl config set-cluster kubernetes \
     --certificate-authority=ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.10:8443 \
     --kubeconfig=kube-scheduler.kubeconfig
    
    kubectl config set-credentials system:kube-scheduler \
     --client-certificate=kube-scheduler.pem \
     --client-key=kube-scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=kube-scheduler.kubeconfig
    
    kubectl config set-context system:kube-scheduler \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=kube-scheduler.kubeconfig
    
    kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
  4. Copy the kubeconfig and certificates to the other master nodes

    cp kube-scheduler.kubeconfig kube-scheduler*.pem /etc/kubernetes/ssl/
    scp kube-scheduler.kubeconfig kube-scheduler*.pem 192.168.1.32:/etc/kubernetes/ssl/
    scp kube-scheduler.kubeconfig kube-scheduler*.pem 192.168.1.33:/etc/kubernetes/ssl/
  5. Create the kube-scheduler.service unit file

    cat /etc/systemd/system/kube-scheduler.service 
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    
    [Service]
    ExecStart=/usr/local/bin/kube-scheduler \
     --address=127.0.0.1 \
     --master=https://192.168.1.10:8443 \
     --kubeconfig=/etc/kubernetes/ssl/kube-scheduler.kubeconfig \
     --leader-elect=true \
     --alsologtostderr=true \
     --logtostderr=false \
     --log-dir=/var/log/kubernetes \
     --v=2
    Restart=on-failure
    RestartSec=5
    
    [Install]
    WantedBy=multi-user.target
  6. Copy kube-scheduler.service to the other master nodes, then start the service

    scp /etc/systemd/system/kube-scheduler.service 192.168.1.32:/etc/systemd/system
    scp /etc/systemd/system/kube-scheduler.service 192.168.1.33:/etc/systemd/system
    systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler
  7. Check the service

    netstat -lnpt|grep kube-sched
    tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      24760/kube-schedule 
    tcp6       0      0 :::10259                :::*                    LISTEN      24760/kube-schedule 
    
    kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok                  
    scheduler            Healthy   ok                  
    etcd-2               Healthy   {"health":"true"}   
    etcd-0               Healthy   {"health":"true"}   
    etcd-1               Healthy   {"health":"true"} 
    
    kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
     annotations:
       control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master3_74560974-c588-11e9-b994-000c297ea248","leaseDurationSeconds":15,"acquireTime":"2019-08-23T09:29:07Z","renewTime":"2019-08-23T09:30:38Z","leaderTransitions":0}'
     creationTimestamp: "2019-08-23T09:29:07Z"
     name: kube-scheduler
     namespace: kube-system
     resourceVersion: "1365"
     selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
     uid: 74ec81d5-c588-11e9-b643-000c29327412

Check on every master node that the components are healthy

kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

Deploying the worker node

Worker nodes run docker, flannel, kubelet and kube-proxy.

First prepare the CentOS environment on the worker: complete the pre-installation configuration and install Docker and flannel.

  1. Install flanneld
    Copy the certificates from a master to the worker node, then install and start flanneld:
    ssh -n 192.168.1.35 "mkdir -p /etc/kubernetes/ssl && exit"
    scp ca.pem 192.168.1.35:/etc/kubernetes/ssl
    scp flanneld*.pem 192.168.1.35:/etc/kubernetes/ssl
    Afterwards follow the "Install flannel" step from the flannel deployment section on the worker node.

Deploying kubelet

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers and executes interactive commands such as exec, run and logs. On start-up kubelet registers the node with kube-apiserver, and the built-in cAdvisor collects and reports the node's resource usage.

  1. Download and unpack the node package and copy the binaries (on the worker node)
    wget https://storage.googleapis.com/kubernetes-release/release/v1.14.3/kubernetes-node-linux-amd64.tar.gz
    tar -zxvf kubernetes-node-linux-amd64.tar.gz
    cp kubectl kubelet kube-proxy /usr/local/bin
  2. Create the kubelet-bootstrap kubeconfig files. This is done three times, once each for k8s-master1, k8s-master2 and k8s-master3; run all commands on master1
  • k8s-master1:

    #create a token
    cd /root/ssl
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:k8s-master1 \
      --kubeconfig ~/.kube/config)
    
    #set the cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=ca.pem \
      --embed-certs=true \
      --server=https://192.168.1.10:8443 \
      --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig
    
    #set the client credentials
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig
    
    #set the context
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig
    
    #use the context as default
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig
  • k8s-master2:

    #create a token
    cd /root/ssl
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:k8s-master2 \
      --kubeconfig ~/.kube/config)
    
    #set the cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=ca.pem \
      --embed-certs=true \
      --server=https://192.168.1.10:8443 \
      --kubeconfig=kubelet-bootstrap-k8s-master2.kubeconfig
    
    #set the client credentials
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-k8s-master2.kubeconfig
    
    #set the context
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-k8s-master2.kubeconfig
    
    #use the context as default
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-master2.kubeconfig
  • k8s-master3:

    #create a token
    cd /root/ssl
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:k8s-master3 \
      --kubeconfig ~/.kube/config)
    
    #set the cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=ca.pem \
      --embed-certs=true \
      --server=https://192.168.1.10:8443 \
      --kubeconfig=kubelet-bootstrap-k8s-master3.kubeconfig
    
    #set the client credentials
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-k8s-master3.kubeconfig
    
    #set the context
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-k8s-master3.kubeconfig
    
    #use the context as default
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-master3.kubeconfig
  3. List the tokens kubeadm created for each node (can be viewed on any master)

    kubeadm token list --kubeconfig ~/.kube/config
    #output
    TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
    3exm03.8530h7t1j1v1sfl5   22h       2019-08-28T18:00:05+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master1
    6p1ewd.om5m45f26imnd7an   22h       2019-08-28T18:00:05+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master2
    l9yyz0.6g13y8ffsdab9lo5   22h       2019-08-28T18:00:05+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master3
    
    # to delete a token
    kubeadm token --kubeconfig ~/.kube/config delete l9yyz0.6g13y8ffsdab9lo5
    # list the secrets associated with the tokens
    kubectl get secrets  -n kube-system
  4. Copy the bootstrap kubeconfig file to each node machine (with a single worker each scp below overwrites the previous file, so only the last one remains)

    ssh -n 192.168.1.35 "mkdir -p /etc/kubernetes/ssl && exit"
    scp kubelet-bootstrap-k8s-master1.kubeconfig 192.168.1.35:/etc/kubernetes/ssl/kubelet-bootstrap.kubeconfig
    scp kubelet-bootstrap-k8s-master2.kubeconfig 192.168.1.35:/etc/kubernetes/ssl/kubelet-bootstrap.kubeconfig
    scp kubelet-bootstrap-k8s-master3.kubeconfig 192.168.1.35:/etc/kubernetes/ssl/kubelet-bootstrap.kubeconfig
  5. Create the kubelet configuration file

    cd /root/ssl
    cat kubelet.config.json 
    {
    "kind": "KubeletConfiguration",
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "authentication": {
     "x509": {
       "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
     },
     "webhook": {
       "enabled": true,
       "cacheTTL": "2m0s"
     },
     "anonymous": {
       "enabled": false
     }
    },
    "authorization": {
     "mode": "Webhook",
     "webhook": {
       "cacheAuthorizedTTL": "5m0s",
       "cacheUnauthorizedTTL": "30s"
     }
    },
    "address": "192.168.1.35",
    "port": 10250,
    "readOnlyPort": 0,
    "cgroupDriver": "cgroupfs",
    "hairpinMode": "promiscuous-bridge",
    "serializeImagePulls": false,
    "featureGates": {
     "RotateKubeletClientCertificate": true,
     "RotateKubeletServerCertificate": true
    },
    "clusterDomain": "cluster.local",
    "clusterDNS": ["10.254.0.2"]
    }
  6. Copy it to the worker host; note that address should be set to the local host's IP

    cp kubelet.config.json /etc/kubernetes/ssl
    scp kubelet.config.json 192.168.1.35:/etc/kubernetes/ssl
  7. Create the kubelet.service unit file (on the worker node)

    mkdir -p /var/log/kubernetes && mkdir -p /var/lib/kubelet
    cat  /etc/systemd/system/kubelet.service 
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=/var/lib/kubelet
    ExecStart=/usr/local/bin/kubelet \
     --bootstrap-kubeconfig=/etc/kubernetes/ssl/kubelet-bootstrap.kubeconfig \
     --cert-dir=/etc/kubernetes/ssl \
     --network-plugin=cni \
     --cni-conf-dir=/etc/cni/net.d \
     --cni-bin-dir=/usr/local/bin/ \
     --fail-swap-on=false \
     --kubeconfig=/etc/kubernetes/ssl/kubelet-bootstrap.kubeconfig \
     --config=/etc/kubernetes/ssl/kubelet.config.json \
     --hostname-override=192.168.1.35 \
     --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 \
     --allow-privileged=true \
     --alsologtostderr=true \
     --logtostderr=false \
     --cgroup-driver=systemd \
     --log-dir=/var/log/kubernetes \
     --v=2
    Restart=on-failure
    RestartSec=5
    
    [Install]
    WantedBy=multi-user.target

Bootstrap token auth and authorization: the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then is kubelet allowed to create certificate signing requests.

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
  8. Start the kubelet service

    systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet && systemctl status kubelet
  9. Check the service

    netstat -lantp|grep kubelet
    # on first start kubelet sends a TLS certificate signing request to kube-apiserver; the node only joins the cluster after the request has been approved. List the pending CSRs:
    kubectl get csr
    NAME                                                   AGE   REQUESTOR                 CONDITION
    node-csr-F0yftUyMpWGyDFRPUoGfF5XgbtPFEfyakLidUu9GY6c   99m   system:bootstrap:balnwx   Pending
  10. Approve the kubelet CSR (choose either the manual or the automatic way)

  11. Manually approve the CSR (the automatic way is recommended)

    kubectl certificate approve node-csr-YNCI2r5QgwPTj4JR7X0VswSR0klbgG2rZ6R7rb_NIcs
    #output
    certificatesigningrequest.certificates.k8s.io/node-csr-YNCI2r5QgwPTj4JR7X0VswSR0klbgG2rZ6R7rb_NIcs approved
    #inspect the result
    kubectl describe csr node-csr-YNCI2r5QgwPTj4JR7X0VswSR0klbgG2rZ6R7rb_NIcs
    Name:               node-csr-YNCI2r5QgwPTj4JR7X0VswSR0klbgG2rZ6R7rb_NIcs
    Labels:             
    Annotations:        
    CreationTimestamp:  Tue, 27 Aug 2019 17:23:29 +0800
    Requesting User:    system:bootstrap:balnwx
    Status:             Approved,Issued
    Subject:
           Common Name:    system:node:192.168.1.35
           Serial Number:  
           Organization:   system:nodes
    Events:  
  12. Automatically approve CSRs

    #create ClusterRoleBindings that automatically approve client CSRs and allow renewal of client and server certificates
    cd /root/ssl
    cat csr-crb.yaml 
    # Approve all CSRs for the group "system:bootstrappers"
     kind: ClusterRoleBinding
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
       name: auto-approve-csrs-for-group
     subjects:
     - kind: Group
       name: system:bootstrappers
       apiGroup: rbac.authorization.k8s.io
     roleRef:
       kind: ClusterRole
       name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
       apiGroup: rbac.authorization.k8s.io
    ---
     # To let a node of the group "system:bootstrappers" renew its own credentials
     kind: ClusterRoleBinding
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
       name: node-client-cert-renewal
     subjects:
     - kind: Group
       name: system:bootstrappers
       apiGroup: rbac.authorization.k8s.io
     roleRef:
       kind: ClusterRole
       name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
       apiGroup: rbac.authorization.k8s.io
    ---
    # A ClusterRole which instructs the CSR approver to approve a node requesting a
    # serving cert matching its client cert.
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: approve-node-server-renewal-csr
    rules:
    - apiGroups: ["certificates.k8s.io"]
      resources: ["certificatesigningrequests/selfnodeserver"]
      verbs: ["create"]
    ---
     # To let a node of the group "system:nodes" renew its own server credentials
     kind: ClusterRoleBinding
     apiVersion: rbac.authorization.k8s.io/v1
     metadata:
       name: node-server-cert-renewal
     subjects:
     - kind: Group
       name: system:nodes
       apiGroup: rbac.authorization.k8s.io
     roleRef:
       kind: ClusterRole
       name: approve-node-server-renewal-csr
       apiGroup: rbac.authorization.k8s.io
    
    #copy to the other master nodes
    cp csr-crb.yaml /etc/kubernetes/ssl
    scp csr-crb.yaml 192.168.1.32:/etc/kubernetes/ssl
    scp csr-crb.yaml 192.168.1.33:/etc/kubernetes/ssl
    #apply the configuration
    kubectl apply -f /etc/kubernetes/ssl/csr-crb.yaml
  13. Check the node

    kubectl get --all-namespaces -o wide nodes
    NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
    192.168.1.35   Ready       7m    v1.14.3   192.168.1.35           CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://19.3.1

Deploying kube-proxy

kube-proxy runs on every worker node. It watches the apiserver for changes to Services and Endpoints and creates the corresponding rules to load-balance service traffic.

  1. Create the kube-proxy certificate request

    cd /root/ssl
    cat kube-proxy-csr.json 
    {
    "CN": "system:kube-proxy",
    "key": {
     "algo": "rsa",
     "size": 2048
    },
    "names": [
     {
       "C": "CN",
       "ST": "ShangHai",
       "L": "ShangHai",
       "O": "k8s",
       "OU": "System"
     }
    ]
    }
  2. Generate the certificate and private key

    cfssl gencert -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
  3. Create the kubeconfig file

    #set the cluster parameters
    kubectl config set-cluster kubernetes \
     --certificate-authority=ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.10:8443 \
     --kubeconfig=kube-proxy.kubeconfig
    #set the client credentials
    kubectl config set-credentials kube-proxy \
     --client-certificate=kube-proxy.pem \
     --client-key=kube-proxy-key.pem \
     --embed-certs=true \
     --kubeconfig=kube-proxy.kubeconfig
    #set the context
    kubectl config set-context default \
     --cluster=kubernetes \
     --user=kube-proxy \
     --kubeconfig=kube-proxy.kubeconfig
    #use the context as default
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  4. Copy to the worker node

    scp kube-proxy*.pem kube-proxy.kubeconfig 192.168.1.35:/etc/kubernetes/ssl/
  5. Create the kube-proxy configuration file

    cd /root/ssl
    cat kube-proxy.config.yaml
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 192.168.1.35
    clientConnection:
      kubeconfig: /etc/kubernetes/ssl/kube-proxy.kubeconfig
    clusterCIDR: 172.30.0.0/16
    healthzBindAddress: 192.168.1.35:10256
    hostnameOverride: 192.168.1.35
    kind: KubeProxyConfiguration
    metricsBindAddress: 192.168.1.35:10249
    mode: "ipvs"
  6. Copy it to the worker node

    scp kube-proxy.config.yaml 192.168.1.35:/etc/kubernetes/ssl/
  7. Create the kube-proxy.service unit file (on the worker node)

    cat /etc/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    WorkingDirectory=/var/lib/kube-proxy
    ExecStart=/usr/local/bin/kube-proxy \
     --config=/etc/kubernetes/ssl/kube-proxy.config.yaml \
     --alsologtostderr=true \
     --logtostderr=false \
     --log-dir=/var/log/kubernetes/kube-proxy \
     --v=2
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
  8. Start the kube-proxy service (on the worker node)

    mkdir -p /var/lib/kube-proxy && mkdir -p /var/log/kubernetes/kube-proxy
    systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy && systemctl status kube-proxy
  9. Check

    netstat -lnpt|grep kube-proxy
    tcp        0      0 192.168.1.35:10249      0.0.0.0:*               LISTEN      1031/kube-proxy     
    tcp        0      0 192.168.1.35:10256      0.0.0.0:*               LISTEN      1031/kube-proxy  
    
    ipvsadm -ln
    #output
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
     -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr
     -> 192.168.1.31:6443            Masq    1      0          0         
     -> 192.168.1.32:6443            Masq    1      0          0         
     -> 192.168.1.33:6443            Masq    1      0          0         
    TCP  10.254.189.67:80 rr
     -> 172.30.87.3:80               Masq    1      0          0
  10. Test that the cluster works

    #create a pod
    kubectl run nginx --image=nginx
    #check the pod status
    kubectl get pod -o wide
    NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
    nginx-7db9fccd9b-glrx5   1/1     Running   0          27m   172.30.87.3   192.168.1.35              
    #check that the pod IP answers ping
    ping -c4 172.30.87.3
    PING 172.30.87.3 (172.30.87.3) 56(84) bytes of data.
    64 bytes from 172.30.87.3: icmp_seq=1 ttl=63 time=0.372 ms
    64 bytes from 172.30.87.3: icmp_seq=2 ttl=63 time=0.188 ms
    64 bytes from 172.30.87.3: icmp_seq=3 ttl=63 time=0.160 ms
    64 bytes from 172.30.87.3: icmp_seq=4 ttl=63 time=0.169 ms
    
    --- 172.30.87.3 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 2999ms
    rtt min/avg/max/mdev = 0.160/0.222/0.372/0.087 ms
    #expose it as a service
    kubectl expose deployment nginx --name=nginx --port=80 --target-port=80 --type=NodePort
    service/nginx exposed
    #check the service
    kubectl get svc -o wide
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE    SELECTOR
    kubernetes   ClusterIP   10.254.0.1              443/TCP        2d1h   
    nginx        NodePort    10.254.215.97           80:31401/TCP   31s    run=nginx
    #curl node_ip:nodeport
    curl -I 192.168.1.35:31401
    HTTP/1.1 200 OK
    Server: nginx/1.17.3
    Date: Wed, 28 Aug 2019 10:06:40 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
    Connection: keep-alive
    ETag: "5d5279b8-264"
    Accept-Ranges: bytes
    #access the cluster IPs from the worker host
    ip add | grep 10.254
       inet 10.254.0.1/32 brd 10.254.0.1 scope global kube-ipvs0
       inet 10.254.189.67/32 brd 10.254.189.67 scope global kube-ipvs0
       inet 10.254.215.97/32 brd 10.254.215.97 scope global kube-ipvs0
    
    curl -I http://10.254.189.67:80
    HTTP/1.1 200 OK
    Server: nginx/1.17.3
    Date: Wed, 28 Aug 2019 10:10:26 GMT
    Content-Type: text/html
    Content-Length: 612
    Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
    Connection: keep-alive
    ETag: "5d5279b8-264"
    Accept-Ranges: bytes

Deploying the CoreDNS add-on

Add-ons are optional components that extend and complete the cluster's functionality.

  #unpack kubernetes-server-linux-amd64.tar.gz, then unpack the kubernetes-src.tar.gz inside it to obtain the coredns manifests
  tar -zxvf kubernetes-server-linux-amd64.tar.gz
  cd kubernetes
  mkdir src
  tar -zxvf kubernetes-src.tar.gz -C src
  cd src/cluster/addons/dns/coredns
  cp coredns.yaml.base /etc/kubernetes/coredns.yaml
  sed -i "s/__PILLAR__DNS__DOMAIN__/cluster.local/g" /etc/kubernetes/coredns.yaml
  sed -i "s/__PILLAR__DNS__SERVER__/10.254.0.2/g" /etc/kubernetes/coredns.yaml

  #create coredns
  kubectl create -f /etc/kubernetes/coredns.yaml
  serviceaccount/coredns created
  clusterrole.rbac.authorization.k8s.io/system:coredns created
  clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
  configmap/coredns created
  deployment.apps/coredns created
  service/kube-dns created
  #check that coredns works
  kubectl -n kube-system get all -o wide
  NAME                           READY   STATUS             RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
  pod/coredns-5b969f4c88-7l7c9   0/1     ImagePullBackOff   0          4m18s   172.30.87.4   192.168.1.35              

  NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
  service/kube-dns   ClusterIP   10.254.0.2           53/UDP,53/TCP,9153/TCP   4m18s   k8s-app=kube-dns

  NAME                      READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                     SELECTOR
  deployment.apps/coredns   0/1     1            0           4m18s   coredns      k8s.gcr.io/coredns:1.3.1   k8s-app=kube-dns

  NAME                                 DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                     SELECTOR
  replicaset.apps/coredns-5b969f4c88   1         1         0       4m18s   coredns      k8s.gcr.io/coredns:1.3.1   k8s-app=kube-dns,pod-template-hash=5b969f4c88
  # ImagePullBackOff means the image could not be pulled; switch to the coredns image on Docker Hub
  sed -i "s/k8s.gcr.io/coredns/g" /etc/kubernetes/coredns.yaml
  kubectl apply -f /etc/kubernetes/coredns.yaml   
  Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
  serviceaccount/coredns configured
  Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
  clusterrole.rbac.authorization.k8s.io/system:coredns configured
  Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
  clusterrolebinding.rbac.authorization.k8s.io/system:coredns configured
  Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
  configmap/coredns configured
  Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
  deployment.apps/coredns configured
  Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
  service/kube-dns configured
  # check again
  kubectl -n kube-system get all -o wide
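
  Once the coredns pod is Running, in-cluster DNS resolution can be tested with a throwaway busybox pod (a sketch; the pod name dns-test is arbitrary and busybox:1.28 is chosen because its nslookup is known to work):
  kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
  # the expected output includes Server 10.254.0.2 and
  # Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local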

Deploying the Dashboard add-on

Unpack kubernetes-server-linux-amd64.tar.gz and then the kubernetes-src.tar.gz inside it. The dashboard manifests are in cluster/addons/dashboard; copy them out.

#manifest files
  cd kubernetes/src/cluster/addons/dashboard
  mkdir -p /etc/kubernetes/dashboard
  cp *.yaml /etc/kubernetes/dashboard/

  cd /etc/kubernetes/dashboard
  sed -i "s@image:.*@image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1@g" dashboard-controller.yaml
  sed -i "/spec/a\  type: NodePort" dashboard-service.yaml
  sed -i "/targetPort/a\    nodePort: 32700" dashboard-service.yaml

  #apply all manifests
  kubectl create -f /etc/kubernetes/dashboard
  configmap/kubernetes-dashboard-settings created
  serviceaccount/kubernetes-dashboard created
  deployment.apps/kubernetes-dashboard created
  role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
  rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
  secret/kubernetes-dashboard-certs created
  secret/kubernetes-dashboard-key-holder created
  service/kubernetes-dashboard created
  #check the assigned NodePort
  kubectl -n kube-system get all -o wide

  kubectl get pod -o wide -n kube-system      
  NAME                                    READY   STATUS    RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
  coredns-8854569d4-7w7gb                 1/1     Running   0          21m    172.30.87.5   192.168.1.35              
  kubernetes-dashboard-7d5f7c58f5-c5nxc   1/1     Running   0          3m6s   172.30.87.7   192.168.1.35              
  kubectl get svc -o wide -n kube-system 
  NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
  kube-dns               ClusterIP   10.254.0.2               53/UDP,53/TCP,9153/TCP   28m     k8s-app=kube-dns
  kubernetes-dashboard   NodePort    10.254.219.188           443:32700/TCP            3m13s   k8s-app=kubernetes-dashboard
  #the dashboard is now reachable at https://192.168.1.35:32700 but requires a login token; the help command shows the available options
  kubectl exec -n kube-system -it kubernetes-dashboard-7d5f7c58f5-c5nxc -- /dashboard --help

  #create a login token
  kubectl create sa dashboard-admin -n kube-system
  serviceaccount/dashboard-admin created

  kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
  clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

  ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
  DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
  echo ${DASHBOARD_LOGIN_TOKEN} # log in with the printed DASHBOARD_LOGIN_TOKEN
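
  Before logging in through the browser, the token can be sanity-checked against the apiserver; a valid cluster-admin token returns normal API output rather than Unauthorized (an optional check):
  curl -k -H "Authorization: Bearer ${DASHBOARD_LOGIN_TOKEN}" "https://192.168.1.10:8443/api/v1/namespaces/kube-system/pods?limit=1"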
