Kubernetes 1.9 binary cluster + IPVS + CoreDNS

This deployment replaces kube-proxy with the kube-router component, using LVS (IPVS) for Service load balancing, which is faster and more stable.

CoreDNS replaces kube-dns for better stability.

Tested on 1.9: the flood of kubelet/docker error logs seen in earlier versions is gone.

Node layout:

Node IP        Role      Hostname
192.168.0.57   node      bigdata3
192.168.0.56   node      bigdata4
192.168.0.58   node      bigdata5
192.168.0.48   master01  ingest01
192.168.0.49   master02  ingest02
192.168.0.50   master03  ingest03
192.168.0.38   etcd01    etcd01
192.168.0.39   etcd02    etcd02
192.168.0.40   etcd03    etcd03

Cluster network layout:

Network                Range
Cluster (pod) network  172.20.0.0/16
Service network        172.21.0.0/16
Physical network       192.168.0.0/24

Component configuration:

Item                    Value
OS                      CentOS 7
Kernel version          4.4
docker-data filesystem  ext4
Docker                  1.12.6
Storage Driver          overlay2
Backing Filesystem      extfs
Logging Driver          journald
Cgroup Driver           systemd

1. Upgrade the kernel and install Docker 1.12.6 on all nodes

1.1 Upgrade the kernel

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-lt-devel kernel-lt -y

# List the boot entries to find the default boot order
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg  

CentOS Linux (4.4.4-1.el7.elrepo.x86_64) 7 (Core)  
CentOS Linux (3.10.0-327.10.1.el7.x86_64) 7 (Core)  
CentOS Linux (0-rescue-c52097a1078c403da03b8eddeac5080b) 7 (Core)

# Boot entries are numbered from 0, and the newly installed kernel is inserted at the top (here the 4.4 kernel is entry 0, the old 3.10 kernel entry 1), so select 0.

grub2-set-default 0  

# Reboot
reboot

# Verify the kernel: now on 4.4
uname -a
Linux bigdata5 4.4.104-1.el7.elrepo.x86_64 #1 SMP Tue Dec 5 12:46:32 EST 2017 x86_64 x86_64 x86_64 GNU/Linux

1.2 Install Docker on all nodes and switch the storage driver to overlay2

# Install Docker
yum install docker-common-1.12.6 docker-client-1.12.6 docker-1.12.6-61 -y

# Set the storage driver to overlay2
cat >/etc/docker/daemon.json <<'HERE'
{
  "storage-driver": "overlay2"
}
HERE
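
To confirm the new driver is active, restart Docker and check (a quick verification step, not in the original article):

systemctl daemon-reload && systemctl restart docker
docker info | grep -E 'Storage Driver|Backing Filesystem'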

1.3 Install ipvsadm on all nodes

yum install ipvsadm -y
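
kube-router's service proxy relies on the IPVS kernel modules, so loading them explicitly avoids a silent fallback. A minimal sketch (the module list is the standard IPVS set, not from the original article):

# Load the IPVS modules now...
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do modprobe $m; done
# ...and on every boot (systemd-modules-load)
cat >/etc/modules-load.d/ipvs.conf <<'HERE'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
HERE
lsmod | grep ip_vs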

2. Prepare the k8s node, master, etcd and flanneld binaries

#### Note: all files are distributed from the master ingest01; set up SSH trust from it to every machine
#### Download directory: /root/
[root@ingest01 ~]# pwd
/root
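
If passwordless SSH is not yet configured, a sketch like the following sets it up from ingest01 (assumes the hostnames resolve; you are prompted for each node's password once):

ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
for host in bigdata3 bigdata4 bigdata5 ingest01 ingest02 ingest03 etcd01 etcd02 etcd03; do
    ssh-copy-id root@${host}
done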

wget https://dl.k8s.io/v1.9.0/kubernetes-server-linux-amd64.tar.gz

wget https://github.com/coreos/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz

wget https://github.com/coreos/flannel/releases/download/v0.9.0/flannel-v0.9.0-linux-amd64.tar.gz

3. Distribute the binaries

3.1 Unpack

tar xvf kubernetes-server-linux-amd64.tar.gz && tar xvf etcd-v3.2.11-linux-amd64.tar.gz && tar xvf flannel-v0.9.0-linux-amd64.tar.gz

3.2 Create directories for the node, master and etcd binaries and sort them into place

mkdir -p  /root/kubernetes/server/bin/{node,master,etcd}
mv /root/kubernetes/server/bin/kubelet /root/kubernetes/server/bin/node/
mv /root/mk-docker-opts.sh /root/kubernetes/server/bin/node/
mv /root/flanneld /root/kubernetes/server/bin/node/

mv /root/kubernetes/server/bin/kube-* /root/kubernetes/server/bin/master/
# kubelet was already moved into node/ above; the masters run it too, so copy it
cp /root/kubernetes/server/bin/node/kubelet /root/kubernetes/server/bin/master/
mv /root/kubernetes/server/bin/kubectl /root/kubernetes/server/bin/master/

mv /root/etcd-v3.2.11-linux-amd64/etcd* /root/kubernetes/server/bin/etcd/

3.3 Distribute the node and flanneld binaries

# the masters also run kubelet and flanneld (see section 7.2), so include them here
for node in bigdata3 bigdata4 bigdata5 ingest01 ingest02 ingest03;do
    rsync  -avzP   /root/kubernetes/server/bin/node/ ${node}:/usr/local/bin/
done

3.4 Distribute the master binaries

for master in ingest01 ingest02 ingest03;do
    rsync  -avzP   /root/kubernetes/server/bin/master/ ${master}:/usr/local/bin/
done

3.5 Distribute the etcd binaries

for etcd in etcd01 etcd02 etcd03;do
    rsync  -avzP   /root/kubernetes/server/bin/etcd/ ${etcd}:/usr/local/bin/
done
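
A quick check that every host got what it needs (a sanity-check sketch using each binary's standard version flag):

for node in bigdata3 bigdata4 bigdata5 ingest01 ingest02 ingest03; do
    ssh ${node} "kubelet --version"
done
for etcd in etcd01 etcd02 etcd03; do
    ssh ${etcd} "etcd --version"
done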

4. Create the systemd service files for the cluster

4.1 Create folders to sort the service files

mkdir -p  /root/kubernetes/server/bin/{node-service,master-service,etcd-service,docker-service,ssl}

4.2 Create the node service files

#docker.service
cat >/root/kubernetes/server/bin/node-service/docker.service  <<'HERE'
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
KillMode=process
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
EnvironmentFile=/run/flannel/docker
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current  $DOCKER_NETWORK_OPTIONS \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
MountFlags=slave

[Install]
WantedBy=multi-user.target
HERE


----------


#kubelet.service
cat >/root/kubernetes/server/bin/node-service/kubelet.service  <<'HERE'
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--address=192.168.0.48 \
--hostname-override=ingest01 \
--pod-infra-container-image=k8s-registry.local/public/pod-infrastructure:sfv1 \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/ssl/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/ssl/kubelet.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--hairpin-mode promiscuous-bridge \
--allow-privileged=true \
--serialize-image-pulls=false \
--logtostderr=true \
--cgroup-driver=systemd \
--cluster_dns=172.21.0.2 \
--cluster_domain=cluster.local \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

HERE
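
Two prerequisites before this unit can start as written: systemd fails with a CHDIR error if the WorkingDirectory is missing, and kubelet 1.9 refuses to start while swap is enabled. So, on every node that runs kubelet:

for node in bigdata3 bigdata4 bigdata5 ingest01 ingest02 ingest03; do
    ssh ${node} "mkdir -p /var/lib/kubelet && swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab"
done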


----------


#flanneld.service

cat >/root/kubernetes/server/bin/node-service/flanneld.service  <<'HERE'
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \
-etcd-cafile=/etc/kubernetes/ssl/k8s-root-ca.pem \
-etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
-etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
-etcd-endpoints=https://192.168.0.38:2379,https://192.168.0.39:2379,https://192.168.0.40:2379 \
-etcd-prefix=/kubernetes/network \
-iface=eth0
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
HERE

4.3 Create the master service files

#kube-apiserver.service
cat >/root/kubernetes/server/bin/master-service/kube-apiserver.service  <<'HERE'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
--advertise-address=192.168.0.48 \
--bind-address=192.168.0.48 \
--insecure-bind-address=127.0.0.1 \
--kubelet-https=true \
--runtime-config=rbac.authorization.k8s.io/v1beta1 \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/ssl/token.csv \
--service-cluster-ip-range=172.21.0.0/16 \
--service-node-port-range=300-9000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--etcd-cafile=/etc/kubernetes/ssl/k8s-root-ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://192.168.0.38:2379,https://192.168.0.39:2379,https://192.168.0.40:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--event-ttl=1h \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

HERE


----------


#kube-controller-manager.service
cat >/root/kubernetes/server/bin/master-service/kube-controller-manager.service  <<'HERE'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--allocate-node-cidrs=true \
--service-cluster-ip-range=172.21.0.0/16 \
--cluster-cidr=172.20.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
HERE


----------


#kube-scheduler.service

cat >/root/kubernetes/server/bin/master-service/kube-scheduler.service  <<'HERE'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
HERE

4.4 Create the etcd service file
Adjust the name and URLs in this file for each etcd node.

cat >/root/kubernetes/server/bin/etcd-service/etcd.service  <<'HERE'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name=etcd01 \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--initial-advertise-peer-urls=https://192.168.0.38:2380 \
--listen-peer-urls=https://192.168.0.38:2380 \
--listen-client-urls=https://192.168.0.38:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://192.168.0.38:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=etcd01=https://192.168.0.38:2380,etcd02=https://192.168.0.39:2380,etcd03=https://192.168.0.40:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
HERE
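
etcd's WorkingDirectory and data dir (/var/lib/etcd) must exist before the first start, or systemd fails with a CHDIR error:

for etcd in etcd01 etcd02 etcd03; do
    ssh ${etcd} "mkdir -p /var/lib/etcd"
done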

5. Distribute the service files

5.1 Distribute the node service files

# Note: the hostname and IP inside the service files differ per node; adjust them for each target (see the sketch after this loop)
for node in {bigdata3,bigdata4,bigdata5,ingest01,ingest02,ingest03};do
    rsync  -avzP   /root/kubernetes/server/bin/node-service/ ${node}:/lib/systemd/system/
done
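
Editing each copy by hand is error-prone. A sketch that renders a per-node kubelet.service locally and pushes it (the IP table mirrors the node layout above; the sed patterns assume the flag lines look exactly as written in 4.2):

declare -A ip=( [bigdata3]=192.168.0.57 [bigdata4]=192.168.0.56 [bigdata5]=192.168.0.58 \
                [ingest01]=192.168.0.48 [ingest02]=192.168.0.49 [ingest03]=192.168.0.50 )
src=/root/kubernetes/server/bin/node-service/kubelet.service
for node in "${!ip[@]}"; do
    # rewrite the two per-node flags, keeping the trailing line-continuation backslash
    sed -e "s/^--address=.*/--address=${ip[$node]} \\\\/" \
        -e "s/^--hostname-override=.*/--hostname-override=${node} \\\\/" ${src} \
        > /tmp/kubelet.service
    rsync -avzP /tmp/kubelet.service ${node}:/lib/systemd/system/kubelet.service
done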

5.2 Distribute the master service files

# Note: again, adjust the hostname and IP in the service files per node
for master in {ingest01,ingest02,ingest03};do
    rsync  -avzP   /root/kubernetes/server/bin/master-service/ ${master}:/lib/systemd/system/
done

5.3 Distribute the etcd service files

# Note: again, adjust the name and URLs per node
for etcd in {etcd01,etcd02,etcd03};do
    rsync  -avzP   /root/kubernetes/server/bin/etcd-service/ ${etcd}:/lib/systemd/system/
done

6. Create and distribute the cluster certificates

6.1 Generate the files

# Install CFSSL

# Install directly from the binary releases

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

export PATH=/usr/local/bin:$PATH


----------


#admin-csr.json
cat >/root/kubernetes/server/bin/ssl/admin-csr.json  <<'HERE'
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
HERE


----------
#k8s-gencert.json
cat >/root/kubernetes/server/bin/ssl/k8s-gencert.json  <<'HERE'
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
HERE


----------
#k8s-root-ca-csr.json
cat >/root/kubernetes/server/bin/ssl/k8s-root-ca-csr.json  <<'HERE'
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
HERE


----------

#kube-proxy-csr.json
cat >/root/kubernetes/server/bin/ssl/kube-proxy-csr.json  <<'HERE'
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
HERE


----------
#kubernetes-csr.json
# Note: the hosts list must include the first Service IP (172.21.0.1) plus the IPs of the etcd, master and node machines
cat >/root/kubernetes/server/bin/ssl/kubernetes-csr.json  <<'HERE'
{
    "CN": "kubernetes",
    "hosts": [
    "127.0.0.1",
    "192.168.0.56",
    "192.168.0.57",
    "192.168.0.58",
    "192.168.0.38",
    "192.168.0.39",
    "192.168.0.40",
    "192.168.0.48",
    "192.168.0.49",
    "192.168.0.50",
    "172.21.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
     ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
HERE


----------

6.2 Generate the certificates and kubeconfig files

# Enter the ssl directory
cd /root/kubernetes/server/bin/ssl/
# Generate the certificates
cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca

for targetName in kubernetes admin kube-proxy; do
    cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
done

# Generate the kubeconfig files
# Note: this defines the apiserver endpoint. HA mode is assumed here; if you run a single master, point this at that master's IP and port 6443 instead.
# For the three-master HA setup, see my separate HA walkthrough:
# http://blog.csdn.net/idea77/article/details/71508859

export KUBE_APISERVER="https://127.0.0.1:6443"
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "Tokne: ${BOOTSTRAP_TOKEN}"

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF



----------

echo "Create kubelet bootstrapping kubeconfig..."
kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
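
The bootstrap kubeconfig also needs its current context set, otherwise kubelet fails at startup with "invalid configuration: no configuration has been provided" (the kube-proxy kubeconfig below already includes this step):

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig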


----------


echo "Create kube-proxy kubeconfig..."
kubectl config set-cluster kubernetes \
  --certificate-authority=k8s-root-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig



----------


kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig



----------


kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig


----------


# Generate the advanced audit policy
cat > audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF
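
The apiserver unit in 4.3 only sets the audit log flags; to actually apply this policy the file would also have to be distributed and referenced, for example (an addition not in the original unit, using the flag available in 1.9):

# copy audit-policy.yaml to /etc/kubernetes/ssl/ on each master, then add to kube-apiserver.service:
#   --audit-policy-file=/etc/kubernetes/ssl/audit-policy.yaml \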


----------


# Generate the cluster-admin kubeconfig for kubectl
# admin set-cluster
 kubectl config set-cluster kubernetes \
    --certificate-authority=k8s-root-ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=./kubeconfig

# admin set-credentials
 kubectl config set-credentials kubernetes-admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=./kubeconfig

# admin set-context
 kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=./kubeconfig

# admin set default context
 kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=./kubeconfig


6.3 Distribute the certificates to all nodes

# Create the ssl directory on every node
for node in {bigdata3,bigdata4,bigdata5,ingest01,ingest02,ingest03,etcd01,etcd02,etcd03};do
    ssh ${node} "mkdir -p /etc/kubernetes/ssl/ "
done


----------

# Distribute the files
for ssl in {bigdata3,bigdata4,bigdata5,ingest01,ingest02,ingest03,etcd01,etcd02,etcd03};do
    rsync  -avzP   /root/kubernetes/server/bin/ssl/  ${ssl}:/etc/kubernetes/ssl/
done

----------

# Create /root/.kube on each master and copy in the admin kubeconfig
for master in {ingest01,ingest02,ingest03};do
    ssh ${master} "mkdir -p /root/.kube ; \cp -f /etc/kubernetes/ssl/kubeconfig  /root/.kube/config "
done


----------



7. Start and verify the services on all nodes

Before starting anything, double-check the per-node edits to the configuration files.

7.1 Start the etcd nodes

# Start the etcd cluster

for node in {etcd01,etcd02,etcd03};do
    ssh ${node} "systemctl daemon-reload && systemctl start etcd && systemctl enable etcd"
done


----------


# Check cluster health (run on an etcd node; etcdctl defaults to the local endpoint)
 etcdctl \
  --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health


----------
# Register the pod network range used by flannel

  etcdctl --endpoints=https://192.168.0.38:2379,https://192.168.0.39:2379,https://192.168.0.40:2379 \
  --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kubernetes/network


----------


etcdctl --endpoints=https://192.168.0.38:2379,https://192.168.0.39:2379,https://192.168.0.40:2379 \
  --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kubernetes/network/config '{ "Network": "172.20.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}'
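
Read the key back to confirm it was written (same TLS flags, etcd v2 API):

etcdctl --endpoints=https://192.168.0.38:2379,https://192.168.0.39:2379,https://192.168.0.40:2379 \
  --ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kubernetes/network/config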

----------

7.2 Start the master nodes

# For the three-master HA setup, see my separate HA walkthrough:
# http://blog.csdn.net/idea77/article/details/71508859

for master in {ingest01,ingest02,ingest03};do
    ssh ${master} "systemctl daemon-reload && systemctl start flanneld docker kube-apiserver kube-controller-manager kube-scheduler kubelet && systemctl enable flanneld docker kube-apiserver kube-controller-manager kube-scheduler kubelet "
done

7.3 Start the worker nodes

for node in {bigdata3,bigdata4,bigdata5};do
    ssh ${node} "systemctl daemon-reload && systemctl start flanneld docker kubelet && systemctl enable flanneld docker kubelet "
done
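
If a node does not come up cleanly, confirm that flannel wrote the Docker options file and created its VXLAN interface before chasing kubelet errors:

cat /run/flannel/docker
ip addr show flannel.1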

7.4 Verify the cluster

# Run on a master: grant the kubelet-bootstrap user the node-bootstrapper role
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

# Approve all pending node CSRs
kubectl get csr

kubectl get csr | awk '/Pending/ {print $1}' | xargs kubectl certificate approve

# Check that the nodes are Ready
kubectl  get nodes 
NAME       STATUS    ROLES     AGE       VERSION
bigdata3   Ready     <none>    4d        v1.9.0
bigdata4   Ready     <none>    4d        v1.9.0
bigdata5   Ready     <none>    4d        v1.9.0
ingest01   Ready     <none>    4d        v1.9.0
ingest02   Ready     <none>    4d        v1.9.0
ingest03   Ready     <none>    4d        v1.9.0

8. Deploy kube-router (IPVS) in place of kube-proxy, the dashboard, and CoreDNS in place of kube-dns

8.1 Deploy the kube-router component

# Image: docker.io/cloudnativelabs/kube-router:latest
# Save the manifest below as kube-router.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-router-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-router
data:
  cni-conf.json: |
    {
      "name":"kubernetes",
      "type":"bridge",
      "bridge":"kube-bridge",
      "isDefaultGateway":true,
      "ipam": {
        "type":"host-local"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-router
    tier: node
  name: kube-router
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-router
        tier: node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kube-router
      serviceAccount: kube-router
      containers:
      - name: kube-router
        image: k8s-registry.local/public/kube-router:latest
        imagePullPolicy: Always
        args:
        - --run-router=true
        - --run-firewall=true
        - --run-service-proxy=true
        - --kubeconfig=/var/lib/kube-router/kubeconfig
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        resources:
          requests:
            cpu: 250m
            memory: 250Mi
        securityContext:
          privileged: true
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kubeconfig
          mountPath: /var/lib/kube-router/kubeconfig
        - name: run
          mountPath: /var/run/docker.sock
          readOnly: true
      initContainers:
      - name: install-cni
        image: k8s-registry.local/public/busybox:latest
        imagePullPolicy: Always
        command:
        - /bin/sh
        - -c
        - set -e -x;
          if [ ! -f /etc/cni/net.d/10-kuberouter.conf ]; then
            TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
            cp /etc/kube-router/cni-conf.json ${TMP};
            mv ${TMP} /etc/cni/net.d/10-kuberouter.conf;
          fi
        volumeMounts:
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kube-router-cfg
          mountPath: /etc/kube-router
      hostNetwork: true
      hostIPC: true
      hostPID: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: cni-conf-dir
        hostPath:
          path: /etc/cni/net.d
      - name: run
        hostPath:
          path: /var/run/docker.sock
      - name: kube-router-cfg
        configMap:
          name: kube-router-cfg
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/ssl/kubeconfig
       # configMap:
        #  name: kube-proxy
         # items:
         # - key: kubeconfig.conf
         #   path: kubeconfig
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-router
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-router
  namespace: kube-system
rules:
  - apiGroups:
    - ""
    resources:
      - namespaces
      - pods
      - services
      - nodes
      - endpoints
    verbs:
      - list
      - get
      - watch
  - apiGroups:
    - "networking.k8s.io"
    resources:
      - networkpolicies
    verbs:
      - list
      - get
      - watch
  - apiGroups:
    - extensions
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-router
subjects:
- kind: ServiceAccount
  name: kube-router
  namespace: kube-system



kubectl create -f kube-router.yaml
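
Once the DaemonSet is running, every Service VIP should show up as an IPVS virtual server on each node:

kubectl -n kube-system get pods -l k8s-app=kube-router -o wide
ipvsadm -Ln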

8.2 Deploy kube-dashboard

# Image: registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64:head
# Save the manifest below as dashboard.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: k8s-registry.local/public/kubernetes-dashboard-amd64:1.8.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"


----------


kubectl create -f dashboard.yaml


----------
#dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 8601


kubectl create -f dashboard-svc.yaml
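
The dashboard should now answer on any node at the NodePort (8601 falls inside the 300-9000 range configured on the apiserver):

kubectl -n kube-system get svc kubernetes-dashboard
curl -I http://192.168.0.57:8601/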

8.3 Deploy CoreDNS

# Image: registry.docker-cn.com/coredns/coredns:0.9.10
# Save the manifest below as coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        log stdout
        health
        kubernetes cluster.local 172.21.0.0/16
        prometheus
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: k8s-registry.local/public/coredns:0.9.10
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 172.21.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP


----------
kubectl create -f coredns.yaml
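
A quick in-cluster smoke test (busybox 1.28 is a deliberate choice, since later busybox builds ship a broken nslookup; substitute your own registry's copy of the image):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default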

 
