kubeadm is the official tool for quickly installing a Kubernetes cluster. It is released in lockstep with each Kubernetes version, and with every release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest upstream best practices for cluster configuration.
In the recently released Kubernetes 1.15, kubeadm's support for HA cluster configuration has reached beta, which means kubeadm is getting ever closer to being usable in production.
1. Preparation
1.1 System Configuration
Before installing, make the following preparations. The two CentOS 7.6 hosts are:
cat /etc/hosts
192.168.99.11 node1
192.168.99.12 node2
If a firewall is enabled on the hosts, you need to open the ports required by the Kubernetes components; see the "Check required ports" section of Installing kubeadm. For simplicity, the firewall is disabled on each node here:
systemctl stop firewalld
systemctl disable firewalld
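If you prefer to keep firewalld running instead, a minimal sketch of opening the required ports with firewall-cmd is shown below; the port list follows the "Check required ports" table in the Kubernetes documentation and should be re-checked against your version:

# on the control-plane node (node1)
firewall-cmd --permanent --add-port=6443/tcp         # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp    # etcd server client API
firewall-cmd --permanent --add-port=10250-10252/tcp  # kubelet, kube-scheduler, kube-controller-manager
# on the worker node (node2)
firewall-cmd --permanent --add-port=10250/tcp        # kubelet API
firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort Services
# apply the permanent rules
firewall-cmd --reload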
Disable SELinux:
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following commands to apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
1.2 Prerequisites for Enabling IPVS in kube-proxy
Since IPVS has been merged into the kernel mainline, enabling IPVS mode for kube-proxy requires the following kernel modules to be loaded:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run the following script on all Kubernetes nodes (node1 and node2):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the modules have been loaded correctly.
You also need to make sure the ipset package is installed on each node (yum install ipset). To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm (yum install ipvsadm).
If these prerequisites are not met, kube-proxy will fall back to iptables mode even if IPVS mode is enabled in its configuration.
1.3 Installing Docker
Kubernetes has used the CRI (Container Runtime Interface) since 1.6. The default container runtime is still Docker, accessed through the dockershim CRI implementation built into the kubelet.
Install the Docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
List the available Docker versions:
yum list docker-ce.x86_64 --showduplicates | sort -r
docker-ce.x86_64  3:18.09.7-3.el7           docker-ce-stable
docker-ce.x86_64  3:18.09.6-3.el7           docker-ce-stable
docker-ce.x86_64  3:18.09.5-3.el7           docker-ce-stable
docker-ce.x86_64  3:18.09.4-3.el7           docker-ce-stable
docker-ce.x86_64  3:18.09.3-3.el7           docker-ce-stable
docker-ce.x86_64  3:18.09.2-3.el7           docker-ce-stable
docker-ce.x86_64  3:18.09.1-3.el7           docker-ce-stable
docker-ce.x86_64  3:18.09.0-3.el7           docker-ce-stable
docker-ce.x86_64  18.06.3.ce-3.el7          docker-ce-stable
docker-ce.x86_64  18.06.2.ce-3.el7          docker-ce-stable
docker-ce.x86_64  18.06.1.ce-3.el7          docker-ce-stable
docker-ce.x86_64  18.06.0.ce-3.el7          docker-ce-stable
docker-ce.x86_64  18.03.1.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  18.03.0.ce-1.el7.centos   docker-ce-stable
The Docker versions currently supported by Kubernetes 1.15 are 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09. Here Docker 18.09.7 is installed on each node:
yum makecache fast
yum install -y --setopt=obsoletes=0 \
  docker-ce-18.09.7-3.el7
systemctl start docker
systemctl enable docker
Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
1.4 Changing the Docker cgroup driver to systemd
According to the CRI installation document, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes nodes more stable under resource pressure, so here the Docker cgroup driver on each node is changed to systemd.
Create or edit /etc/docker/daemon.json:
{ "exec-opts": ["native.cgroupdriver=systemd"] }
Restart Docker and confirm the change:
systemctl restart docker

docker info | grep Cgroup
Cgroup Driver: systemd
2. Deploying Kubernetes with kubeadm
2.1 Installing kubeadm and kubelet
Install kubeadm and kubelet on each node:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy or some other way around the network restrictions.
curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
yum makecache fast
yum install -y kubelet kubeadm kubectl

...
Installed:
  kubeadm.x86_64 0:1.15.0-0    kubectl.x86_64 0:1.15.0-0    kubelet.x86_64 0:1.15.0-0

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-4.el7           cri-tools.x86_64 0:1.12.0-0
  kubernetes-cni.x86_64 0:0.7.5-0                libnetfilter_cthelper.x86_64 0:1.0.0-9.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7    libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
The installation output shows that the dependencies cri-tools, kubernetes-cni and socat were also installed:
- Starting with Kubernetes 1.14, the CNI dependency was bumped to version 0.7.5
- socat is a dependency of kubelet
- cri-tools is the command-line tool for the CRI (Container Runtime Interface)
Running kubelet --help shows that most of the kubelet's command-line flags have been DEPRECATED, for example:
......
--address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......
The officially recommended approach is instead to point the kubelet at a configuration file with --config and put these settings there; see Set Kubelet parameters via a config file. Kubernetes did this to support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.
The kubelet configuration file must be in JSON or YAML format; see the documents above for details.
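As an illustration only (these values are assumptions, not taken from this article), a minimal kubelet configuration file in the format kubeadm writes to /var/lib/kubelet/config.yaml looks roughly like this:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# the cgroup driver should match the Docker setting made earlier
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
# file-based equivalent of the --fail-swap-on=false flag used later in this article
failSwapOn: false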
Starting with Kubernetes 1.8, swap must be disabled on the system; with the default configuration the kubelet will refuse to start otherwise. Swap can be disabled as follows:
swapoff -a
Then edit /etc/fstab to comment out the automatic swap mount, and confirm with free -m that swap is off. Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
Because the two hosts used for this test also run other services, disabling swap could affect them, so here the kubelet restriction is relaxed instead. Use the kubelet startup flag --fail-swap-on=false to drop the requirement that swap be off, by editing /etc/sysconfig/kubelet and adding:
KUBELET_EXTRA_ARGS=--fail-swap-on=false
2.2 Initializing the Cluster with kubeadm init
Enable the kubelet service at boot on each node:
systemctl enable kubelet.service
Running kubeadm config print init-defaults prints the default configuration used for cluster initialization:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
The default configuration shows that imageRepository can be used to customize the registry from which kubeadm pulls the images needed during cluster initialization. Based on the defaults, the configuration file kubeadm.yaml used to initialize this cluster is:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.99.11
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  podSubnet: 10.244.0.0/16
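The kubeadm.yaml above keeps the default imageRepository (k8s.gcr.io). If that registry is unreachable from your nodes, one common workaround, shown here only as a sketch (the mirror address is an assumption and not part of the original setup), is to point imageRepository at a mirror in the ClusterConfiguration:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
# assumption: a mirror that carries the k8s.gcr.io images; verify it has the tags you need
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16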
A cluster initialized with the default kubeadm configuration taints the master node with node-role.kubernetes.io/master:NoSchedule, which prevents the master from running workloads. Since this test environment has only two nodes, the taint is changed here to node-role.kubernetes.io/master:PreferNoSchedule.
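For reference (not a required step in this walkthrough), the taint can also be inspected or changed on an already-initialized cluster with kubectl:

# show the current taints on node1
kubectl describe node node1 | grep -A2 Taints
# remove the NoSchedule master taint entirely; the trailing '-' deletes the taint
kubectl taint nodes node1 node-role.kubernetes.io/master:NoSchedule-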
Before initializing the cluster, you can run kubeadm config images pull on each node to pre-pull the Docker images Kubernetes needs, as in the example below.
Next, initialize the cluster with kubeadm. node1 is chosen as the master node; run the following command on node1:
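For example, assuming the kubeadm.yaml above is in the current directory:

# pre-pull the control-plane images for the version specified in kubeadm.yaml
kubeadm config images pull --config kubeadm.yaml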
kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [192.168.99.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.99.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.99.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.004907 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: 4qcl2f.gtl3h8e5kjltuo0r
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
The output above records the complete initialization process; from it you can see the key steps involved in manually setting up a Kubernetes cluster. The key items are:
- [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
- [certs] generates the various certificates
- [kubeconfig] generates the kubeconfig files
- [control-plane] creates static pods for the apiserver, controller-manager and scheduler from the YAML files in /etc/kubernetes/manifests
- [bootstrap-token] generates the bootstrap token; record it, as it is needed later when adding nodes to the cluster with kubeadm join
- The following commands configure kubectl access for a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Finally, it prints the command for joining nodes to the cluster: kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
Check the cluster status and confirm that all components are healthy:
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
If cluster initialization runs into problems, you can clean up with the following commands and start over:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
2.3 Installing the Pod Network
Next, install the flannel network add-on:
mkdir -p ~/k8s/
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Note that the flannel image referenced in kube-flannel.yml is version 0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.
If a node has multiple network interfaces, then per flannel issue 39701 you currently need to use the --iface flag in kube-flannel.yml to specify the name of the host's internal NIC, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
......
Use kubectl get pod --all-namespaces -o wide to make sure all pods are in the Running state:
kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-dr8lf        1/1     Running   0          52m
coredns-5c98db65d4-lp8dg        1/1     Running   0          52m
etcd-node1                      1/1     Running   0          51m
kube-apiserver-node1            1/1     Running   0          51m
kube-controller-manager-node1   1/1     Running   0          51m
kube-flannel-ds-amd64-mm296     1/1     Running   0          44s
kube-proxy-kchkf                1/1     Running   0          52m
kube-scheduler-node1            1/1     Running   0          51m
2.4 Testing Cluster DNS
kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$
Inside the container, run nslookup kubernetes.default and confirm that it resolves correctly:
nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
2.5 Adding Worker Nodes to the Cluster
Now add the node2 host to the Kubernetes cluster by running the following on node2:
kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e \
    --ignore-preflight-errors=Swap

[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
node2 joined the cluster without trouble. Run the following on the master node to list the nodes in the cluster:
kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   57m   v1.15.0
node2   Ready    <none>   11s   v1.15.0
2.5.1 Removing a Node from the Cluster
To remove node2 from the cluster, run the following commands.
On the master node:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2
On node2:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
On node1:
kubectl delete node node2
2.6 Enabling IPVS in kube-proxy
Edit the config.conf key in the kube-proxy ConfigMap in the kube-system namespace and set mode: "ipvs":
kubectl edit cm kube-proxy -n kube-system
Then restart the kube-proxy pods on each node:
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-7fsrg   1/1     Running   0          3s
kube-proxy-k8vhm   1/1     Running   0          9s

kubectl logs kube-proxy-7fsrg -n kube-system
I0703 04:42:33.308289       1 server_others.go:170] Using ipvs Proxier.
W0703 04:42:33.309074       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0703 04:42:33.309831       1 server.go:534] Version: v1.15.0
I0703 04:42:33.320088       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0703 04:42:33.320365       1 config.go:96] Starting endpoints config controller
I0703 04:42:33.320393       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0703 04:42:33.320455       1 config.go:187] Starting service config controller
I0703 04:42:33.320470       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0703 04:42:33.420899       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0703 04:42:33.420969       1 controller_utils.go:1036] Caches are synced for service config controller
The log line "Using ipvs Proxier" confirms that IPVS mode is now in use.
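The IPVS rules themselves can be inspected with the ipvsadm tool installed earlier; the commands below are just a quick illustration of what to look at:

# list all IPVS virtual services and their backends
ipvsadm -Ln
# the kubernetes Service VIP (10.96.0.1:443) should map to the apiserver at 192.168.99.11:6443
ipvsadm -Ln -t 10.96.0.1:443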
3. Deploying Common Kubernetes Components
More and more companies and teams are adopting Helm, the Kubernetes package manager, so Helm is used here to install the common Kubernetes components as well.
3.1 Installing Helm
Helm consists of the helm command-line client and the server-side tiller component, and installing it is straightforward. Download the helm CLI to /usr/local/bin on the master node node1; version 2.14.1 is used here:
curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
To install the server-side tiller, the machine also needs kubectl and a kubeconfig file configured so that kubectl can reach the apiserver and work normally. node1 already has kubectl configured.
Because the Kubernetes apiserver has RBAC enabled, a service account named tiller must be created for tiller and bound to a suitable role; see Role-based Access Control in the Helm documentation for details. For simplicity, the built-in cluster-admin ClusterRole is bound to it directly here. Create helm-rbac.yaml:
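An optional sanity check before installing tiller:

# confirm kubectl can reach the apiserver from this machine
kubectl cluster-info
kubectl get nodes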
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Note that this step needs network access to gcr.io and kubernetes-charts.storage.googleapis.com. If they are unreachable, you can use a tiller image from a private registry instead: helm init --service-account tiller --tiller-image <your-docker-registry>/tiller:v2.14.1 --skip-refresh.
Finally, on node1, switch the helm chart repository to the mirror provided by Azure:
helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories

helm repo list
NAME    URL
stable  http://mirror.azure.cn/kubernetes/charts
local   http://127.0.0.1:8879/charts
3.2 Deploying Nginx Ingress with Helm
To expose services in the cluster to the outside, an Ingress is needed, so Nginx Ingress is deployed on Kubernetes with Helm. The Nginx Ingress Controller is deployed on the cluster's edge nodes; for background on highly available edge nodes, see the earlier post on HA Kubernetes Ingress edge nodes in a bare-metal environment. The Ingress Controller uses hostNetwork.
node1 (192.168.99.11) is used as the edge node and labeled accordingly:
kubectl label node node1 node-role.kubernetes.io/edge=
node/node1 labeled

kubectl get node
NAME    STATUS   ROLES         AGE    VERSION
node1   Ready    edge,master   138m   v1.15.0
node2   Ready    <none>        82m    v1.15.0
The values file ingress-nginx.yaml for the stable/nginx-ingress chart is:
controller:
  replicaCount: 1
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
The nginx ingress controller replicaCount is 1, and the controller will be scheduled onto the node1 edge node. No externalIPs are specified for the nginx ingress controller service; instead, hostNetwork: true makes the controller use the host network.
helm repo update

helm install stable/nginx-ingress \
  -n nginx-ingress \
  --namespace ingress-nginx \
  -f ingress-nginx.yaml
kubectl get pod -n ingress-nginx -o wide
NAME                                            READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-cc9b6d55b-pr8vr        1/1     Running   0          10m   192.168.99.11   node1   <none>           <none>
nginx-ingress-default-backend-cc888fd56-bf4h2   1/1     Running   0          10m   10.244.0.14     node1   <none>           <none>
If visiting http://192.168.99.11 returns the default backend response, the deployment is complete.
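A quick check from any machine that can reach 192.168.99.11 (the exact response body depends on the default backend image, so treat this as a sketch):

curl http://192.168.99.11
# expected response from the nginx-ingress default backend:
# default backend - 404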
3.3 Deploying the Dashboard with Helm
kubernetes-dashboard.yaml:
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
  - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
  - secretName: frognew-com-tls-secret
    hosts:
    - k8s.frognew.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: PreferNoSchedule
rbac:
  clusterAdminRole: true
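The values file above references a TLS secret named frognew-com-tls-secret and assumes it already exists. If you need to create it, a sketch with hypothetical certificate file names would be:

# create the TLS secret referenced by the ingress section above;
# tls.crt and tls.key are placeholders for your own certificate and key files
kubectl create secret tls frognew-com-tls-secret \
  --cert=tls.crt --key=tls.key \
  -n kube-system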
helm install stable/kubernetes-dashboard \
  -n kubernetes-dashboard \
  --namespace kube-system \
  -f kubernetes-dashboard.yaml
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-pkm2s   kubernetes.io/service-account-token   3      3m7s

kubectl describe -n kube-system secret/kubernetes-dashboard-token-pkm2s
Name:         kubernetes-dashboard-token-pkm2s
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 2f0781dd-156a-11e9-b0f0-080027bb7c43

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1wa20ycyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmMDc4MWRkLTE1NmEtMTFlOS1iMGYwLTA4MDAyN2JiN2M0MyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.24ad6ZgZMxdydpwlmYAiMxZ9VSIN7dDR7Q6-RLW0qC81ajXoQKHAyrEGpIonfld3gqbE0xO8nisskpmlkQra72-9X6sBPoByqIKyTsO83BQlME2sfOJemWD0HqzwSCjvSQa0x-bUlq9HgH2vEXzpFuSS6Svi7RbfzLXlEuggNoC4MfA4E2hF1OX_ml8iAKx-49y1BQQe5FGWyCyBSi1TD_-ZpVs44H5gIvsGK2kcvi0JT4oHXtWjjQBKLIWL7xxyRCSE4HmUZT2StIHnOwlX7IEIB0oBX4mPg2_xNGnqwcu-8OERU9IoqAAE2cZa0v3b5O2LMcJPrcxrVOukvRIumA
Use the token above to log in at the dashboard login screen.
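If you prefer not to look up the secret name by hand, a convenience one-liner (an assumption of this write-up, not from the original article) prints the token directly:

# find the dashboard token secret and print only the token value
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}') \
  | awk '/^token:/{print $2}'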
3.4 Deploying metrics-server with Helm
The Heapster GitHub repository (https://github.com/kubernetes/heapster) shows that Heapster has been DEPRECATED; see its deprecation timeline there. Heapster was removed from the various Kubernetes installation scripts starting with Kubernetes 1.12.
Kubernetes now recommends metrics-server, so metrics-server is also deployed here with Helm.
metrics-server.yaml:
args:
- --logtostderr
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: PreferNoSchedule
helm install stable/metrics-server \
  -n metrics-server \
  --namespace kube-system \
  -f metrics-server.yaml
The following commands then return basic metrics for the cluster nodes and pods:
kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   650m         32%    1276Mi          73%
node2   73m          3%     527Mi           30%
kubectl top pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
coredns-5c98db65d4-dr8lf                8m           7Mi
coredns-5c98db65d4-lp8dg                6m           8Mi
etcd-node1                              44m          46Mi
kube-apiserver-node1                    74m          295Mi
kube-controller-manager-node1           35m          50Mi
kube-flannel-ds-amd64-7lwm9             2m           8Mi
kube-flannel-ds-amd64-mm296             5m           9Mi
kube-proxy-7fsrg                        1m           11Mi
kube-proxy-k8vhm                        3m           11Mi
kube-scheduler-node1                    8m           15Mi
kubernetes-dashboard-848b8dd798-c4sc2   2m           14Mi
metrics-server-8456fb6676-fwh2t         10m          19Mi
tiller-deploy-7bf78cdbf7-9q94c          1m           16Mi
Unfortunately, the Kubernetes Dashboard does not yet support metrics-server, so after replacing Heapster with metrics-server the dashboard can no longer graph pod CPU and memory usage. (In practice this matters little, since per-pod CPU and memory are monitored with Prometheus and Grafana anyway.) There has been plenty of discussion about this on the Dashboard GitHub project, e.g. https://github.com/kubernetes/dashboard/issues/2986, and Dashboard plans to support metrics-server at some point. Since metrics-server and the metrics pipeline are clearly the future direction for Kubernetes monitoring, metrics-server remains the recommended choice.
4. Summary
Docker images used in this installation:
# network and dns
quay.io/coreos/flannel:v0.11.0-amd64
k8s.gcr.io/coredns:1.3.1

# helm and tiller
gcr.io/kubernetes-helm/tiller:v2.14.1

# nginx ingress
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
k8s.gcr.io/defaultbackend:1.5

# dashboard and metrics-server
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
gcr.io/google_containers/metrics-server-amd64:v0.3.2
References
- Installing kubeadm
- Using kubeadm to Create a Cluster
- Get Docker CE for CentOS
- kubernetes: k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
Author: 青蛙小白
Original post: https://blog.frognew.com/2019/07/kubeadm-install-kubernetes-1.15.html