Kubernetes Calico IPv6 support

1. Overview

In practice there is little difference between IPv6 and IPv4 configuration, so this article only covers the configuration changes. Since the company's cloud environment does not yet support IPv6, the work here was done on virtual machines.

  • Host plan
Name     IPv4            IPv6
master   192.168.6.110   fd00::20c:29ff:fe9f:52be
node2    192.168.6.103   fd00::39df:8f1b:e228:d42
node3    192.168.6.113   fd00::20c:29ff:fead:d381
  • Network plan
Parameter                  Value
service-cluster-ip-range   fd03::/120
service-node-port-range    30000-32767
cluster-cidr               fd05::/120
cluster-dns                fd05::2
node-cidr-mask-size        121

2. Environment preparation

Enable IPv6 support in a recent version of VMware and set the IPv6 network address range for the virtual network.

3. Component configuration

1. VM configuration (on all three hosts)

#add the following settings
[root@master ~]# vim /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding=1
#apply the settings
[root@master ~]# sysctl -p
[root@master ~]# vim /etc/sysconfig/network
#add
NETWORKING_IPV6=yes
[root@master ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
#add
IPV6INIT=yes
IPV6_AUTOCONF=yes
[root@master ~]# reboot
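A quick sanity check after the reboot, before moving on (a minimal sketch; the fd00:: address is node2's from the host plan above):

# IPv6 should no longer be disabled and forwarding should be on
sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.all.forwarding
# the interface should have picked up an address from the fd00:: range
ip -6 addr show dev eth0
# the hosts should reach each other over IPv6 (run on master, target is node2)
ping6 -c 3 fd00::39df:8f1b:e228:d42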
2. Configuration in the certificate source files. Installation itself is not covered in depth here; the focus is on the configuration. For the installation steps, see the Kubernetes installation article.
  • apiserver-csr.json (the fd00:: entries in hosts are the node addresses)
{
  "CN": "kube-apiserver",
  "hosts": [
    "10.254.0.1",
    "192.168.6.110",
    "192.168.6.112",
    "192.168.6.130",
    "192.168.6.113",
    "fd00::20c:29ff:fe9f:52be",
    "fd00::39df:8f1b:e228:d42",
    "fd00::20c:29ff:fead:d381",
    "127.0.0.1",
    "::1",
    "fd03::1",
    "fd05::1",
    "localhost",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "NanJing",
    "L": "NanJing",
    "O": "Kubernetes",
    "OU": "Kubernetes-manual"
  }]
}
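The article does not show the certificate generation itself; with the usual cfssl workflow it would look roughly like the following. This is only a sketch: the ca.pem/ca-key.pem paths, ca-config.json, and the kubernetes profile name are assumptions, not taken from this article.

# generate apiserver.pem / apiserver-key.pem from the CSR above (paths and profile are assumptions)
cfssl gencert \
  -ca=/data/cloud/pki/ca.pem \
  -ca-key=/data/cloud/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  apiserver-csr.json | cfssljson -bare apiserver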
  • etcd-csr.json — to keep things simple, both the IPv4 and the IPv6 addresses are included here.
{
  "CN": "etcd",
  "hosts": [
    "192.168.6.110",
    "192.168.6.112",
    "192.168.6.130",
    "192.168.6.113",
    "fd00::20c:29ff:fe9f:52be",
    "fd00::39df:8f1b:e228:d42",
    "fd00::20c:29ff:fead:d381",
    "127.0.0.1",
    "::1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "NanJing",
      "L": "NanJing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
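After generating the etcd certificate the same way, it is worth confirming that the IPv6 addresses actually ended up in the Subject Alternative Name list (a minimal check; the path matches the etcd unit file below):

# the SAN list should include the fd00:: addresses and ::1
openssl x509 -in /data/cloud/pki/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'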
  • Other files that do not contain IP addresses need no changes.

3. Etcd configuration. Essentially just replace the IPv4 addresses with IPv6 ones; note that IPv6 addresses in URLs must be wrapped in square brackets.

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
#User=etcd
ExecStart=/data/cloud/etcd/etcd \
--name=node1 \
--heartbeat-interval=500 \
--election-timeout=5000 \
--cert-file=/data/cloud/pki/etcd.pem \
--key-file=/data/cloud/pki/etcd-key.pem \
--trusted-ca-file=/data/cloud/pki/ca.pem \
--peer-cert-file=/data/cloud/pki/etcd.pem \
--peer-key-file=/data/cloud/pki/etcd-key.pem \
--peer-trusted-ca-file=/data/cloud/pki/ca.pem \
--initial-advertise-peer-urls=https://[fd00::20c:29ff:fe9f:52be]:2380 \
--listen-peer-urls=https://[fd00::20c:29ff:fe9f:52be]:2380 \
--listen-client-urls=https://[fd00::20c:29ff:fe9f:52be]:2379,http://[::1]:2379 \
--advertise-client-urls=https://[fd00::20c:29ff:fe9f:52be]:2379 \
--initial-cluster-token=kubernetes \
--initial-cluster=node1=https://[fd00::20c:29ff:fe9f:52be]:2380,node2=https://[fd00::39df:8f1b:e228:d42]:2380,node3=https://[fd00::20c:29ff:fead:d381]:2380 \
--initial-cluster-state=new \
--data-dir=/data/cloud/work/etcd

Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
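Once all three members are started, the cluster can be health-checked over the IPv6 client URLs. A sketch, assuming etcdctl sits next to the etcd binary and reusing the certificate paths from the unit file above:

# note the [] around each IPv6 address
ETCDCTL_API=3 /data/cloud/etcd/etcdctl \
  --endpoints=https://[fd00::20c:29ff:fe9f:52be]:2379,https://[fd00::39df:8f1b:e228:d42]:2379,https://[fd00::20c:29ff:fead:d381]:2379 \
  --cacert=/data/cloud/pki/ca.pem \
  --cert=/data/cloud/pki/etcd.pem \
  --key=/data/cloud/pki/etcd-key.pem \
  endpoint health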

4. Key apiserver configuration

    --bind-address=:: \                          # equivalent to 0.0.0.0 in IPv4
    --secure-port=6443 \
    --insecure-port=0 \                          # disable the insecure port
    --advertise-address=fd00::20c:29ff:fe9f:52be \
    --service-cluster-ip-range=fd03::/120 \      # service IP range
    --service-node-port-range=30000-32767 \
    --etcd-servers=https://[fd00::20c:29ff:fe9f:52be]:2379,https://[fd00::39df:8f1b:e228:d42]:2379,https://[fd00::20c:29ff:fead:d381]:2379 \
# remaining flags omitted ......
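To confirm the apiserver really binds the IPv6 wildcard address, something like the following can be used (a sketch; depending on RBAC an anonymous /healthz request may be rejected, but even a 401/403 shows the socket answers over IPv6):

# 6443 should be listening on ::
ss -ltn | grep 6443
# reach the secure port over the IPv6 advertise address (-k skips cert verification)
curl -6gk https://[fd00::20c:29ff:fe9f:52be]:6443/healthz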

5. Key controller-manager configuration

ExecStart=/data/cloud/kubernetes/bin/kube-controller-manager \
  --bind-address=:: \
  --allocate-node-cidrs=true \
  --cluster-cidr=fd05::/120 \
  --node-cidr-mask-size=121 \      # must be larger than the /120 cluster-cidr prefix above
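Why the mask size must be larger: the controller-manager carves one pod CIDR per node out of cluster-cidr, so fd05::/120 split into /121 blocks yields only 2^(121-120) = 2 per-node subnets of 128 addresses each. That is enough for the two worker nodes in this setup, but more nodes would need a wider cluster-cidr or a bigger gap between the two prefix lengths. The allocation can be checked once the nodes register:

# show the /121 pod CIDR allocated to each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR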

6. Key scheduler configuration

ExecStart=/data/cloud/kubernetes/bin/kube-scheduler \
  --bind-address=:: \
  --leader-elect=true \
  --logtostderr=false \
  --kubeconfig=/data/cloud/pki/scheduler.conf \
  --log-dir=/data/cloud/work/kubernetes/kube-scheduler \
  --v=2

7. Key kubelet configuration

ExecStart=/data/cloud/kubernetes/bin/kubelet \
  --fail-swap-on=false \
  --address=:: \
  --healthz-bind-address=:: \
  --hostname-override=node2 \
  --node-ip=fd00::39df:8f1b:e228:d42 \      # must be set, otherwise the node registers its IPv4 address by default
  --pod-infra-container-image=k8s.gcr.io/pause:3.1 \
  --network-plugin=cni  --cni-bin-dir=/opt/cni/bin \
  --kubeconfig=/data/cloud/pki/kubelet.conf \
  --bootstrap-kubeconfig=/data/cloud/pki/bootstrap.conf \
  --pod-manifest-path=/data/cloud/kubernetes/manifests \
  --allow-privileged=true \
  --cluster-dns=fd05::2 \                   # DNS service address
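Once the kubelet has registered, the advertised address can be double-checked (a small sketch; it simply reads back what --node-ip caused the node to report):

# should print the IPv6 address set via --node-ip
kubectl get node node2 -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'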

8. Key kube-proxy configuration

ExecStart=/data/cloud/kubernetes/bin/kube-proxy \
 --bind-address=:: \
 --hostname-override=node2 \
 --cluster-cidr=fd05::/120 \
 --kubeconfig=/data/cloud/pki/proxy.conf \
 --logtostderr=true \
 --log-dir=/data/cloud/work/kubernetes/kube-proxy \
 --v=2
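With an IPv6 service CIDR, kube-proxy programs ip6tables instead of iptables (assuming the default iptables proxy mode, since no --proxy-mode is set above). A quick way to see whether the service rules landed on the node:

# the service NAT rules should appear in the IPv6 tables
ip6tables -t nat -L KUBE-SERVICES -n | head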

9. Docker configuration (/etc/docker/daemon.json). The data-root is moved away from its default location as a personal preference.

{
  "insecure-registries": ["0.0.0.0/0"],
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64",
  "data-root": "/data/cloud/work/docker",
  "hosts": ["unix:///var/run/docker.sock", "tcp://[::]:2375"],
  "log-level": "debug"
}
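After editing daemon.json, docker has to be restarted, and the IPv6 settings can be read back from the default bridge network (a minimal sketch):

# restart docker and confirm IPv6 is enabled on the default bridge
systemctl daemon-reload && systemctl restart docker
docker network inspect bridge -f '{{.EnableIPv6}} {{json .IPAM.Config}}'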

4. Start the components and verify

  • Overall cluster status
    For convenience, an alias was set up:
    alias kubectl='kubectl --kubeconfig=/data/cloud/pki/admin.conf'
[root@node1 system]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                                                   ERROR
controller-manager   Healthy     ok                                                                                                                        
scheduler            Healthy     ok                                                                                                                        
etcd-0               Healthy     {"health":"true"}                                                                                                         
etcd-2               Healthy     {"health":"true"}                                                                                                         
etcd-1               Healthy     {"health":"true"}
  • Node status; note that the INTERNAL-IP registered here is the IPv6 address
[root@node1 system]# kubectl get no -owide  
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP                EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
node2   Ready    <none>   36m   v1.13.0   fd00::39df:8f1b:e228:d42   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64       docker://18.9.3
node3   Ready    <none>   51m   v1.13.0   fd00::20c:29ff:fead:d381   <none>        CentOS Linux 7 (Core)   3.10.0-957.5.1.el7.x86_64   docker://18.9.2
  • The service IP allocated by Kubernetes
[root@node1 system]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   fd03::1      <none>        443/TCP   3h34m
  • Calico configuration (the IPv6-related manifest settings are sketched below)
# point Calico at the kubeconfig / certificates
kubeconfig_filepath: "/data/cloud/pki/admin.conf"
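The article only shows the kubeconfig path; for context, these are the IPv6-related environment variables that a calico-node DaemonSet in calico.yaml typically carries. This is a sketch of what such a manifest usually looks like with the network plan above filled in, not a copy of the author's file:

        # IPv6-related env on the calico-node container (values assume the plan above)
        - name: IP
          value: "none"            # disable IPv4 autodetection for an IPv6-only setup
        - name: IP6
          value: "autodetect"
        - name: FELIX_IPV6SUPPORT
          value: "true"
        - name: CALICO_IPV6POOL_CIDR
          value: "fd05::/120"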
  • Deploy Calico
[root@node1 calico]# kubectl apply -f calico.yaml
[root@node1 yaml]# kubectl get pod -n kube-system -owide 
NAME                READY   STATUS    RESTARTS   AGE    IP                         NODE    NOMINATED NODE   READINESS GATES
calico-node-mwkj5   1/1     Running   0          87m    fd00::39df:8f1b:e228:d42   node2   <none>           <none>
calico-node-vjhpb   1/1     Running   0          102m   fd00::20c:29ff:fead:d381   node3   <none>           <none>
  • Test
[root@node1 yaml]# kubectl run tomcat  --image=tomcat:8.0  --replicas=2 --port=8080
[root@node1 yaml]# kubectl get pod -owide 
NAME                      READY   STATUS    RESTARTS   AGE   IP         NODE    NOMINATED NODE   READINESS GATES
tomcat-79d98465c6-jqvgp   1/1     Running   0          17s   fd05::b    node3   <none>           <none>
tomcat-79d98465c6-n4rgh   1/1     Running   0          17s   fd05::86   node2   <none>           <none>
# access the container directly from node2
[root@node2 images]# curl -6g  [fd05::86]:8080



<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <title>Apache Tomcat/8.5.38</title>
        <link href="favicon.ico" rel="icon" type="image/x-icon" />
        <link href="favicon.ico" rel="shortcut icon" type="image/x-icon" />
        <link href="tomcat.css" rel="stylesheet" type="text/css" />
    </head>

······
# access the container directly from node3
[root@node3 cloud]# curl -6g  [fd05::b]:8080



<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <title>Apache Tomcat/8.5.38</title>
        <link href="favicon.ico" rel="icon" type="image/x-icon" />
        <link href="favicon.ico" rel="shortcut icon" type="image/x-icon" />
        <link href="tomcat.css" rel="stylesheet" type="text/css" />
    </head>
······
# expose the deployment via a NodePort service
[root@node1 calico]# kubectl expose deployment tomcat  --port=8080 --target-port=8080 --type=NodePort 
service/tomcat exposed
[root@node1 yaml]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   fd03::1      <none>        443/TCP          6h38m
tomcat       NodePort    fd03::46     <none>        8080:31900/TCP   21m
  • Access via the service IP and port (8080). Since kube-proxy is not installed on the master, this was tested from node2 and node3 (screenshot omitted for brevity); see the curl sketch below.
  • Access via a node address and the NodePort (31900); see the curl sketch below.
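Both access paths can be reproduced from node2 or node3 with curl, reusing the service IP, NodePort, and node address shown above:

# via the cluster IP of the tomcat service (8080)
curl -6g [fd03::46]:8080
# via the node's IPv6 address and the NodePort (31900)
curl -6g [fd00::39df:8f1b:e228:d42]:31900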

Author: doublegao
Original link: https://www.jianshu.com/p/e92dec9f9cf4
