This document describes how to install a single control-plane Kubernetes cluster (v1.15) with kubeadm on CentOS, then deploy the external OpenStack cloud provider and use the Cinder CSI plugin to consume Cinder volumes as persistent volumes in Kubernetes.
## Preparation in OpenStack

This Kubernetes cluster will run on OpenStack VMs, so let's create a few things in OpenStack first:
- A project/tenant for the Kubernetes cluster
- A user in that project for Kubernetes, to query node information, attach volumes, etc.
- A private network and subnet
- A router for the private network, connected to the public network to get floating IPs
- A security group for all the Kubernetes VMs
- One VM as the control plane node (master) and a few VMs as worker nodes
The security group will have the following rules to open the ports for Kubernetes.

### Control plane node (master)
Protocol | Port Number | Description |
---|---|---|
TCP | 6443 | Kubernetes API server |
TCP | 2379-2380 | etcd server client API |
TCP | 10250 | Kubelet API |
TCP | 10251 | kube-scheduler |
TCP | 10252 | kube-controller-manager |
TCP | 10255 | Read-only Kubelet API |
### Worker nodes

Protocol | Port Number | Description |
---|---|---|
TCP | 10250 | Kubelet API |
TCP | 10255 | Read-only Kubelet API |
TCP | 30000-32767 | NodePort services |
### CNI ports on both control plane and worker nodes

Protocol | Port Number | Description |
---|---|---|
TCP | 179 | Calico BGP network |
TCP | 9099 | Calico felix (health check) |
UDP | 8285 | Flannel |
UDP | 8472 | Flannel |
TCP | 6781-6784 | Weave Net |
UDP | 6783-6784 | Weave Net |
The CNI-specific ports only need to be opened when that particular CNI plugin is used. In this guide we use Weave Net, so only the Weave Net ports (TCP 6781-6784 and UDP 6783-6784) need to be opened in the security group.
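The preparation steps above can be sketched with the `openstack` CLI. The security group name `kubernetes` and the choice to apply one group to all VMs are assumptions for illustration; only the Weave Net CNI ports used later in this guide are included:

```shell
# Security group for all Kubernetes VMs (name is an assumption)
openstack security group create kubernetes

# Control plane ports (the colon denotes a port range)
for port in 6443 2379:2380 10250 10251 10252 10255; do
  openstack security group rule create --protocol tcp --dst-port "$port" kubernetes
done

# NodePort range on the worker nodes
openstack security group rule create --protocol tcp --dst-port 30000:32767 kubernetes

# Weave Net CNI ports
openstack security group rule create --protocol tcp --dst-port 6781:6784 kubernetes
openstack security group rule create --protocol udp --dst-port 6783:6784 kubernetes
```

Pass the group at boot time with `openstack server create --security-group kubernetes ...` so every VM gets the same rules.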
The control plane node needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to /etc/hosts.

For example, if the VM is named master1 and has an internal IP of 192.168.1.4, add it to /etc/hosts and set the hostname to master1:
```shell
echo "192.168.1.4 master1" >> /etc/hosts

hostnamectl set-hostname master1
```
## Install Docker and Kubernetes

Next, we'll follow the official documents to install Docker and Kubernetes using kubeadm.

Install Docker following the steps from the container runtime documentation.

Note that it is a best practice to use systemd as the cgroup driver for Kubernetes. If you use an internal container registry, add it to the Docker config.
```shell
# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Configure the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
```
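As an illustrative sketch of the internal-registry note above: an internal registry that is not served over trusted TLS can be listed in the same /etc/docker/daemon.json. The address `registry.internal.example:5000` is a placeholder, not part of the original setup:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["registry.internal.example:5000"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
```

If the registry serves a certificate from a private CA, it is usually better to install that CA as /etc/docker/certs.d/registry.internal.example:5000/ca.crt than to mark the registry insecure.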
Install kubeadm following the steps from the kubeadm installation documentation.
```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
# Caveat: In a production environment you may not want to disable SELinux, please refer to Kubernetes documents about SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# check if br_netfilter module is loaded
lsmod | grep br_netfilter
# if not, load it explicitly with
modprobe br_netfilter
```
Now let's create a single control-plane cluster, following the official documentation Creating a single control-plane cluster with kubeadm.

We will mostly follow that document, but also add extra things for the cloud provider. To make things clearer, we will use a kubeadm-config.yml configuration file for the control plane node. In this config we specify the external OpenStack cloud provider and where to find its configuration. We also enable the storage API in the API server's runtime config, so we can use OpenStack volumes as persistent volumes in Kubernetes.
```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: "v1.15.1"
apiServer:
  extraArgs:
    enable-admission-plugins: NodeRestriction
    runtime-config: "storage.k8s.io/v1=true"
controllerManager:
  extraArgs:
    external-cloud-volume-plugin: openstack
  extraVolumes:
  - name: "cloud-config"
    hostPath: "/etc/kubernetes/cloud-config"
    mountPath: "/etc/kubernetes/cloud-config"
    readOnly: true
    pathType: File
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.224.0.0/16"
  dnsDomain: "cluster.local"
```
Now we'll create the OpenStack configuration /etc/kubernetes/cloud-config. Note that the tenant here is the one we created for all Kubernetes VMs at the beginning; all VMs should be launched in this project/tenant. You also need to create a user in this tenant for Kubernetes to do the queries. ca-file is the CA root certificate of the OpenStack API endpoint, for example https://openstack.cloud:5000/v3; at the time of writing the cloud provider does not allow insecure connections (skipping the CA check).
```ini
[Global]
region=RegionOne
username=username
password=password
auth-url=https://openstack.cloud:5000/v3
tenant-id=14ba698c0aec4fd6b7dc8c310f664009
domain-id=default
ca-file=/etc/kubernetes/ca.pem

[LoadBalancer]
subnet-id=b4a9a292-ea48-4125-9fb2-8be2628cb7a1
floating-network-id=bc8a590a-5d65-4525-98f3-f7ef29c727d5

[BlockStorage]
bs-version=v2

[Networking]
public-network-name=public
ipv6-support-disabled=false
```
Next, run kubeadm to initiate the control plane node:
```shell
kubeadm init --config=kubeadm-config.yml
```
With the initialization completed, copy the admin config to .kube:
```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
At this stage, the control plane node is created but not ready. All the nodes have the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule and are waiting to be initialized by the cloud-controller-manager.
```
# kubectl describe no master1
Name:               master1
Roles:              master
......
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
......
```
Now, following the documentation Using controller manager with kubeadm, deploy the OpenStack cloud controller manager into the cluster.

Create a secret with the cloud-config for the openstack cloud provider.
```shell
kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run -o yaml > cloud-config-secret.yaml

kubectl apply -f cloud-config-secret.yaml
```
Get the CA certificate of the OpenStack API endpoints and put it into /etc/kubernetes/ca.pem.

Create the RBAC resources.
```shell
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-roles.yaml

kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml
```
We'll run the OpenStack cloud controller manager as a DaemonSet rather than a pod. The manager only runs on the control plane nodes, so if there are multiple control plane nodes, multiple pods will run for high availability. Create openstack-cloud-controller-manager-ds.yaml with the following manifest, then apply it.
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openstack-cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: openstack-cloud-controller-manager
spec:
  selector:
    matchLabels:
      k8s-app: openstack-cloud-controller-manager
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: openstack-cloud-controller-manager
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      securityContext:
        runAsUser: 1001
      tolerations:
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
      serviceAccountName: cloud-controller-manager
      containers:
        - name: openstack-cloud-controller-manager
          image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.15.0
          args:
            - /bin/openstack-cloud-controller-manager
            - --v=1
            - --cloud-config=$(CLOUD_CONFIG)
            - --cloud-provider=openstack
            - --use-service-account-credentials=true
            - --address=127.0.0.1
          volumeMounts:
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
            - mountPath: /etc/config
              name: cloud-config-volume
              readOnly: true
            - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
              name: flexvolume-dir
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
          resources:
            requests:
              cpu: 200m
          env:
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
      hostNetwork: true
      volumes:
      - hostPath:
          path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
          type: DirectoryOrCreate
        name: flexvolume-dir
      - hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
        name: k8s-certs
      - hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
        name: ca-certs
      - name: cloud-config-volume
        secret:
          secretName: cloud-config
      - name: ca-cert
        secret:
          secretName: openstack-ca-cert
```
When the controller manager is running, it will query OpenStack to get information about the nodes and remove the taint. In the node info you'll see the UUID of the VM in OpenStack.
```
# kubectl describe no master1
Name:               master1
Roles:              master
......
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
......
sage:docker: network plugin is not ready: cni config uninitialized
......
PodCIDR:                     10.224.0.0/24
ProviderID:                  openstack:///548e3c46-2477-4ce2-968b-3de1314560a5
```
Now install your favourite CNI and the control plane node will become ready.

For example, to install Weave Net, run:
```shell
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
Next we will set up the worker nodes.

First, install docker and kubeadm in the same way as we did on the control plane node. To join them to the cluster we need the token and ca cert hash from the output of the control plane installation. If it is expired or lost, we can recreate it with these commands.
```shell
# check if token is expired
kubeadm token list

# re-create token and show join command
kubeadm token create --print-join-command
```
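The ca cert hash can also be recomputed directly from the cluster CA certificate: it is the SHA-256 digest of the DER-encoded public key, per the kubeadm documentation. A self-contained sketch using a throwaway certificate; on a real control plane node you would point the second command at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway CA for demonstration only (a real cluster already
# has its CA at /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -subj "/CN=demo-kubernetes-ca" -days 1 2>/dev/null

# The caCertHashes value is "sha256:" plus the SHA-256 digest of the
# DER-encoded public key extracted from the CA certificate
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | sha256sum | awk '{print "sha256:" $1}'
```

The printed value has the same `sha256:<64 hex digits>` shape as the hash in the join command.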
Create the kubeadm-config.yml configuration file for the worker nodes with the above token and ca cert hash.
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.1.7:6443
    token: 0c0z4p.dnafh6vnmouus569
    caCertHashes: ["sha256:fcb3e956a6880c05fc9d09714424b827f57a6fdc8afc44497180905946527adf"]
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
```
apiServerEndpoint is the control plane node; token and caCertHashes are taken from the join command printed in the output of the kubeadm token create command.

Run kubeadm and the worker nodes will join the cluster.
```shell
kubeadm join --config kubeadm-config.yml
```
At this stage, we have a Kubernetes cluster running with an external OpenStack cloud provider. The provider tells Kubernetes about the mapping between Kubernetes nodes and OpenStack VMs. If Kubernetes wants to attach a persistent volume to a pod, it can find out from the mapping which OpenStack VM the pod is running on, and attach the underlying OpenStack volume to the VM accordingly.
## Deploy Cinder CSI

The integration with Cinder is provided by the external Cinder CSI plugin, as described in the Cinder CSI documentation.

We'll perform the following steps to install the Cinder CSI plugin.

First, create a secret with the CA certificate of OpenStack's API endpoints. This is the same cert file we used in the cloud provider above.
```shell
kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml

kubectl apply -f openstack-ca-cert.yaml
```
Then create the RBAC resources.
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/release-1.15/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml

kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml
```
The Cinder CSI plugin consists of a controller plugin and a node plugin. The controller communicates with the Kubernetes APIs and the Cinder APIs to create/attach/detach/delete Cinder volumes. The node plugin in turn runs on each worker node to bind a storage device (an attached volume) to a pod, and unbind it during deletion. Create cinder-csi-controllerplugin.yaml and apply it to create the csi controller.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: csi-cinder-controller-service
  namespace: kube-system
  labels:
    app: csi-cinder-controllerplugin
spec:
  selector:
    app: csi-cinder-controllerplugin
  ports:
    - name: dummy
      port: 12345
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-cinder-controllerplugin
  namespace: kube-system
spec:
  serviceName: "csi-cinder-controller-service"
  replicas: 1
  selector:
    matchLabels:
      app: csi-cinder-controllerplugin
  template:
    metadata:
      labels:
        app: csi-cinder-controllerplugin
    spec:
      serviceAccount: csi-cinder-controller-sa
      containers:
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v1.0.1
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.0.1
          args:
            - "--provisioner=csi-cinderplugin"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-snapshotter
          image: quay.io/k8scsi/csi-snapshotter:v1.0.1
          args:
            - "--connection-timeout=15s"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/csi/sockets/pluginproxy/
              name: socket-dir
        - name: cinder-csi-plugin
          image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
          args:
            - /bin/cinder-csi-plugin
            - "--v=5"
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
            - "--cluster=$(CLUSTER_NAME)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
            - name: CLUSTER_NAME
              value: kubernetes
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/csi/sockets/pluginproxy/
            type: DirectoryOrCreate
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
        - name: ca-cert
          secret:
            secretName: openstack-ca-cert
```
Create cinder-csi-nodeplugin.yaml and apply it to create the csi node.
```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-cinder-nodeplugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-cinder-nodeplugin
  template:
    metadata:
      labels:
        app: csi-cinder-nodeplugin
    spec:
      serviceAccount: csi-cinder-node-sa
      hostNetwork: true
      containers:
        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/cinder.csi.openstack.org /registration/cinder.csi.openstack.org-reg.sock"]
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: cinder-csi-plugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
          args:
            - /bin/cinder-csi-plugin
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: kubelet-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: pods-cloud-data
              mountPath: /var/lib/cloud/data
              readOnly: true
            - name: pods-probe-dir
              mountPath: /dev
              mountPropagation: "HostToContainer"
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/cinder.csi.openstack.org
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: kubelet-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: pods-cloud-data
          hostPath:
            path: /var/lib/cloud/data
            type: Directory
        - name: pods-probe-dir
          hostPath:
            path: /dev
            type: Directory
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
        - name: ca-cert
          secret:
            secretName: openstack-ca-cert
```
When they are all running, create a storage class for Cinder.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-cinderplugin
provisioner: csi-cinderplugin
```
Then we can create a PVC with this storage class.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvol
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-sc-cinderplugin
```
When the PVC is created, a Cinder volume is created correspondingly.
```
# kubectl get pvc
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
myvol   Bound    pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad   1Gi        RWO            csi-sc-cinderplugin   3s
```
In OpenStack, the volume name will match the generated Kubernetes persistent volume name. In this example it is pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad.

Now we can create a pod with the PVC.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          hostPort: 8081
          protocol: TCP
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myvol
```
When the pod is running, the volume will be attached to the pod. If we go back to OpenStack, we can see the Cinder volume is mounted to the worker node where the pod runs.
```
# openstack volume show 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f
+--------------------------------+------------------------------------------------------------------------------+
| Field                          | Value                                                                        |
+--------------------------------+------------------------------------------------------------------------------+
| attachments                    | [{u'server_id': u'1c5e1439-edfa-40ed-91fe-2a0e12bc7eb4', u'attachment_id':   |
|                                | u'11a15b30-5c24-41d4-86d9-d92823983a32', u'attached_at':                     |
|                                | u'2019-07-24T05:02:34.000000', u'host_name': u'compute-6', u'volume_id':     |
|                                | u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f', u'device': u'/dev/vdb', u'id':      |
|                                | u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f'}]                                    |
| availability_zone              | nova                                                                         |
| bootable                       | false                                                                        |
| consistencygroup_id            | None                                                                         |
| created_at                     | 2019-07-24T05:02:18.000000                                                   |
| description                    | Created by OpenStack Cinder CSI driver                                       |
| encrypted                      | False                                                                        |
| id                             | 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f                                         |
| migration_status               | None                                                                         |
| multiattach                    | False                                                                        |
| name                           | pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad                                     |
| os-vol-host-attr:host          | rbd:volumes@rbd#rbd                                                          |
| os-vol-mig-status-attr:migstat | None                                                                         |
| os-vol-mig-status-attr:name_id | None                                                                         |
| os-vol-tenant-attr:tenant_id   | 14ba698c0aec4fd6b7dc8c310f664009                                             |
| properties                     | attached_mode='rw', cinder.csi.openstack.org/cluster='kubernetes'            |
| replication_status             | None                                                                         |
| size                           | 1                                                                            |
| snapshot_id                    | None                                                                         |
| source_volid                   | None                                                                         |
| status                         | in-use                                                                       |
| type                           | rbd                                                                          |
| updated_at                     | 2019-07-24T05:02:35.000000                                                   |
| user_id                        | 5f6a7a06f4e3456c890130d56babf591                                             |
+--------------------------------+------------------------------------------------------------------------------+
```
## Summary

In this walk-through, we deployed a Kubernetes cluster on OpenStack VMs and integrated it with the external OpenStack cloud provider. Then, on this Kubernetes cluster, we deployed the Cinder CSI plugin, which can create Cinder volumes and expose them in Kubernetes as persistent volumes.
Translated from: https://kubernetes.io/blog/2020/02/07/deploying-external-openstack-cloud-provider-with-kubeadm/