kubeadm Source Code Analysis (includes an offline Kubernetes package: three-step install)

Kubernetes offline install package: a three-step install, unbelievably simple.

kubeadm Source Code Analysis

To be honest, the kubeadm code is fairly mediocre; the quality is not very high.

First, the core things kubeadm does, in a few key points:

  • kubeadm generates certificates under /etc/kubernetes/pki
  • kubeadm generates the static pod YAML manifests, all under /etc/kubernetes/manifests
  • kubeadm generates the kubelet config, kubectl config, etc. under /etc/kubernetes
  • kubeadm starts DNS via client-go

kubeadm init

The code entry point is cmd/kubeadm/app/cmd/init.go. I recommend taking a look at cobra, the CLI library kubeadm builds its commands on; a minimal sketch follows.
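
For readers who haven't used cobra: a minimal, hypothetical sketch (not actual kubeadm code) of how an init-style subcommand is wired up:

    package main

    import (
        "fmt"

        "github.com/spf13/cobra"
    )

    func main() {
        rootCmd := &cobra.Command{Use: "kubeadm"}
        initCmd := &cobra.Command{
            Use:   "init",
            Short: "Run this to set up a Kubernetes master",
            // kubeadm's real Run delegates to the Init workflow analyzed below.
            Run: func(cmd *cobra.Command, args []string) {
                fmt.Println("running init workflow...")
            },
        }
        rootCmd.AddCommand(initCmd)
        if err := rootCmd.Execute(); err != nil {
            fmt.Println(err)
        }
    }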

Find the Run function and walk through the main flow:

1. If the certificates don't exist, create them; so if we have our own certificates we can simply put them under /etc/kubernetes/pki. How the certificates are generated is examined in detail below.

       if res, _ := certsphase.UsingExternalCA(i.cfg); !res {
           if err := certsphase.CreatePKIAssets(i.cfg); err != nil {
               return err
           }
       }

2. Create the kubeconfig files.

       if err := kubeconfigphase.CreateInitKubeConfigFiles(kubeConfigDir, i.cfg); err != nil {
           return err
       }

3. Create the manifest files; etcd, apiserver, controller-manager, and scheduler are all created here. Note that if your config file already lists etcd endpoints, the local etcd is skipped, so we can run our own etcd cluster instead of the default single-node etcd. Very useful.

       controlplanephase.CreateInitStaticPodManifestFiles(manifestDir, i.cfg)
       if len(i.cfg.Etcd.Endpoints) == 0 {
           if err := etcdphase.CreateLocalEtcdStaticPodManifestFile(manifestDir, i.cfg); err != nil {
               return fmt.Errorf("error creating local etcd static pod manifest file: %v", err)
           }
       }

4. Wait for the apiserver and kubelet to start. This is where the familiar "image cannot be pulled" error shows up; sometimes kubelet fails for an entirely different reason but reports the same error, misleading people into blaming image pulls.

       if err := waitForAPIAndKubelet(waiter); err != nil {
           ctx := map[string]string{
               "Error":                  fmt.Sprintf("%v", err),
               "APIServerImage":         images.GetCoreImage(kubeadmconstants.KubeAPIServer, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
               "ControllerManagerImage": images.GetCoreImage(kubeadmconstants.KubeControllerManager, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
               "SchedulerImage":         images.GetCoreImage(kubeadmconstants.KubeScheduler, i.cfg.GetControlPlaneImageRepository(), i.cfg.KubernetesVersion, i.cfg.UnifiedControlPlaneImage),
           }
           kubeletFailTempl.Execute(out, ctx)
           return fmt.Errorf("couldn't initialize a Kubernetes cluster")
       }

5. Label and taint the master; so if you want pods scheduled onto the master, just remove the taint.

       if err := markmasterphase.MarkMaster(client, i.cfg.NodeName); err != nil {
           return fmt.Errorf("error marking master: %v", err)
       }

6. Generate the bootstrap token.

       if err := nodebootstraptokenphase.UpdateOrCreateToken(client, i.cfg.Token, false, i.cfg.TokenTTL.Duration, kubeadmconstants.DefaultTokenUsages, []string{kubeadmconstants.NodeBootstrapTokenAuthGroup}, tokenDescription); err != nil {
           return fmt.Errorf("error updating or creating token: %v", err)
       }

7. Create DNS and kube-proxy via client-go.

       if err := dnsaddonphase.EnsureDNSAddon(i.cfg, client); err != nil {
           return fmt.Errorf("error ensuring dns addon: %v", err)
       }
       if err := proxyaddonphase.EnsureProxyAddon(i.cfg, client); err != nil {
           return fmt.Errorf("error ensuring proxy addon: %v", err)
       }

A criticism: the code is one mindless linear flow from start to finish. If I were writing it, I would abstract an interface with methods like RenderConf, Save, Run, and Clean, and have DNS, kube-proxy, and the other components implement it; a sketch follows. A related problem is that the DNS and kube-proxy configs are never rendered out to disk, perhaps because they are not static pods. And then there is the bug at join time, discussed below.
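
A hypothetical sketch of that abstraction, to make the critique concrete (my naming, not kubeadm's):

    // Phase is the interface the author wishes kubeadm had: each component
    // (DNS, kube-proxy, etcd, the control plane...) would implement it, and
    // init would simply iterate over a []Phase instead of one long function.
    type Phase interface {
        RenderConf() error // render the component's configuration
        Save() error       // persist manifests/config to disk
        Run() error        // start or apply the component
        Clean() error      // roll back on failure
    }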

Certificate generation

It loops over this pile of functions; we only need to look at one or two of them, the rest are all similar:

    certActions := []func(cfg *kubeadmapi.MasterConfiguration) error{
        CreateCACertAndKeyfiles,
        CreateAPIServerCertAndKeyFiles,
        CreateAPIServerKubeletClientCertAndKeyFiles,
        CreateServiceAccountKeyAndPublicKeyFiles,
        CreateFrontProxyCACertAndKeyFiles,
        CreateFrontProxyClientCertAndKeyFiles,
    }

Root CA generation:

    // Returns the root CA certificate and private key
    func NewCACertAndKey() (*x509.Certificate, *rsa.PrivateKey, error) {
        caCert, caKey, err := pkiutil.NewCertificateAuthority()
        if err != nil {
            return nil, nil, fmt.Errorf("failure while generating CA certificate and key: %v", err)
        }
        return caCert, caKey, nil
    }

The k8s.io/client-go/util/cert library has two functions, one to generate the key and one to generate the cert:

    key, err := certutil.NewPrivateKey()
    config := certutil.Config{
        CommonName: "kubernetes",
    }
    cert, err := certutil.NewSelfSignedCACert(config, key)

We can also fill in other certificate information in the config:

    type Config struct {
        CommonName   string
        Organization []string
        AltNames     AltNames
        Usages       []x509.ExtKeyUsage
    }

The private key just wraps the function from the rsa library:

    import (
        "crypto/rsa"
        "crypto/x509"
    )

    func NewPrivateKey() (*rsa.PrivateKey, error) {
        return rsa.GenerateKey(cryptorand.Reader, rsaKeySize)
    }

It is a self-signed certificate, so the root CA only carries the CommonName; Organization is effectively unset:

    func NewSelfSignedCACert(cfg Config, key *rsa.PrivateKey) (*x509.Certificate, error) {
        now := time.Now()
        tmpl := x509.Certificate{
            SerialNumber: new(big.Int).SetInt64(0),
            Subject: pkix.Name{
                CommonName:   cfg.CommonName,
                Organization: cfg.Organization,
            },
            NotBefore: now.UTC(),
            NotAfter:  now.Add(duration365d * 10).UTC(),
            KeyUsage:  x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
            IsCA: true,
        }

        certDERBytes, err := x509.CreateCertificate(cryptorand.Reader, &tmpl, &tmpl, key.Public(), key)
        if err != nil {
            return nil, err
        }
        return x509.ParseCertificate(certDERBytes)
    }

Once generated, write it to file:

    pkiutil.WriteCertAndKey(pkiDir, baseName, cert, key)
    certutil.WriteCert(certificatePath, certutil.EncodeCertPEM(cert))

This calls the pem library for the encoding:

    import "encoding/pem"

    func EncodeCertPEM(cert *x509.Certificate) []byte {
        block := pem.Block{
            Type:  CertificateBlockType,
            Bytes: cert.Raw,
        }
        return pem.EncodeToMemory(&block)
    }

Now let's look at the apiserver certificate generation:

    caCert, caKey, err := loadCertificateAuthorithy(cfg.CertificatesDir, kubeadmconstants.CACertAndKeyBaseName)
    // Generate the apiserver certificate from the root CA
    apiCert, apiKey, err := NewAPIServerCertAndKey(cfg, caCert, caKey)

Here AltNames matters: every address and domain name used to reach the master must be added. It corresponds to the apiServerCertSANs field in the config file. Everything else is the same as the root certificate (see the sketch after the snippet below).

    config := certutil.Config{
        CommonName: kubeadmconstants.APIServerCertCommonName,
        AltNames:   *altNames,
        Usages:     []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }
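
For a concrete picture, here is a hedged sketch of what those altNames typically contain; hostnames and IPs are placeholders, and certutil.AltNames is from k8s.io/client-go/util/cert:

    import (
        "net"

        certutil "k8s.io/client-go/util/cert"
    )

    // apiServerAltNames sketches the SANs assembled for the apiserver cert.
    func apiServerAltNames() *certutil.AltNames {
        return &certutil.AltNames{
            DNSNames: []string{
                "master-1", // the node's hostname (placeholder)
                "kubernetes",
                "kubernetes.default",
                "kubernetes.default.svc",
                "kubernetes.default.svc.cluster.local",
                // entries from apiServerCertSANs land here as well
            },
            IPs: []net.IP{
                net.ParseIP("10.96.0.1"),    // first IP of the service subnet
                net.ParseIP("192.168.0.10"), // advertise address; add your VIP for HA
            },
        }
    }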

Creating the kubeconfig files

You can see these files being created:

    return createKubeConfigFiles(
        outDir,
        cfg,
        kubeadmconstants.AdminKubeConfigFileName,
        kubeadmconstants.KubeletKubeConfigFileName,
        kubeadmconstants.ControllerManagerKubeConfigFileName,
        kubeadmconstants.SchedulerKubeConfigFileName,
    )

Kubernetes wraps two functions for rendering these configs. The difference is whether the kubeconfig file carries a token: if you need a token to log into the dashboard, or to call the API, generate the token-bearing config. The generated .conf files are basically identical; only things like the ClientName differ, so the resulting client certificates differ too. The ClientName is embedded in the certificate, and Kubernetes extracts it and uses it as the user.

Here's the key point: when building multi-tenancy we should generate kubeconfigs the same way, then bind roles to each tenant (a sketch follows the code below).

    return kubeconfigutil.CreateWithToken(
        spec.APIServer,
        "kubernetes",
        spec.ClientName,
        certutil.EncodeCertPEM(spec.CACert),
        spec.TokenAuth.Token,
    ), nil

    return kubeconfigutil.CreateWithCerts(
        spec.APIServer,
        "kubernetes",
        spec.ClientName,
        certutil.EncodeCertPEM(spec.CACert),
        certutil.EncodePrivateKeyPEM(clientKey),
        certutil.EncodeCertPEM(clientCert),
    ), nil
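
To make the multi-tenancy point concrete: after generating a tenant's kubeconfig this way, bind a role to the ClientName. This is a hypothetical helper, not kubeadm code; the Create signature matches client-go of this era (newer versions also take a context and options):

    import (
        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // bindTenantRole grants the built-in read-only "view" ClusterRole to a
    // tenant whose ClientName was baked into the certificate above.
    func bindTenantRole(client kubernetes.Interface, tenant string) error {
        binding := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: tenant + "-view"},
            Subjects: []rbacv1.Subject{{
                Kind: rbacv1.UserKind,
                Name: tenant, // must match the certificate's CommonName
            }},
            RoleRef: rbacv1.RoleRef{
                APIGroup: rbacv1.GroupName,
                Kind:     "ClusterRole",
                Name:     "view",
            },
        }
        _, err := client.RbacV1().ClusterRoleBindings().Create(binding)
        return err
    }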

Then it's just a matter of filling in the Config struct and finally writing it to a file, elided in the original code walk; a sketch of the write follows the struct below:

    import clientcmdapi "k8s.io/client-go/tools/clientcmd/api"

    return &clientcmdapi.Config{
        Clusters: map[string]*clientcmdapi.Cluster{
            clusterName: {
                Server: serverURL,
                CertificateAuthorityData: caCert,
            },
        },
        Contexts: map[string]*clientcmdapi.Context{
            contextName: {
                Cluster:  clusterName,
                AuthInfo: userName,
            },
        },
        AuthInfos:      map[string]*clientcmdapi.AuthInfo{},
        CurrentContext: contextName,
    }
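
The elided write step is one call; a minimal sketch, assuming the config built above:

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // writeKubeConfig serializes a clientcmdapi.Config to a kubeconfig file,
    // e.g. /etc/kubernetes/admin.conf.
    func writeKubeConfig(config *clientcmdapi.Config, path string) error {
        return clientcmd.WriteToFile(*config, path)
    }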

Creating the static pod YAML files

This returns the pod structs for the apiserver, controller-manager, and scheduler:

    specs := GetStaticPodSpecs(cfg, k8sVersion)

    staticPodSpecs := map[string]v1.Pod{
        kubeadmconstants.KubeAPIServer: staticpodutil.ComponentPod(v1.Container{
            Name:          kubeadmconstants.KubeAPIServer,
            Image:         images.GetCoreImage(kubeadmconstants.KubeAPIServer, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
            Command:       getAPIServerCommand(cfg, k8sVersion),
            VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeAPIServer)),
            LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeAPIServer, int(cfg.API.BindPort), "/healthz", v1.URISchemeHTTPS),
            Resources:     staticpodutil.ComponentResources("250m"),
            Env:           getProxyEnvVars(),
        }, mounts.GetVolumes(kubeadmconstants.KubeAPIServer)),
        kubeadmconstants.KubeControllerManager: staticpodutil.ComponentPod(v1.Container{
            Name:          kubeadmconstants.KubeControllerManager,
            Image:         images.GetCoreImage(kubeadmconstants.KubeControllerManager, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
            Command:       getControllerManagerCommand(cfg, k8sVersion),
            VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeControllerManager)),
            LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeControllerManager, 10252, "/healthz", v1.URISchemeHTTP),
            Resources:     staticpodutil.ComponentResources("200m"),
            Env:           getProxyEnvVars(),
        }, mounts.GetVolumes(kubeadmconstants.KubeControllerManager)),
        kubeadmconstants.KubeScheduler: staticpodutil.ComponentPod(v1.Container{
            Name:          kubeadmconstants.KubeScheduler,
            Image:         images.GetCoreImage(kubeadmconstants.KubeScheduler, cfg.GetControlPlaneImageRepository(), cfg.KubernetesVersion, cfg.UnifiedControlPlaneImage),
            Command:       getSchedulerCommand(cfg),
            VolumeMounts:  staticpodutil.VolumeMountMapToSlice(mounts.GetVolumeMounts(kubeadmconstants.KubeScheduler)),
            LivenessProbe: staticpodutil.ComponentProbe(cfg, kubeadmconstants.KubeScheduler, 10251, "/healthz", v1.URISchemeHTTP),
            Resources:     staticpodutil.ComponentResources("100m"),
            Env:           getProxyEnvVars(),
        }, mounts.GetVolumes(kubeadmconstants.KubeScheduler)),
    }

    // Get the image for a specific version
    func GetCoreImage(image, repoPrefix, k8sVersion, overrideImage string) string {
        if overrideImage != "" {
            return overrideImage
        }
        kubernetesImageTag := kubeadmutil.KubernetesVersionToImageTag(k8sVersion)
        etcdImageTag := constants.DefaultEtcdVersion
        etcdImageVersion, err := constants.EtcdSupportedVersion(k8sVersion)
        if err == nil {
            etcdImageTag = etcdImageVersion.String()
        }
        return map[string]string{
            constants.Etcd:                  fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "etcd", runtime.GOARCH, etcdImageTag),
            constants.KubeAPIServer:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-apiserver", runtime.GOARCH, kubernetesImageTag),
            constants.KubeControllerManager: fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-controller-manager", runtime.GOARCH, kubernetesImageTag),
            constants.KubeScheduler:         fmt.Sprintf("%s/%s-%s:%s", repoPrefix, "kube-scheduler", runtime.GOARCH, kubernetesImageTag),
        }[image]
    }

    // Then the pod is simply written to a file
    staticpodutil.WriteStaticPodToDisk(componentName, manifestDir, spec)

Creating the etcd manifest works the same way, so no need to belabor it. What that final WriteStaticPodToDisk call amounts to is sketched below.
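
In essence, writing a static pod means marshaling the v1.Pod to YAML and dropping it into the manifests directory, where kubelet's static-pod watcher picks it up. A minimal sketch, assuming a JSON-tag-aware YAML marshaler such as sigs.k8s.io/yaml (the real implementation lives in kubeadm's staticpod util):

    import (
        "io/ioutil"
        "path/filepath"

        "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    // writeStaticPodToDisk marshals the pod and writes it where kubelet
    // watches for static pod manifests.
    func writeStaticPodToDisk(componentName, manifestDir string, pod v1.Pod) error {
        data, err := yaml.Marshal(pod)
        if err != nil {
            return err
        }
        return ioutil.WriteFile(filepath.Join(manifestDir, componentName+".yaml"), data, 0600)
    }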

Waiting for kubelet to start

This failure is very easy to hit, and seeing it usually means kubelet did not come up. Check SELinux, swap, and whether the cgroup driver matches: try setenforce 0 && swapoff -a && systemctl restart kubelet. If that doesn't help, make sure kubelet's cgroup driver matches docker's: docker info | grep Cg. The waiter code below polls kubelet's local healthz endpoints; a rough sketch of such a wait loop follows the snippet.

    go func(errC chan error, waiter apiclient.Waiter) {
        // This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
        if err := waiter.WaitForHealthyKubelet(40*time.Second, "http://localhost:10255/healthz"); err != nil {
            errC <- err
        }
    }(errorChan, waiter)

    go func(errC chan error, waiter apiclient.Waiter) {
        // This goroutine can only make kubeadm init fail. If this check succeeds, it won't do anything special
        if err := waiter.WaitForHealthyKubelet(60*time.Second, "http://localhost:10255/healthz/syncloop"); err != nil {
            errC <- err
        }
    }(errorChan, waiter)
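
The wait itself is just polling the healthz URL until it answers or the timeout fires. A hypothetical sketch, not the exact kubeadm implementation, which hides behind the apiclient.Waiter interface; wait.PollImmediate is from k8s.io/apimachinery:

    import (
        "net/http"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForHealthyKubelet polls url every second until it returns 200 OK
    // or the timeout expires; connection errors just mean "keep polling".
    func waitForHealthyKubelet(timeout time.Duration, url string) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            resp, err := http.Get(url)
            if err != nil {
                return false, nil // kubelet not up yet
            }
            defer resp.Body.Close()
            return resp.StatusCode == http.StatusOK, nil
        })
    }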

Creating DNS and kube-proxy

This is where I discovered CoreDNS:

    if features.Enabled(cfg.FeatureGates, features.CoreDNS) {
        return coreDNSAddon(cfg, client, k8sVersion)
    }
    return kubeDNSAddon(cfg, client, k8sVersion)

The CoreDNS YAML template is written directly in the code, in /app/phases/addons/dns/manifests.go:

    CoreDNSDeployment = `
    apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          serviceAccountName: coredns
          tolerations:
          - key: CriticalAddonsOnly
            operator: Exists
          - key: {{ .MasterTaintKey }}
Then the template is rendered and the objects are created through the Kubernetes API. This creation pattern is worth learning from, though it's a bit crude; this part is written far worse than kubectl. A sketch of the rendering step follows the snippet below.

    coreDNSConfigMap := &v1.ConfigMap{}
    if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), configBytes, coreDNSConfigMap); err != nil {
        return fmt.Errorf("unable to decode CoreDNS configmap %v", err)
    }

    // Create the ConfigMap for CoreDNS or update it in case it already exists
    if err := apiclient.CreateOrUpdateConfigMap(client, coreDNSConfigMap); err != nil {
        return err
    }

    coreDNSClusterRoles := &rbac.ClusterRole{}
    if err := kuberuntime.DecodeInto(legacyscheme.Codecs.UniversalDecoder(), []byte(CoreDNSClusterRole), coreDNSClusterRoles); err != nil {
        return fmt.Errorf("unable to decode CoreDNS clusterroles %v", err)
    }
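
The rendering step itself is plain text/template. A minimal sketch under that assumption (kubeadm keeps a similar helper in its util package; the exact name may differ):

    import (
        "bytes"
        "text/template"
    )

    // renderManifest fills a template such as CoreDNSDeployment with data
    // carrying variables like MasterTaintKey, returning the final YAML bytes.
    func renderManifest(tmpl string, data interface{}) ([]byte, error) {
        t, err := template.New("manifest").Parse(tmpl)
        if err != nil {
            return nil, err
        }
        var buf bytes.Buffer
        if err := t.Execute(&buf, data); err != nil {
            return nil, err
        }
        return buf.Bytes(), nil
    }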

Worth mentioning: the kube-proxy ConfigMap really should take the apiserver address as a parameter and allow customizing it, because high-availability setups need to point it at a virtual IP, and having to modify it by hand is painful. kube-proxy is otherwise much the same, so I won't go into it; if you want to change it, edit app/phases/addons/proxy/manifests.go.

kubeadm join

kubeadm join is simpler and can be summed up in one sentence: retrieve the cluster info and create a kubeconfig (how that's done was already covered under kubeadm init), carrying the token so that kubeadm has permission to pull it.

    return https.RetrieveValidatedClusterInfo(cfg.DiscoveryFile)

The cluster info content:

    type Cluster struct {
        // LocationOfOrigin indicates where this object came from.  It is used for round tripping config post-merge, but never serialized.
        LocationOfOrigin string
        // Server is the address of the kubernetes cluster (https://hostname:port).
        Server string `json:"server"`
        // InsecureSkipTLSVerify skips the validity check for the server's certificate. This will make your HTTPS connections insecure.
        // +optional
        InsecureSkipTLSVerify bool `json:"insecure-skip-tls-verify,omitempty"`
        // CertificateAuthority is the path to a cert file for the certificate authority.
        // +optional
        CertificateAuthority string `json:"certificate-authority,omitempty"`
        // CertificateAuthorityData contains PEM-encoded certificate authority certificates. Overrides CertificateAuthority
        // +optional
        CertificateAuthorityData []byte `json:"certificate-authority-data,omitempty"`
        // Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields
        // +optional
        Extensions map[string]runtime.Object `json:"extensions,omitempty"`
    }

Then the kubeconfig is created with the token:

    return kubeconfigutil.CreateWithToken(
        clusterinfo.Server,
        "kubernetes",
        TokenUser,
        clusterinfo.CertificateAuthorityData,
        cfg.TLSBootstrapToken,
    ), nil

CreateWithToken was covered above, so I won't repeat it. With that we can generate the kubelet config file, and then just start kubelet.

The problem with kubeadm join is that when rendering the config it uses the address from cluster-info rather than the apiserver address passed on the command line. This hurts high-availability setups: we may pass in a virtual IP, but the config still contains the apiserver's own address.
