
k8s 1.14.6 Cluster Setup: Node Deployment

k8s 1.14.6 cluster deployment - This article is part of a series.
Part 5: This Article

Preface

When deploying a k8s node, the kubelet must talk to the apiserver, but initially neither side trusts the other. How, then, does the kubelet pass the master's authentication and authorization? Kubelet TLS bootstrapping solves exactly this problem. This article uses Bootstrap Token Authentication so that the kubelet can authenticate to the apiserver, obtain the apiserver's CA certificate, and successfully register itself with the cluster.

1. Create the kubelet bootstrap kubeconfig

1.1 Run the create-bootstrap-kubeconfig.sh script on the master node 192.168.18.142

The script first creates the bootstrap token, then generates bootstrap.kubeconfig, and finally creates the clusterrolebindings:

$ cat create-bootstrap-kubeconfig.sh
KUBE_APISERVER="https://192.168.18.142:6443"
TOKEN_ID=$(openssl rand -hex 3)
TOKEN_SECRET=$(openssl rand -hex 8)
BOOTSTRAP_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
AUTH_EXTRA_GROUPS="system:bootstrappers:default-node-token"

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The bootstrap token for k8s."
  token-id: ${TOKEN_ID}
  token-secret: ${TOKEN_SECRET}
  expiration: 2029-07-16T00:00:00Z
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: ${AUTH_EXTRA_GROUPS}
EOF

kubectl config set-cluster kubernetes \
    --certificate-authority=/root/k8s-1.14.6/ssl/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user \
    --token ${BOOTSTRAP_TOKEN} \
    --kubeconfig=bootstrap.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes \
    --cluster kubernetes \
    --user tls-bootstrap-token-user \
    --kubeconfig=bootstrap.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=bootstrap.kubeconfig

#Bind the custom auth-extra-groups group to the system:node-bootstrapper role
kubectl create clusterrolebinding kubelet-bootstrap \
    --clusterrole system:node-bootstrapper \
    --group ${AUTH_EXTRA_GROUPS}
#Bind the custom auth-extra-groups group to the role that auto-approves certificate signing requests
kubectl create clusterrolebinding node-autoapprove-bootstrap \
    --clusterrole system:certificates.k8s.io:certificatesigningrequests:nodeclient \
    --group ${AUTH_EXTRA_GROUPS}
#Bind the system:nodes group to the role that auto-approves CSRs for renewing expiring node client certificates
kubectl create clusterrolebinding node-autoapprove-certificate-rotation \
    --clusterrole system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
    --group system:nodes
$ sh create-bootstrap-kubeconfig.sh
  • The Secret's name must follow the format bootstrap-token-<token-id>
  • The Secret's type must be bootstrap.kubernetes.io/token
  • token-id and token-secret must match [a-z0-9]{6} and [a-z0-9]{16} respectively; the openssl rand -hex calls above satisfy this
  • auth-extra-groups defines extra groups for the user the token represents, on top of the default group system:bootstrappers
  • The username this kind of token represents is system:bootstrap:<token-id>
  • The expiration field defines when the token expires
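The token format rules above can be checked locally before creating the Secret. A minimal sketch using only openssl and grep (no cluster required):

```shell
#!/bin/sh
# Generate a bootstrap token and validate it against the format
# required by the Bootstrap Token authenticator:
#   token-id:     [a-z0-9]{6}
#   token-secret: [a-z0-9]{16}
TOKEN_ID=$(openssl rand -hex 3)        # 3 bytes -> 6 hex chars
TOKEN_SECRET=$(openssl rand -hex 8)    # 8 bytes -> 16 hex chars
BOOTSTRAP_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"

if echo "${BOOTSTRAP_TOKEN}" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
    echo "token ok: ${BOOTSTRAP_TOKEN}"
else
    echo "invalid token format" >&2
    exit 1
fi
```

Because hex output is a subset of [a-z0-9], tokens generated this way always pass; the check is mainly useful when a token is supplied by hand.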

1.2 Distribute the generated bootstrap.kubeconfig to the worker node 192.168.18.160

$ scp bootstrap.kubeconfig 192.168.18.160:/root/k8s-1.14.6/kubelet/

2. Configure and start kubelet on the worker node

Disable swap and enable the bridge netfilter sysctls first (config.yaml below sets failSwapOn: true, so kubelet will refuse to start while swap is enabled):

$ swapoff -a
$ sed -i 's/.*swap.*/#&/' /etc/fstab
$ cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
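The sed one-liner above comments out every fstab entry containing "swap". Its behavior can be confirmed against a throwaway copy rather than the real /etc/fstab (the sample entries here are hypothetical):

```shell
#!/bin/sh
# Demonstrate the swap-disabling sed expression on a scratch copy of fstab.
FSTAB=$(mktemp)
cat > "${FSTAB}" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same expression as used against /etc/fstab above; the swap line
# gains a leading '#', the root line is untouched.
sed -i 's/.*swap.*/#&/' "${FSTAB}"
cat "${FSTAB}"
rm -f "${FSTAB}"
```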

2.1 Create kubelet.service

$ cat <<EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/root/k8s-1.14.6/kubelet/bootstrap.kubeconfig --kubeconfig=/root/k8s-1.14.6/kubelet/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/root/k8s-1.14.6/kubelet/config.yaml"
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=/root/k8s-1.14.6/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_EXTRA_ARGS
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF

/root/k8s-1.14.6/kubelet/kubelet.conf is the kubeconfig file that kubelet.service generates after a successful TLS bootstrap; kubelet uses it to access the apiserver.
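Since kubelet.conf only appears after the bootstrap CSR is approved, automation may want to wait for it. A minimal, hypothetical helper (the path and timeout in the usage line are illustrative):

```shell
#!/bin/sh
# wait_for_file <path> <timeout-seconds>: poll once per second until the
# file exists; returns non-zero if the timeout elapses first. Useful for
# confirming that TLS bootstrapping has produced kubelet.conf.
wait_for_file() {
    path=$1; timeout=$2; waited=0
    while [ ! -e "${path}" ]; do
        [ "${waited}" -ge "${timeout}" ] && return 1
        sleep 1
        waited=$((waited + 1))
    done
    return 0
}

# Usage on the node (path from the unit file above):
# wait_for_file /root/k8s-1.14.6/kubelet/kubelet.conf 120 || echo "bootstrap not finished"
```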

2.2 Create the kubelet configuration file config.yaml and /etc/sysconfig/kubelet

$ cat config.yaml 
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /root/k8s-1.14.6/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 172.16.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /root/k8s-1.14.6/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

$ cat /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--hostname-override=192.168.18.160 --cert-dir=/root/k8s-1.14.6/kubelet/ssl --pod-infra-container-image=k8s.gcr.io/pause:3.1 --network-plugin=cni"

2.3 Set docker's cgroup driver to systemd

Docker's cgroup driver must match the cgroupDriver: systemd setting in config.yaml above; a mismatch prevents kubelet from starting.

$ cat /etc/docker/daemon.json 
{
        "exec-opts": ["native.cgroupdriver=systemd"],
        "insecure-registries": ["0.0.0.0/0"],
        "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:20008"],
        "graph": "/data/docker",
        "storage-driver": "overlay2",
        "storage-opts": [
           "overlay2.override_kernel_check=true"
        ],
        "userland-proxy":false
}
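A mismatch between docker's native.cgroupdriver and kubelet's cgroupDriver is a common startup failure, so a consistency check is cheap insurance. A rough sketch, run here against scratch copies of the two files (on the node the real paths would be /etc/docker/daemon.json and the kubelet config.yaml):

```shell
#!/bin/sh
# Compare the cgroup driver configured for docker with the one
# configured for kubelet; they must be identical.
DAEMON_JSON=$(mktemp)
KUBELET_CFG=$(mktemp)
printf '{ "exec-opts": ["native.cgroupdriver=systemd"] }\n' > "${DAEMON_JSON}"
printf 'cgroupDriver: systemd\n' > "${KUBELET_CFG}"

docker_driver=$(grep -o 'native\.cgroupdriver=[a-z]*' "${DAEMON_JSON}" | cut -d= -f2)
kubelet_driver=$(grep '^cgroupDriver:' "${KUBELET_CFG}" | awk '{print $2}')

if [ "${docker_driver}" = "${kubelet_driver}" ]; then
    echo "cgroup drivers match: ${docker_driver}"
else
    echo "MISMATCH: docker=${docker_driver} kubelet=${kubelet_driver}" >&2
    exit 1
fi
rm -f "${DAEMON_JSON}" "${KUBELET_CFG}"
```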

2.4 Install the CNI network plugins

Relevant kubelet flags:

  • --network-plugin=cni enables the CNI network plugin
  • --cni-conf-dir sets the network config directory; the default is /etc/cni/net.d
  • --cni-bin-dir sets the plugin binaries directory; the default is /opt/cni/bin
$ wget https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz
$ mkdir -pv /opt/cni/bin
$ tar xf cni-plugins-amd64-v0.7.5.tgz -C /opt/cni/bin
$ ls -l /opt/cni/bin
bridge  dhcp  flannel  host-device  host-local  ipvlan  loopback  macvlan  portmap  ptp  sample  tuning  vlan
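After unpacking, it is worth verifying that the plugins referenced by the config in the next step are actually present and executable. A small sketch (the helper name is made up for illustration; the commented usage line shows how it would be called on the node):

```shell
#!/bin/sh
# check_cni_plugins <dir> <plugin...>: verify each plugin binary exists
# and is executable in <dir>; reports the first missing one.
check_cni_plugins() {
    dir=$1; shift
    for plugin in "$@"; do
        if [ ! -x "${dir}/${plugin}" ]; then
            echo "missing or not executable: ${dir}/${plugin}" >&2
            return 1
        fi
    done
    echo "all plugins present in ${dir}"
}

# On the node, the conflist below delegates to flannel and portmap:
# check_cni_plugins /opt/cni/bin flannel portmap
```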

2.5 Create the CNI config file

This cluster uses the flannel network plugin; the configuration is as follows:

$ mkdir /etc/cni/net.d/ -pv
$ cat <<EOF > /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
EOF
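A malformed conflist leaves the node NotReady with a CNI config error, so validating the JSON after editing is worthwhile. A sketch assuming python3 is available (run here against a scratch copy; on the node the target would be /etc/cni/net.d/10-flannel.conflist):

```shell
#!/bin/sh
# Validate that a CNI conflist is well-formed JSON before kubelet reads it.
CONFLIST=$(mktemp)
cat > "${CONFLIST}" <<'EOF'
{
  "name": "cbr0",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF

if python3 -m json.tool "${CONFLIST}" > /dev/null 2>&1; then
    echo "conflist JSON ok"
else
    echo "conflist JSON invalid" >&2
    exit 1
fi
rm -f "${CONFLIST}"
```

Note this only checks JSON syntax, not CNI semantics; a typo in a plugin "type" would still pass.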

2.6 Start kubelet.service

$ systemctl daemon-reload
$ systemctl start kubelet.service
