k8s 1.14.6 Cluster Setup: kube-proxy Deployment
Part 7 of the k8s 1.14.6 cluster deployment series.
The kube-proxy component can be deployed either as a DaemonSet or as a static Pod.
The DaemonSet approach guarantees that a kube-proxy Pod runs on every node that joins the cluster.
A static Pod runs on a single node and is managed directly by that node's kubelet, so it cannot be managed through the apiserver.
1. DaemonSet Approach#
On the master node, apply the kube-proxy manifest (a ConfigMap, ClusterRoleBinding, ServiceAccount, and DaemonSet):
$ cd /root/k8s-1.14.6/manifests
$ kubectl apply -f kube-proxy.yaml
$ cat kube-proxy.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpIdleTimeout: 250ms
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://192.168.18.142:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-proxy:node-proxier
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-proxy
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: k8s.gcr.io/kube-proxy:v1.14.6
        name: kube-proxy
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      hostNetwork: true
      priorityClassName: system-node-critical
      serviceAccount: kube-proxy
      serviceAccountName: kube-proxy
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
        name: lib-modules
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
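After applying the manifest, a quick check (assuming kubectl is already pointed at this cluster) confirms that the DaemonSet has scheduled a kube-proxy Pod onto every node:
$ kubectl -n kube-system get daemonset kube-proxy
$ kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide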
2. Static Pod Approach#
2.1 Issue the kube-proxy client certificate on the master node
$ cd /root/k8s-1.14.6/ssl
$ cat <<EOF > apiserver-kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "",
      "L": "",
      "O": "",
      "OU": "",
      "ST": ""
    }
  ]
}
EOF
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes apiserver-kube-proxy-csr.json | cfssljson -bare apiserver-kube-proxy
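Before using the certificate, it can be worth confirming its subject; with openssl (assuming it is installed on the master), the output should contain CN=system:kube-proxy:
$ openssl x509 -noout -subject -in apiserver-kube-proxy.pem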
The CN sets the certificate's User to system:kube-proxy. kube-apiserver ships with a predefined ClusterRoleBinding system:node-proxier that binds the User system:kube-proxy to the ClusterRole system:node-proxier, which grants the permissions needed to call the proxy-related kube-apiserver APIs:
$ kubectl get clusterrolebinding system:node-proxier -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2019-07-09T05:58:48Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:node-proxier
  resourceVersion: "98"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Anode-proxier
  uid: e3e61976-d2c6-11e9-9f22-fa163e67ff45
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-proxy
2.2 Generate the kubeconfig kube-proxy uses to access the apiserver
$ cat create-kube-proxy-kubeconfig.sh
KUBE_APISERVER="https://192.168.18.142:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/root/k8s-1.14.6/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials default \
--client-certificate=/root/k8s-1.14.6/ssl/apiserver-kube-proxy.pem \
--client-key=/root/k8s-1.14.6/ssl/apiserver-kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=default \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
$ sh create-kube-proxy-kubeconfig.sh
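As a sanity check (run on the master, with the apiserver reachable), kubectl auth can-i should answer yes for the endpoints access that system:node-proxier grants:
$ kubectl --kubeconfig=kube-proxy.kubeconfig auth can-i list endpoints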
Distribute the generated kube-proxy.kubeconfig to the /root/k8s-1.14.6/conf/kube-proxy directory on the node 192.168.18.160.
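One way to do that, assuming root SSH access to the node:
$ scp kube-proxy.kubeconfig root@192.168.18.160:/root/k8s-1.14.6/conf/kube-proxy/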
2.3 Create the kube-proxy configuration file config.conf on the node
$ cd /root/k8s-1.14.6/conf/kube-proxy
$ ls
config.conf kube-proxy.kubeconfig
$ cat config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: ""
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
winkernel:
  enableDSR: false
  networkName: ""
  sourceVip: ""
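mode: "" leaves the proxy mode to auto-detection, which falls back to iptables when the IPVS kernel modules are missing (exactly what the logs in section 2.5 show). To run in IPVS mode instead, a rough sketch is to preload the modules and switch the mode field (module names assume a pre-4.19 kernel; newer kernels use nf_conntrack instead of nf_conntrack_ipv4):
$ for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
$ sed -i 's/mode: ""/mode: "ipvs"/' config.conf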
2.4 Create the static Pod manifest kube-proxy-pod.yaml on the worker node
$ cd /root/k8s-1.14.6/manifests
$ cat kube-proxy-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    tier: node
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
spec:
  containers:
  - command:
    - /usr/local/bin/kube-proxy
    - --config=/var/lib/kube-proxy/config.conf
    - --hostname-override=$(NODE_NAME)
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    image: k8s.gcr.io/kube-proxy:v1.14.6
    name: kube-proxy
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /var/lib/kube-proxy
      name: kube-proxy
    - mountPath: /run/xtables.lock
      name: xtables-lock
    - mountPath: /lib/modules
      name: lib-modules
      readOnly: true
  hostNetwork: true
  restartPolicy: Always
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /root/k8s-1.14.6/conf/kube-proxy
    name: kube-proxy
  - hostPath:
      path: /run/xtables.lock
      type: FileOrCreate
    name: xtables-lock
  - hostPath:
      path: /lib/modules
    name: lib-modules
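The kubelet only picks up this manifest if its static Pod path points at this directory. This series assumes the node's kubelet was configured accordingly, e.g. with the staticPodPath field of its KubeletConfiguration (or the equivalent --pod-manifest-path flag):
# illustrative KubeletConfiguration snippet; match your actual node setup
staticPodPath: /root/k8s-1.14.6/manifests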
2.5 Verify that kube-proxy started correctly
$ docker ps
88f7d447327e ed8adf767eeb "/usr/local/bin/kube…" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-192.168.18.160_kube-system_6a4b4c4a6b6e6cb8c0c89cb8edd2d00c_1
2f0be7471cb1 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-192.168.18.160_kube-system_6a4b4c4a6b6e6cb8c0c89cb8edd2d00c_1
$ docker logs 88f7d447327e
W1021 06:17:23.534438 1 proxier.go:498] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1021 06:17:23.535741 1 proxier.go:498] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1021 06:17:23.536945 1 proxier.go:498] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1021 06:17:23.538102 1 proxier.go:498] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1021 06:17:23.543687 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
I1021 06:17:23.553870 1 server_others.go:146] Using iptables Proxier.
I1021 06:17:23.554171 1 server.go:562] Version: v1.14.6
I1021 06:17:23.565879 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1021 06:17:23.566114 1 config.go:202] Starting service config controller
I1021 06:17:23.566145 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1021 06:17:23.566146 1 config.go:102] Starting endpoints config controller
I1021 06:17:23.566178 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1021 06:17:23.666289 1 controller_utils.go:1034] Caches are synced for service config controller
I1021 06:17:23.666289 1 controller_utils.go:1034] Caches are synced for endpoints config controller
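Besides the container logs, kube-proxy's own endpoints give a quick health check. Per the config above, healthz listens on 0.0.0.0:10256 and the metrics server on 127.0.0.1:10249; the /proxyMode path should report the active mode (iptables here):
$ curl http://127.0.0.1:10256/healthz
$ curl http://127.0.0.1:10249/proxyMode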