Kubernetes (k8s) 1.14 Offline Cluster - Deploying Worker Nodes



1、Notes before setup

a、The following components run on each kubernetes worker node:

docker kubelet kube-proxy flanneld kube-nginx

Unless otherwise stated, operations are performed on the k8s-01 server.

For prerequisites and server setup, see: https://blog.csdn.net/esqabc/article/details/102726771

2、Install dependency packages

Note: execute on all servers.

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# yum install -y epel-release
[root@k8s-01 work]# yum install -y conntrack ipvsadm ntp ntpdate ipset jq iptables curl sysstat libseccomp && modprobe ip_vs
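The modprobe at the end only loads ip_vs for the current boot. As a quick sanity check (an optional, illustrative step, not part of the original procedure), you can confirm the module is actually present:

[root@k8s-01 work]# lsmod | grep -e ip_vs -e nf_conntrack
# If nothing is printed, load the module again:
[root@k8s-01 work]# modprobe ip_vs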

3、Deploy the Docker component

Note: execute on all servers.

a、Create the configuration file

[root@k8s-01 ~]# mkdir -p /etc/docker/
[root@k8s-01 ~]# cat > /etc/docker/daemon.json <<EOF
Add the following content:

{ "exec-opts": ["native.cgroupdriver=systemd"], "registry-mirrors": ["https://hjvrgh7a.mirror.aliyuncs.com"], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "storage-driver": "overlay2" } EOF

Note: to add our Harbor registry, include the following entry as well, where www.esqabc.com is the registry address:

{ "exec-opts": ["native.cgroupdriver=systemd"], "registry-mirrors": ["https://hjvrgh7a.mirror.aliyuncs.com"], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "insecure-registries": ["www.esqabc.com"], "storage-driver": "overlay2" } EOF

b、To install Docker, see this article: https://blog.csdn.net/esqabc/article/details/89881374

c、Modify the Docker startup parameters

[root@k8s-01 ~]# vi /usr/lib/systemd/system/docker.service
Add the following content:

EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

Or replace the file entirely; the full configuration is as follows:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
EnvironmentFile=-/run/flannel/docker
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

d、Restart Docker

[root@k8s-01 work]# systemctl daemon-reload && systemctl enable docker && systemctl restart docker
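To confirm the daemon.json settings (cgroup driver, storage driver and, if configured, the insecure registry) took effect after the restart, an optional check such as the following can be used; the grep pattern is only illustrative:

[root@k8s-01 work]# docker info | grep -iE 'cgroup driver|storage driver|registr'
# Expected to show: Cgroup Driver: systemd, Storage Driver: overlay2, and the configured registries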

e、Check the service running status

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status docker|grep Active"
  done

f、Check the docker0 bridge

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
  done


4、Deploy the kubelet component

kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands. On startup, kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage. For security, this deployment disables kubelet's insecure HTTP port and authenticates and authorizes requests, rejecting unauthorized access.

a、Create the kubelet bootstrap kubeconfig files

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    # Create token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)
    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    # Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done

b、View the tokens created by kubeadm for each node

[root@k8s-01 ~]# kubeadm token list --kubeconfig ~/.kube/config

c、View the Secret associated with each token

[root@k8s-01 ~]# kubectl get secrets -n kube-system|grep bootstrap-token

d、Distribute the bootstrap kubeconfig files to all worker nodes

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done

e、Create and distribute the kubelet parameter configuration

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# cat > kubelet-config.yaml.template <<EOF

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: systemd
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

Notes:

address: the address the kubelet secure port (https, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster and other components cannot call the kubelet API.
readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unspecified.
authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed.
authentication.x509.clientCAFile: the CA certificate that signs client certificates, enabling HTTPS client certificate authentication.
authentication.webhook.enabled=true: enables HTTPS bearer token authentication; requests (from kube-apiserver or other clients) that pass neither x509 certificate nor webhook authentication are rejected with Unauthorized.
authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user or group has permission to operate on a resource (RBAC).
featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; certificate validity depends on the kube-controller-manager --experimental-cluster-signing-duration parameter.

Note: kubelet must run as the root account.
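Once kubelet has been started (step h below), these authentication settings can be spot-checked. A request to the secure port without a client certificate or bearer token should be rejected (an illustrative check; ${NODE_IPS[0]} stands for any node's IP):

[root@k8s-01 work]# curl -sk https://${NODE_IPS[0]}:10250/metrics
# Expected output: Unauthorized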

f、Create and distribute the kubelet configuration file for each node

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
    scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
  done

g、Create and distribute the kubelet startup (systemd unit) file

(1) Create

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > kubelet.service.template <<EOF
Add the following content:

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=gcr.azk8s.cn/google_containers/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF

Notes:

If the --hostname-override option is set, kube-proxy must set it as well, otherwise the Node may not be found.
bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the username and token in this file to send a TLS Bootstrapping request to kube-apiserver.
After K8S approves the kubelet's CSR, the certificate and private key are created in the --cert-dir directory and then written into the --kubeconfig file.
pod-infra-container-image does not use Red Hat's pod-infrastructure:latest image, because that image cannot reap zombie processes of containers.
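Before distributing, it can be useful to render the template for one node and confirm the per-node flags are substituted as expected (an optional spot-check, not part of the original procedure):

[root@k8s-01 work]# sed "s/##NODE_NAME##/${NODE_NAMES[0]}/" kubelet.service.template | grep -E 'hostname-override|bootstrap-kubeconfig'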

(2) Distribute

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
    scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
  done

Note: create the CSR permission for the bootstrap user and group; if it is not created, kubelet will fail to start.

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

h、Start the kubelet service

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done

i、Check the status

[root@k8s-01 ~]# kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-22kt2   38s   system:bootstrap:pkkcl0   Pending
csr-f9trc   37s   system:bootstrap:tubfqq   Pending
csr-v7jt2   38s   system:bootstrap:ds9td8   Pending
csr-zrww2   37s   system:bootstrap:hy5ssz   Pending

Here the CSRs of all 4 nodes are in Pending (waiting) state.

j、Automatically approve CSR requests: create three ClusterRoleBindings, used respectively to auto-approve client certificates, renew client certificates, and renew server certificates

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > csr-crb.yaml <<EOF
Add the following content:

# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

[root@k8s-01 ~]# kubectl apply -f csr-crb.yaml

Notes:

auto-approve-csrs-for-group: automatically approves a node's first CSR; note that for the first CSR the requesting Group is system:bootstrappers.
node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the automatically generated certificates belong to the Group system:nodes.
node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the automatically generated certificates also belong to the Group system:nodes.
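An optional check that the bindings exist and reference the intended ClusterRoles:

[root@k8s-01 ~]# kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
[root@k8s-01 ~]# kubectl get clusterrole approve-node-server-renewal-csr -o yaml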

k、Check kubelet: wait 1-10 minutes and the nodes' CSRs will all be automatically approved

[root@k8s-01 ~]# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-22kt2   4m48s   system:bootstrap:pkkcl0   Approved,Issued
csr-d8tvc   77s     system:node:k8s-01        Pending
csr-f9trc   4m47s   system:bootstrap:tubfqq   Approved,Issued
csr-kcdvx   76s     system:node:k8s-02        Pending
csr-m8k8t   75s     system:node:k8s-04        Pending
csr-v7jt2   4m48s   system:bootstrap:ds9td8   Approved,Issued
csr-wwvwd   76s     system:node:k8s-03        Pending
csr-zrww2   4m47s   system:bootstrap:hy5ssz   Approved,Issued

All nodes are now in Ready state:

[root@k8s-01 ~]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
k8s-01   Ready    <none>   2m29s   v1.14.2
k8s-02   Ready    <none>   2m28s   v1.14.2
k8s-03   Ready    <none>   2m28s   v1.14.2
k8s-04   Ready    <none>   2m27s   v1.14.2

kube-controller-manager has generated a kubeconfig file and certificate/key material for each node:

[root@k8s-01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
[root@k8s-01 ~]# ls -l /etc/kubernetes/cert/|grep kubelet

l、Manually approve the server cert CSRs

[root@k8s-01 ~]# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

m、View the kubelet API ports

[root@k8s-01 ~]# netstat -lntup|grep kubelet

Notes:

10248: the healthz http service.
10250: the https service; requests to this port require authentication and authorization (even requests to /healthz).
The read-only port 10255 is not enabled.
Since K8S v1.10 the --cadvisor-port parameter (default port 4194) has been removed; accessing the cAdvisor UI & API is no longer supported.
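The healthz port is plain HTTP and requires no credentials, so it can be used as a quick liveness probe (an illustrative check; ${NODE_IPS[0]} is the address rendered into kubelet-config.yaml):

[root@k8s-01 ~]# curl http://${NODE_IPS[0]}:10248/healthz
# Expected output: ok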

n、Bearer token authentication and authorization

kubectl create sa kubelet-api-test
kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
echo ${TOKEN}
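With the token in hand, the kubelet API on port 10250 can be called directly. A sketch against the first node, assuming the server certificate CSRs were approved in step l (any authorized endpoint such as /metrics works):

curl -s --cacert /etc/kubernetes/cert/ca.pem \
  -H "Authorization: Bearer ${TOKEN}" \
  https://${NODE_IPS[0]}:10250/metrics | head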

5、Deploy the kube-proxy component

a、Create the kube-proxy certificate signing request

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > kube-proxy-csr.json <<EOF
Add the following content:

{ "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "4Paradigm" } ] } EOF

Notes:

CN: sets the certificate's User to system:kube-proxy; the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the kube-apiserver Proxy-related APIs.
This certificate is only used by kube-proxy as a client certificate, so the hosts field is empty.

b、Generate the certificate and private key:

[root@k8s-01 ~]# cd /opt/k8s/work

cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
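An optional check that the generated certificate carries the expected identity (CN system:kube-proxy, O k8s):

[root@k8s-01 work]# openssl x509 -noout -subject -in kube-proxy.pem
# Expected subject roughly: /C=CN/ST=BeiJing/L=BeiJing/O=k8s/OU=4Paradigm/CN=system:kube-proxy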

[root@k8s-01 ~]# ls kube-proxy*

c、Create and distribute the kubeconfig file

(1) Create

[root@k8s-01 ~]# cd /opt/k8s/work

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

(2) Distribute

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
  done

d、Create the kube-proxy configuration file

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > kube-proxy-config.yaml.template <<EOF
Add the following content:

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF

Notes:

bindAddress: the listening address.
clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver.
clusterCIDR: kube-proxy uses --cluster-cidr to distinguish cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is specified will kube-proxy SNAT requests to Service IPs.
hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find the Node after it starts and will not create any ipvs rules.
mode: use ipvs mode.

e、Distribute the kube-proxy configuration file

[root@k8s-01 ~]# cd /opt/k8s/work

for (( i=0; i < 4; i++ ))
  do
    echo ">>> ${NODE_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
    scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
  done

f、Create and distribute the kube-proxy systemd unit file

(1) Create

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > kube-proxy.service <<EOF
Add the following content:

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

(2) Distribute

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kube-proxy.service root@${node_name}:/etc/systemd/system/
  done

g、Start the kube-proxy service

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_ip} "modprobe ip_vs_rr"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done

h、Check the startup result

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
  done

i、Check the listening ports

[root@k8s-01 ~]# cd /opt/k8s/work

netstat -lnpt|grep kube-proxy
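Ports 10249 and 10256 correspond to metricsBindAddress and healthzBindAddress in kube-proxy-config.yaml; as an extra, optional check they can be probed directly (${NODE_IPS[0]} stands for any node's IP):

curl -s http://${NODE_IPS[0]}:10249/metrics | head
curl -s http://${NODE_IPS[0]}:10256/healthz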

j、View the ipvs routing rules

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
  done

k、Verify cluster functionality

Now use a DaemonSet to verify that the master and worker nodes are working properly.

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
k8s-01   Ready    <none>   20m   v1.14.2
k8s-02   Ready    <none>   20m   v1.14.2
k8s-03   Ready    <none>   20m   v1.14.2
k8s-04   Ready    <none>   20m   v1.14.2

Create the test yaml file

[root@k8s-01 work]# cat > nginx-ds.yml <<EOF
Add the following content:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: daocloud.io/library/nginx:1.13.0-alpine
        ports:
        - containerPort: 80
EOF

[root@k8s-01 ~]# kubectl create -f nginx-ds.yml

l、Check Pod startup status

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# kubectl get pod -o wide

m、Check Pod IP connectivity across nodes

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# ping -c 3 172.30.48.2

n、Check Service IP and port reachability

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# kubectl get svc |grep nginx-ds
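Because the Service is of type NodePort, the nginx pods should also be reachable through any node IP on the allocated port. A sketch that reads the port from the Service and probes it (an optional verification step):

[root@k8s-01 work]# NODE_PORT=$(kubectl get svc nginx-ds -o jsonpath='{.spec.ports[0].nodePort}')
[root@k8s-01 work]# curl -s http://${NODE_IPS[0]}:${NODE_PORT} | head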
