Notes - Kubernetes beginner setup (1.16.0, helloweb)


Table of contents

Environment
k8s components
All nodes: install docker and kubelet
All nodes: install the k8s docker images
Master node: initialization
Worker node k8s1: initialization
Experiment - k8s1 node error - missing images
Build the golang "helloweb" image
Deploy helloweb
Additional notes
Alternatives to hub.docker.com
About VirtualBox VM network adapter settings

Environment

VirtualBox VMs (CentOS 7.4 amd64):

k8s0 / master node: 192.168.199.200
k8s1 / worker node: 192.168.199.201

Note: this mainly follows https://kuboard.cn/install/history-k8s/install-k8s-1.16.0.html ("Install Kubernetes v1.16.0 with kubeadm"), but the k8s.gcr.io/kube* docker images are obtained offline and imported instead of pulled directly, so the installation should also succeed inside mainland China.

Note: Kubernetes may hang if a VM has too little memory; give the master at least 1.5 GB.

k8s components

See https://blog.csdn.net/weixin_39686421/article/details/80333015 for reference.

etcd: distributed storage
kube-apiserver: provides the API, e.g. querying pods
kube-controller-manager: the control center inside the master
kube-scheduler: k8s is asynchronous; when a deployment fails it keeps retrying
kubelet: manages the pods on a single node
kube-proxy: routes external traffic to internal pods
pod: contains one (or more) containers; multiple container runtimes are supported, such as docker, rkt, etc.

All nodes: install docker and kubelet

# Set a different hostname on each node: k8s0, k8s1, ...
hostnamectl set-hostname xx
echo "127.0.0.1 $(hostname)" >> /etc/hosts

yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine

yum install yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
systemctl enable docker
# Start docker
systemctl start docker

yum install nfs-utils

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab

vi /etc/sysctl.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
sysctl -p

# yum repo for the Kubernetes packages
vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

yum remove kubelet kubeadm kubectl
# Install the Kubernetes packages
yum install kubelet-1.16.0 kubeadm-1.16.0 kubectl-1.16.0

# Change the docker Cgroup Driver to systemd

sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

# Configure a docker registry mirror
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

systemctl daemon-reload
systemctl restart docker
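To confirm the cgroup driver change took effect, a quick check can be run (not in the original notes; the exact wording of the docker info output may differ slightly between docker versions):

# Verify the cgroup driver after restarting docker
docker info | grep -i "cgroup driver"
# Expected output: Cgroup Driver: systemd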

# Start kubelet. Note that systemctl status kubelet shows an error state at this point -
# this is normal and it will recover automatically later
systemctl enable kubelet && systemctl start kubelet

All nodes: install the k8s docker images

The "registry.cn-hangzhou.aliyuncs.com/google_containers" repository used in the kuboard.cn article no longer seems to work. Instead, docker pull the required images in a region where access is allowed, export them with docker save -o, and then docker load -i them into the test VMs (for example on an Aliyun pay-as-you-go, entry-level shared ECS instance, which costs only a few cents per hour and stops billing when the instance is shut down).
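A minimal sketch of that export/import workflow, assuming a machine with access to k8s.gcr.io; the image list matches the one shown below, and the archive file name is just an example:

# On the machine with registry access: pull the images needed by kubeadm 1.16.0
docker pull k8s.gcr.io/kube-apiserver:v1.16.0
docker pull k8s.gcr.io/kube-controller-manager:v1.16.0
docker pull k8s.gcr.io/kube-scheduler:v1.16.0
docker pull k8s.gcr.io/kube-proxy:v1.16.0
docker pull k8s.gcr.io/etcd:3.3.15-0
docker pull k8s.gcr.io/coredns:1.6.2
docker pull k8s.gcr.io/pause:3.1
# Save them into one archive (k8s-v1.16.0-images.tar is a hypothetical file name)
docker save -o k8s-v1.16.0-images.tar \
    k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/kube-controller-manager:v1.16.0 \
    k8s.gcr.io/kube-scheduler:v1.16.0 k8s.gcr.io/kube-proxy:v1.16.0 \
    k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 k8s.gcr.io/pause:3.1
# Copy the archive to each VM (scp, shared folder, etc.), then import it there with docker load -i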

# Import the image file(s)
docker load -i xxx

docker images
# For example:

REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-apiserver            v1.16.0    b305571ca60a   6 weeks ago     217MB
k8s.gcr.io/kube-proxy                v1.16.0    c21b0c7400f9   6 weeks ago     86.1MB
k8s.gcr.io/kube-controller-manager   v1.16.0    06a629a7e51c   6 weeks ago     163MB
k8s.gcr.io/kube-scheduler            v1.16.0    301ddc62b80b   6 weeks ago     87.3MB
k8s.gcr.io/etcd                      3.3.15-0   b2756210eeab   8 weeks ago     247MB
k8s.gcr.io/coredns                   1.6.2      bf261d157914   2 months ago    44.1MB
k8s.gcr.io/pause                     3.1        da86e6ba6ca1   22 months ago   742kB

Master node: initialization

vi /etc/hosts
    192.168.199.200 apiserver.demo

# Create a test directory
mkdir -p /test/k8s
cd /test/k8s

rm -f ./kubeadm-config.yaml
vi kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
# imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
imageRepository: k8s.gcr.io
controlPlaneEndpoint: "apiserver.demo:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "10.100.0.1/16"
  dnsDomain: "cluster.local"

Note that imageRepository above is changed back to "k8s.gcr.io".
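To double-check which images kubeadm will look for with this configuration before running init (an optional aside, not in the original notes), kubeadm can print the expected image list, which should match the k8s.gcr.io images imported earlier:

# List the images kubeadm expects for this config
kubeadm config images list --config kubeadm-config.yaml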

# Initialize kubernetes; this generates many ssl certificates. Takes about 3-10 minutes
# Note on the k8s docker images: if they were not imported manually above, this command pulls them first
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# calico network plugin

rm -f calico.yaml
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
sed -i "s#192.168.0.0/16#10.100.0.1/16#" calico.yaml
kubectl apply -f calico.yaml

# Check the result of the master initialization
# Wait 3-10 minutes until all pods are in the Running state
watch kubectl get pod -n kube-system -o wide

# For example (captured later, after the k8s1 node had also joined):

NAME                                      READY   STATUS    RESTARTS   AGE    IP                NODE   NOMINATED NODE   READINESS GATES
calico-kube-controllers-55754f75c-7fc4t   1/1     Running   0          102m   10.100.150.67     k8s0   <none>           <none>
calico-node-6hcfb                         1/1     Running   0          82m    192.168.199.201   k8s1   <none>           <none>
calico-node-6m8xv                         1/1     Running   0          102m   192.168.199.200   k8s0   <none>           <none>
coredns-5644d7b6d9-qqlpv                  1/1     Running   0          105m   10.100.150.66     k8s0   <none>           <none>
coredns-5644d7b6d9-wzcfb                  1/1     Running   0          105m   10.100.150.65     k8s0   <none>           <none>
etcd-k8s0                                 1/1     Running   0          105m   192.168.199.200   k8s0   <none>           <none>
kube-apiserver-k8s0                       1/1     Running   0          105m   192.168.199.200   k8s0   <none>           <none>
kube-controller-manager-k8s0              1/1     Running   0          105m   192.168.199.200   k8s0   <none>           <none>
kube-proxy-httmr                          1/1     Running   0          82m    192.168.199.201   k8s1   <none>           <none>
kube-proxy-wcqcm                          1/1     Running   0          105m   192.168.199.200   k8s0   <none>           <none>
kube-scheduler-k8s0                       1/1     Running   0          105m   192.168.199.200   k8s0   <none>           <none>

# List the nodes:
kubectl get nodes

# For example (note: if the status is not "Ready", initialization did not succeed):

NAME   STATUS   ROLES    AGE    VERSION
k8s0   Ready    master   108m   v1.16.0

kubeadm token create --print-join-command

# For example:

kubeadm join apiserver.demo:6443 --token 16ce5c.uv9h0jen2ycpzwza --discovery-token-ca-cert-hash sha256:e4132a7d076faf309d3470e6b5b3fd7569b5f2b4d400667e2759a8e3578f7e44

Worker node k8s1: initialization

vi /etc/hosts
    192.168.199.200 apiserver.demo

# Replace with the output of the kubeadm token create command on the master node
kubeadm join apiserver.demo:6443 --token 16ce5c.uv9h0jen2ycpzwza --discovery-token-ca-cert-hash sha256:e4132a7d076faf309d3470e6b5b3fd7569b5f2b4d400667e2759a8e3578f7e44
# After a while the command succeeds and prints: ... This node has joined the cluster ...

# On the master node, run:
kubectl get nodes

# The worker node should now appear, for example:

NAME   STATUS   ROLES    AGE    VERSION
k8s0   Ready    master   143m   v1.16.0
k8s1   Ready    <none>   119m   v1.16.0

Experiment - k8s1 node error - missing images

## Remove the k8s1 node
# On the k8s1 node:
kubeadm reset
# On the master node (the node name "k8s1" comes from kubectl get nodes above):
kubectl delete node k8s1

## Delete the k8s docker images on k8s1
# On the k8s1 node:
docker images
# For example:

REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
calico/node                          v3.8.4     83b416d24205   2 weeks ago     191MB
calico/pod2daemon-flexvol            v3.8.4     207f157c99ac   2 weeks ago     9.37MB
calico/cni                           v3.8.4     20d7eefd5ce2   2 weeks ago     157MB
k8s.gcr.io/kube-apiserver            v1.16.0    b305571ca60a   6 weeks ago     217MB
k8s.gcr.io/kube-proxy                v1.16.0    c21b0c7400f9   6 weeks ago     86.1MB
k8s.gcr.io/kube-controller-manager   v1.16.0    06a629a7e51c   6 weeks ago     163MB
k8s.gcr.io/kube-scheduler            v1.16.0    301ddc62b80b   6 weeks ago     87.3MB
k8s.gcr.io/etcd                      3.3.15-0   b2756210eeab   8 weeks ago     247MB
k8s.gcr.io/coredns                   1.6.2      bf261d157914   2 months ago    44.1MB
k8s.gcr.io/pause                     3.1        da86e6ba6ca1   22 months ago   742kB

# Delete them

docker rmi -f $(docker images | grep "k8s.gcr.io" | awk '{print $3}')

docker images
# For example:

REPOSITORY                  TAG      IMAGE ID       CREATED       SIZE
calico/node                 v3.8.4   83b416d24205   2 weeks ago   191MB
calico/pod2daemon-flexvol   v3.8.4   207f157c99ac   2 weeks ago   9.37MB
calico/cni                  v3.8.4   20d7eefd5ce2   2 weeks ago   157MB

# Join again
kubeadm join apiserver.demo:6443 --token 16ce5c.uv9h0jen2ycpzwza --discovery-token-ca-cert-hash sha256:e4132a7d076faf309d3470e6b5b3fd7569b5f2b4d400667e2759a8e3578f7e44

# On the master node:
watch kubectl get node
# For example:

NAME   STATUS     ROLES    AGE     VERSION
k8s0   Ready      master   177m    v1.16.0
k8s1   NotReady   <none>   3m53s   v1.16.0

# k8s1 stays in the "NotReady" state, which means the join did not succeed.

# On the k8s1 node - re-import the images that were just deleted:
docker load -i xxx

# A while later, on the master node - for example:

NAME   STATUS   ROLES    AGE     VERSION
k8s0   Ready    master   3h      v1.16.0
k8s1   Ready    <none>   7m19s   v1.16.0

# The k8s1 node is Ready again

Build the golang "helloweb" image

See https://studygolang.com/articles/13847 for reference.

hello_web.go

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.Handle("/", http.HandlerFunc(helloWeb))
    if err := http.ListenAndServe(":8080", nil); err != nil {
        fmt.Println("Server error:", err)
    }
}

func helloWeb(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello, web!\n")
}

Compile it into the hello_web executable (7.2 MB).
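The build command itself is not given in the original notes; a minimal sketch, assuming Go is installed on the build machine and a linux/amd64 target. Building with CGO_ENABLED=0 produces a statically linked binary, which should not even need the /lib64 symlink workaround in the Dockerfile below:

# Cross-compile a static linux/amd64 binary (flags are an assumption, not from the original)
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o hello_web hello_web.go

# Quick local test before containerizing (on a linux machine):
# ./hello_web &
# curl http://localhost:8080    # should print: Hello, web!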

# The following is done on the master node:

mkdir -p /test/k8s/docker
cd /test/k8s/docker
vi Dockerfile

FROM alpine
WORKDIR /hello
ADD . /hello
RUN mkdir /lib64 && ln -s /lib/libc.musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2
EXPOSE 8080
ENTRYPOINT ["./hello_web"]

# Copy hello_web into this directory (it now contains 2 files: Dockerfile and hello_web)

# Build the image (note: replace "YYY" with your hub.docker.com account name)
chmod +x ./hello_web
docker build -t YYY/helloweb .

## In testing: docker login usually times out from inside mainland China, but docker pull works.
## When kubernetes deploys this image it apparently must pull (or verify?) it from the registry, even if it already exists locally.
## So one approach is to export the image; then, in a region with access: import it, docker login, and docker push.
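A hedged sketch of that export-then-push route; the remote host name and archive file name here are placeholders:

# On the build machine (no registry access):
docker save -o helloweb.tar YYY/helloweb
scp helloweb.tar user@remote-host:/tmp/    # remote-host: a placeholder for a machine that can reach hub.docker.com

# On the machine with access:
docker load -i /tmp/helloweb.tar
docker login
docker push YYY/helloweb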

Note added 2019-11: k8s's imagePullPolicy has three values: Always, IfNotPresent, Never. The default is IfNotPresent (do not pull if the image exists locally). The reason the image was still pulled above even though it existed locally is probably that the image tag has no version (for a :latest or untagged image the effective default policy becomes Always). In testing, a tag of the form "YYY/helloweb:v0.1" is not pulled again.
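A short sketch of that versioned-tag variant (v0.1 as in the note above); with a non-latest tag the effective default pull policy is IfNotPresent, so an image already loaded on the node is used without contacting the registry:

# Build and push with an explicit version tag
docker build -t YYY/helloweb:v0.1 .
docker push YYY/helloweb:v0.1

# Deploy referencing the versioned tag
kubectl create deploy helloweb --image=YYY/helloweb:v0.1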

# Upload the image as described above

Deploy helloweb

See https://kubernetes.io/docs/tutorials/hello-minikube/ for reference.

# The following is done on the master node:

alias kc=kubectl

kc create deploy helloweb --image=YYY/helloweb
kc get deploy
# For example:

NAME       READY   UP-TO-DATE   AVAILABLE   AGE
helloweb   1/1     1            1           33s

# If READY shows "0/1" the deployment did not succeed; a common cause is an image pull failure -
# use kc logs deploy/helloweb to check the error message

# List the pods created by the deploy
kc get pod
# For example:

NAME                        READY   STATUS    RESTARTS   AGE
helloweb-5dbf4fffdb-cpq6r   1/1     Running   0          4m13s

# (Likewise, if a pod has problems you can run kc logs pod/helloweb-5dbf4fffdb-cpq6r)

# Describe the pod
kc describe pod/helloweb-5dbf4fffdb-cpq6r
# For example:

Name:         helloweb-5dbf4fffdb-cpq6r
Namespace:    default
Priority:     0
Node:         k8s1/192.168.199.201
......

# We can see that the pod is running on the k8s1 node

# Open a shell in the pod's ("helloweb") container
kc exec helloweb-5dbf4fffdb-cpq6r -c helloweb -it /bin/sh

/hello # ls
Dockerfile  hello_web
/hello # wget localhost:8080
Connecting to localhost:8080 (127.0.0.1:8080)
index.html           100% |********************************************************************************************|    12  0:00:00 ETA
/hello # ls
Dockerfile  hello_web  index.html
/hello # cat index.html
Hello, web!
/hello # rm index.html
/hello # exit

# This shows that the hello_web program is reachable.

# Expose the deploy as a service
kc expose deploy helloweb --type=NodePort --port=8080
kc get service
# For example:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
helloweb     NodePort    10.96.152.205   <none>        8080:31580/TCP   6s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          3h55m
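With a NodePort service, the mapped port (31580 in the output above) should also be reachable on each node's IP from outside the cluster; a quick check is sketched below, assuming the port from this output and the node IPs listed in the Environment section (not part of the original test):

# NodePort 31580 taken from the kc get service output above
curl http://192.168.199.200:31580
curl http://192.168.199.201:31580
# Both should return: Hello, web!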

# Forward the pod's port 8080 to port 8080 on the host

# kc port-forward pod/helloweb-5dbf4fffdb-cpq6r 8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

# Open another shell on the host

[root@k8s0 ~]# curl localhost:8080
Hello, web!

# helloweb is reachable on port 8080 from the host (note: at this point the master's tunnel IP can be pinged from the k8s1 machine, but curl to it fails)

# Delete the service
kc delete service/helloweb

# Delete the deploy
kc delete deploy/helloweb

Additional notes

Alternatives to hub.docker.com

https://www.cnblogs.com/kcxg/p/11457209.html - Docker / Kubernetes image registries

https://www.cnblogs.com/legenidongma/p/10721021.html - Kubernetes local registry

About VirtualBox VM network adapter settings

In testing, if the IP that "apiserver.demo" maps to in /etc/hosts changes, the k8s cluster seems unable to start.

In the test environment: enp0s3 => nic1 (office), enp0s8 => nic2, enp0s9 => nic3 (home).

When moving from the office back home, swap enp0s3 and enp0s9 so that the originally mapped IP is kept (i.e. reuse the old enp0s9 settings as the new enp0s3). enp0s3 must correspond to the Wi-Fi actually in use, otherwise the VM cannot reach the public internet.
