Prerequisites: CentOS 7 installed, with an IP address configured on every machine.
Assume one master node and two worker nodes, with hostnames and IPs already assigned:
k8s-master01  192.168.159.10
k8s-node01    192.168.159.20
k8s-node02    192.168.159.21
System initialization
Unless stated otherwise, run the commands below on every machine.
Set the hostnames and make them resolve to each other via the hosts file
Set the hostname (run the matching command on the matching machine):
$ hostnamectl set-hostname k8s-master01
$ hostnamectl set-hostname k8s-node01
$ hostnamectl set-hostname k8s-node02
Edit the hosts file and copy it to the other nodes:
$ vi /etc/hosts
192.168.159.10 k8s-master01
192.168.159.20 k8s-node01
192.168.159.21 k8s-node02
$ scp /etc/hosts root@k8s-node01:/etc/hosts
$ scp /etc/hosts root@k8s-node02:/etc/hosts
Install dependency packages
$ yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
Switch the firewall to iptables and start with an empty rule set
$ systemctl stop firewalld && systemctl disable firewalld
$ yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Disable swap and SELinux
$ swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
$ setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
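The first sed above comments out the swap entry so swap stays off after a reboot. A self-contained sketch of its effect on a sample fstab (the path and file contents here are made up for illustration; the real command edits /etc/fstab in place):

```shell
# Build a two-line sample fstab: a root filesystem entry and a swap entry.
printf '%s\n' \
  'UUID=1234-abcd / xfs defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > /tmp/fstab.sample

# Same sed as above: prefix '#' to any line containing ' swap '.
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.sample
cat /tmp/fstab.sample
```

The root filesystem line is left untouched; only the swap line gains the leading `#`.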
Tune kernel parameters for Kubernetes
$ cat > kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # use swap only when the system would otherwise OOM
vm.overcommit_memory=1 # do not check whether enough physical memory is available
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer act
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
$ cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
$ sysctl -p /etc/sysctl.d/kubernetes.conf
Set the system time zone
$ timedatectl set-timezone Asia/Shanghai
$ timedatectl set-local-rtc 0
$ systemctl restart rsyslog
$ systemctl restart crond
Stop services that are not needed
$ systemctl stop postfix && systemctl disable postfix
Configure rsyslogd and systemd journald
$ mkdir /var/log/journal
$ mkdir /etc/systemd/journald.conf.d
$ cat > /etc/systemd/journald.conf.d/99-prophet.conf << EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress archived logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# cap total disk usage at 10G
SystemMaxUse=10G
# cap each log file at 200M
SystemMaxFileSize=200M
# keep logs for two weeks
MaxRetentionSec=2week
# do not forward logs to syslog
ForwardToSyslog=no
EOF
$ systemctl restart systemd-journald
Upgrade the kernel to 4.4
(To list all kernels in the boot menu: awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg)
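A quick way to see what that awk one-liner prints, run here against a made-up grub2.cfg sample rather than the real /etc/grub2.cfg:

```shell
# Minimal sample of a grub2.cfg (real entries carry more options; the quoting is what matters).
cat > /tmp/grub2.cfg.sample << 'EOF'
menuentry 'CentOS Linux (4.4.198-1.el7.elrepo.x86_64) 7 (Core)' --class centos {
}
menuentry 'CentOS Linux (3.10.0-1062.el7.x86_64) 7 (Core)' {
}
EOF

# Split on single quotes: a menuentry line has "menuentry " before the first quote
# and the kernel title as the second field. i++ numbers the entries from 0.
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /tmp/grub2.cfg.sample
```

The index in the first column can be passed to grub2-set-default instead of the full title.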
$ rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
$ yum --enablerepo=elrepo-kernel install -y kernel-lt
$ grub2-set-default "CentOS Linux (4.4.198-1.el7.elrepo.x86_64) 7 (Core)"
$ reboot
$ uname -r
4.4.198-1.el7.elrepo.x86_64
Deploying with kubeadm
Prerequisites for running kube-proxy in IPVS mode
$ modprobe br_netfilter
$ cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Install Docker
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ yum install -y docker-ce
$ reboot
$ uname -r
3.10.0-1062.4.1.el7.x86_64
If the machine boots back into the stock 3.10 kernel (the docker-ce install can pull in a kernel package that resets the default boot entry), point GRUB at 4.4 again and reboot:
$ grub2-set-default "CentOS Linux (4.4.198-1.el7.elrepo.x86_64) 7 (Core)" && reboot
$ uname -r
4.4.198-1.el7.elrepo.x86_64
$ systemctl start docker
$ systemctl enable docker
$ cat > /etc/docker/daemon.json << EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"registry-mirrors": ["https://p02s6s7i.mirror.aliyuncs.com"]
}
EOF
$ mkdir -p /etc/systemd/system/docker.service.d
$ systemctl daemon-reload && systemctl restart docker && systemctl enable docker
Install kubeadm (on the master and all worker nodes)
$ cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
$ systemctl enable kubelet.service
Pre-pull images
Since k8s.gcr.io is not reachable from mainland China, pre-load the images needed to build the cluster from mirror repositories.
$ cat > images << EOF
k8s.gcr.io/kube-proxy:v1.15.1=gotok8s/kube-proxy:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1=gotok8s/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1=gotok8s/kube-scheduler:v1.15.1
k8s.gcr.io/kube-apiserver:v1.15.1=gotok8s/kube-apiserver:v1.15.1
k8s.gcr.io/coredns:1.3.1=gotok8s/coredns:1.3.1
k8s.gcr.io/pause:3.1=gotok8s/pause:3.1
k8s.gcr.io/etcd:3.3.10=gotok8s/etcd:3.3.10
quay.io/coreos/flannel:v0.11.0-amd64=jmgao1983/flannel:v0.11.0-amd64
EOF
$ cat > download_images.sh << EOF
#!/bin/bash
file="images"
if [ -f "\$file" ]
then
echo "\$file found."
while IFS='=' read -r key value
do
docker pull \${value}
docker tag \${value} \${key}
docker rmi \${value}
done < "\$file"
else
echo "\$file not found."
fi
EOF
$ chmod a+x download_images.sh
$ ./download_images.sh
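The script relies on `while IFS='=' read -r key value` to split each line of the images file at the `=`: the k8s.gcr.io target name lands in key, the mirror image in value. A standalone sketch of that split on one sample line:

```shell
# Read one sample line the way the loop in download_images.sh does.
IFS='=' read -r key value << 'EOF'
k8s.gcr.io/pause:3.1=gotok8s/pause:3.1
EOF
echo "key:   $key"
echo "value: $value"
```

Since IFS splits on every `=`, this only works because none of the image references themselves contain `=`.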
$ cat > save_images.sh << EOF
#!/bin/bash
mkdir -p ~/kubeadm-images
file="images"
if [ -f "\$file" ]
then
echo "\$file found."
while IFS='=' read -r key value
do
imagename=\${key//\//-}
imagename=\${imagename/:/-}
docker save \${key} > ~/kubeadm-images/\${imagename}.tar
done < "\$file"
else
echo "\$file not found."
fi
EOF
$ chmod a+x save_images.sh
$ ./save_images.sh
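save_images.sh flattens an image reference into a tar file name with bash parameter expansion; note that `${key//\//-}` (double slash) replaces every `/`, which matters for three-part names such as quay.io/coreos/flannel. A standalone sketch:

```shell
# Same mangling as in save_images.sh, applied to one sample image reference.
key='quay.io/coreos/flannel:v0.11.0-amd64'
imagename=${key//\//-}       # replace every '/' with '-'
imagename=${imagename/:/-}   # replace the ':' with '-'
echo "$imagename"            # quay.io-coreos-flannel-v0.11.0-amd64
```

With a single-occurrence replacement the flannel name would keep a `/`, and docker save would fail trying to write into a nonexistent directory.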
$ scp -r kubeadm-images images root@k8s-node01:/root/
$ scp -r kubeadm-images images root@k8s-node02:/root/
On each worker node, load the images back into Docker:
$ cat > load_images.sh << EOF
#!/bin/bash
cd ~/kubeadm-images
for i in \$(ls)
do
docker load -i \$i
done
EOF
$ chmod a+x load_images.sh
$ ./load_images.sh
Initialize the master node
$ kubeadm config print init-defaults > kubeadm-conf.yaml
$ vim kubeadm-conf.yaml
...
$ kubeadm init --config=kubeadm-conf.yaml --experimental-upload-certs | tee kubeadm-init.log
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get node
The kubeadm-conf.yaml, after editing, looks like this:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.159.10
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
Deploy the pod network
$ wget -e robots=off https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl create -f kube-flannel.yml
$ kubectl get pod -n kube-system
$ kubectl get node
Finally, join the worker nodes (run on each worker; the token and CA cert hash come from the output of kubeadm init):
$ kubeadm join 192.168.159.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:21a95fb4ed9595304733b1d709187275fe3dd8aaf7700c3aec692441e4cd5bb7
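The token and hash were captured into kubeadm-init.log by tee during kubeadm init. A sketch of pulling the join command back out of the log; the fragment below is a hand-written stand-in for what kubeadm 1.15 actually prints:

```shell
# Stand-in for the tail of a real kubeadm-init.log.
cat > /tmp/kubeadm-init.log.sample << 'EOF'
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.159.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:21a95fb4ed9595304733b1d709187275fe3dd8aaf7700c3aec692441e4cd5bb7
EOF

# Print the join command plus its continuation line.
grep -A 1 'kubeadm join' /tmp/kubeadm-init.log.sample
```

If the token has already expired (the default ttl is 24h), running `kubeadm token create --print-join-command` on the master prints a fresh, complete join command.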
$ kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   14h   v1.15.1
k8s-node01     Ready    <none>   25s   v1.15.1
k8s-node02     Ready    <none>   16s   v1.15.1
$ kubectl get pod -n kube-system -o wide
Finally, collect the configuration and log files left in the home directory (kubeadm-conf.yaml, kubeadm-init.log, and the helper scripts) so they can be reused later.