K8s 1.25.6 deployment guide: building a Kubernetes cluster with kubeadm + cri-dockerd
Author: 爱写代码的小白 | Published: | Updated:
This is my first submission to 猪哥's site; I hope this K8s deployment guide proves useful.
Corrections and suggestions from more experienced readers are very welcome.
| Hostname | IP | CPU | Memory |
|---|---|---|---|
| master.example.com | 172.25.80.10 | 4 cores | 8 GB |
| node1.example.com | 172.25.80.11 | 4 cores | 8 GB |
| node2.example.com | 172.25.80.12 | 4 cores | 8 GB |
Environment setup
(run on all nodes unless a step says otherwise)
Set the hostname
Run the matching command on its own machine:
hostnamectl set-hostname master.example.com && bash
hostnamectl set-hostname node1.example.com && bash
hostnamectl set-hostname node2.example.com && bash
Configure hosts-file resolution
Append the following to /etc/hosts on every node:
172.25.80.10 master master.example.com
172.25.80.11 node1 node1.example.com
172.25.80.12 node2 node2.example.com
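With the entries in place, a quick loop from any node confirms that all three short names resolve and answer (a minimal check using the hostnames above):

```shell
# ping each short name once; relies on the /etc/hosts entries above
for h in master node1 node2; do
  ping -c1 -W1 "$h" >/dev/null 2>&1 && echo "$h: ok" || echo "$h: unreachable"
done
```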
Disable SELinux and swap
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
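Before continuing, verify that both changes took effect:

```shell
swapon --show    # prints nothing once swap is fully off
free -h          # the Swap row should read 0B
getenforce       # "Permissive" now; "Disabled" after the next reboot
```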
Configure the YUM repositories
Back up the existing repo files and download the Aliyun-mirrored sources,
covering: base, docker-ce, and kubernetes
cd /etc/yum.repos.d/
mkdir back && mv *.repo back
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache fast
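A quick repolist confirms all three repositories are usable:

```shell
# base, docker-ce-stable and kubernetes should all appear with package counts
yum repolist | egrep -i 'base|docker-ce|kubernetes'
```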
Tune kernel parameters
cat <<EOF> /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
modprobe br_netfilter && sysctl -p /etc/sysctl.d/kubernetes.conf
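Spot-check that the module loaded and the key settings are live:

```shell
lsmod | grep br_netfilter   # the module must be loaded for the bridge settings to apply
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print "= 1"
```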
Configure the time zone and time synchronization
Set the time zone:
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0
systemctl restart rsyslog crond
NTP time synchronization (using chrony)
yum remove -y ntp ntpdate
yum -y install chrony
Edit /etc/chrony.conf: comment out the default pool servers and point at Aliyun's NTP service
Reference: https://help.aliyun.com/document_detail/92704.html
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.aliyun.com iburst
systemctl enable --now chronyd # start the service and enable it at boot
Verify (a leading * marks a successfully synced source):
chronyc sources -v
210 Number of sources = 1
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88 2 6 7 1 -1258us[+28752s] +/- 19ms
Configure the firewall
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables
iptables -F && iptables-save
Enable IPVS
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ \$? -eq 0 ]; then
/sbin/modprobe \${kernel_module}
fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
Configure persistent journald logging
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress archived logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# cap total disk usage at 10G
SystemMaxUse=10G
# cap each log file at 200M
SystemMaxFileSize=200M
# keep logs for 2 weeks
MaxRetentionSec=2week
# do not forward to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
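journald now writes to /var/log/journal and can report how much space it is using:

```shell
journalctl --disk-usage   # total space used by the persistent journal
ls /var/log/journal       # a machine-id directory here confirms persistence
```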
Install Docker and configure a registry mirror
(run on all nodes)
To pin a specific version, list the candidates with yum list docker-ce --showduplicates | sort -r
and install one with yum -y install docker-ce-<version>
yum install -y vim yum-utils device-mapper-persistent-data lvm2
yum install -y docker-ce
Configure a Docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://vwlrpbcp.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker && systemctl enable --now docker
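Confirm the daemon is up and the mirror was picked up:

```shell
systemctl is-enabled docker                   # expect "enabled"
docker info | grep -A1 -i 'registry mirrors'  # should list the aliyuncs mirror
```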
Install cri-dockerd so Kubernetes can use Docker as its container runtime
cri-dockerd project documentation:
https://github.com/Mirantis/cri-dockerd/blob/master/README.md
rpm -ivh https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1-3.el7.x86_64.rpm
Edit the cri-docker.service unit file
located in /usr/lib/systemd/system/
and replace the original ExecStart line with the following:
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8 --container-runtime-endpoint fd://
systemctl daemon-reload && systemctl enable --now cri-docker cri-docker.socket
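Before running kubeadm, make sure the cri-dockerd socket it will talk to exists:

```shell
systemctl is-active cri-docker.socket cri-docker   # both should report "active"
ls -l /var/run/cri-dockerd.sock                    # the socket kubeadm is pointed at
```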
Install the Kubernetes components
yum install -y kubelet-1.25.6 kubeadm-1.25.6 kubectl-1.25.6
systemctl enable --now kubelet
Deploy the master node
kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
kubeadm init --kubernetes-version=v1.25.6 --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=172.25.80.10 --cri-socket unix:///var/run/cri-dockerd.sock --image-repository=registry.aliyuncs.com/google_containers
# on servers in mainland China you must specify a mirror repository; the default registry is blocked and unreachable
Once the init succeeds, set up kubectl access as the log output suggests:

# Option 1: per-user kubeconfig (the standard, persistent setup)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Option 2: point KUBECONFIG at admin.conf (persisted here via ~/.bash_profile)
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
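kubectl should now reach the API server. The master will report NotReady until the CNI add-on is installed in the next step:

```shell
kubectl get nodes                 # master shows NotReady until a CNI is installed
kubectl get pods -n kube-system   # coredns stays Pending for the same reason
```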
Install a Pod network add-on (CNI)
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml
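The calico pods take a minute or two to pull and start; once they are Running, the master flips to Ready:

```shell
kubectl get pods -n kube-system | grep calico   # wait for Running status
kubectl get nodes                               # master should now be Ready
```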
Deploy the worker nodes
On each of the two node machines, run the kubeadm join command printed in your own kubeadm init log, remembering to append --cri-socket unix:///var/run/cri-dockerd.sock (without this flag the join fails).
The final message "This node has joined the cluster" confirms the node was added successfully.
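For reference, the join command has the shape below; the token and hash are illustrative placeholders, so use the values from your own init log, or regenerate them on the master:

```shell
# on the master: reprint a valid join command if the init log is gone
kubeadm token create --print-join-command

# on each node: run the printed command plus the cri-dockerd socket flag, e.g.
kubeadm join 172.25.80.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket unix:///var/run/cri-dockerd.sock
```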

