1. Install Docker
Reference: Installing Docker
2. Install etcd
Reference: Installing etcd
3. Install Kubernetes
Kubernetes must be installed on every machine in the cluster.
1) Configure the Aliyun repository
Debian/Ubuntu:
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" >>/etc/apt/sources.list.d/kubernetes.list
CentOS/RHEL/Fedora:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Note: on CentOS/RHEL/Fedora, the baseurl depends on your distribution release. If kubernetes-el7-x86_64 is not the right path for your system, browse https://mirrors.aliyun.com/kubernetes/ to find the correct one.
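As an optional sanity check after adding the repository, refresh the metadata and confirm the repo is visible:
yum makecache
yum repolist | grep -i kubernetes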
2) Disable SELinux
Edit /etc/selinux/config and set SELINUX to disabled:
vi /etc/selinux/config
Change it to the following:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
Reboot the machine for the change to take effect.
Check the SELinux status:
/usr/sbin/sestatus -v ## if the "SELinux status" field shows enabled, SELinux is on
or
getenforce ## this command works as well
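If you prefer not to edit the file interactively, the same change can be scripted; this sketch assumes the stock SELINUX=enforcing line is present in the config:
setenforce 0 ## takes effect immediately, lasts until reboot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config ## persists across reboots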
3) Disable swap on every server
swapoff -a ; sed -i '/swap/d' /etc/fstab
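To verify that swap is really off (swapon should print nothing, and free should show 0 for swap):
swapon --show
free -h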
4) Configure the iptables-related sysctl settings on every server
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
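These bridge sysctls only take effect when the br_netfilter kernel module is loaded, so it is worth loading it explicitly and then verifying the values:
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward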
5) Install Kubernetes (k8s) on every machine
CentOS/RHEL/Fedora:
yum install -y kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0 --disableexcludes=kubernetes
Note: --disableexcludes=kubernetes disables every repository other than kubernetes.
Debian/Ubuntu:
apt-get update && apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
Note: apt-mark hold prevents kubelet, kubeadm, and kubectl from being upgraded automatically.
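To match the 1.21.0 version pinned on CentOS above, the Debian/Ubuntu packages can be pinned as well. This is a sketch; the exact package revision (assumed here to be 1.21.0-00) may differ in your repository:
apt-get install -y kubelet=1.21.0-00 kubeadm=1.21.0-00 kubectl=1.21.0-00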
6) Start Kubernetes (k8s)
systemctl restart kubelet
7) Configure Kubernetes (k8s) to start at boot
systemctl enable kubelet
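You can inspect the service and its recent logs as follows; kubelet restarting in a loop is normal at this stage, because it has nothing to do until kubeadm init or kubeadm join runs:
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20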
Note on Kubernetes (k8s) version naming: in a version number such as x.y.z-0, x is the major version, y is the minor version, and z is the patch number. The version is usually prefixed with a v, standing for version.
To list the Kubernetes (k8s) versions that can be installed from yum:
yum list --showduplicates kubeadm --disableexcludes=kubernetes
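The Debian/Ubuntu counterpart for listing installable versions is:
apt-cache madison kubeadm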
4. Initialize and configure the master
1) Define the Kubernetes master and node in /etc/hosts
vim /etc/hosts
Add the entries 192.168.31.21 kube-master and 192.168.31.31 kube-minion, as follows:
192.168.31.21 kube-master
192.168.31.31 kube-minion
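A quick check that both names resolve from every machine:
ping -c 1 kube-master
ping -c 1 kube-minion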
2) The master's core components, such as kube-apiserver, kube-scheduler, kube-controller-manager, and etcd, all run as containers. Because the default image registry abroad may be unreachable, switch the image download to the Alibaba Cloud mirror, and also pin the Kubernetes version and specify the pod network CIDR:
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.0 --pod-network-cidr=10.244.0.0/16
Note: the --image-repository parameter defaults to k8s.gcr.io; here we point it at the domestic mirror registry.aliyuncs.com/google_containers. The --kubernetes-version parameter also needs to be set, because its default value stable-1 makes kubeadm download the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning it to a fixed version avoids that.
If you need to re-initialize, run the following command:
kubeadm reset
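Note that kubeadm reset does not clean up everything; it reminds you to clear iptables rules and CNI configuration yourself. A common follow-up, to be used with care since it flushes all iptables rules:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d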
Pulling the coredns image from the mirror may fail with the following error:
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login' , error: exit status 1
This can be resolved as follows:
docker pull coredns/coredns:1.8.0
docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
Then re-run the kubeadm init command above.
3) Create the Kubernetes (k8s) credentials file on the master
When the initialization above finishes, it prints follow-up instructions; copy and run the three commands from that output in order:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
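A quick way to confirm the kubeconfig is picked up correctly:
kubectl cluster-info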
Related: Introduction to Kubernetes (k8s) and how to install and configure it
5. Configure the Kubernetes nodes
On the master, run the following command:
kubeadm token create --print-join-command
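The printed command has this general shape (the token and hash below are placeholders, not real values):
kubeadm join 192.168.31.21:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>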
Copy the printed command and run it on the kube-minion node. Then, on the master, list the nodes:
[root@kube-master ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
kube-master   NotReady   control-plane,master   x     v1.21.0
kube-minion   NotReady   <none>                 x     v1.21.0
kube-minion has joined the cluster, but its STATUS is NotReady. The reason is that the pods cannot yet communicate with each other; a CNI network plugin has to be installed. Calico can be installed from either the calico.yaml or the calico-etcd.yaml manifest, as follows:
1) Fetch the calico manifest and images
To install with calico.yaml:
curl https://docs.projectcalico.org/v3.19/manifests/calico.yaml -o calico.yaml
List the images that calico.yaml requires:
[root@kube-master ~]# grep image calico.yaml
image: docker.io/calico/cni:v3.19.3
image: docker.io/calico/cni:v3.19.3
image: docker.io/calico/pod2daemon-flexvol:v3.19.3
image: docker.io/calico/node:v3.19.3
image: docker.io/calico/kube-controllers:v3.19.3
Pull the required images with docker pull, save them into a tar archive, copy it to the worker nodes, and load the images back on each machine:
docker save calico/cni calico/kube-controllers calico/node calico/pod2daemon-flexvol > calico-3.19-img.tar
scp calico-3.19-img.tar kube-minion:~
[root@kube-minion ~]# docker load -i calico-3.19-img.tar
or
To install with calico-etcd.yaml:
curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o calico.yaml
List the images that calico.yaml requires:
[root@kube-master ~]# grep image calico.yaml
image: docker.io/calico/cni:v3.21.2
image: docker.io/calico/pod2daemon-flexvol:v3.21.2
image: docker.io/calico/node:v3.21.2
image: docker.io/calico/kube-controllers:v3.21.2
Pull the required images with docker pull, save them into a tar archive, copy it to the worker nodes, and load the images back on each machine:
docker save calico/cni calico/kube-controllers calico/node calico/pod2daemon-flexvol > calico-3.21.2-img.tar
scp calico-3.21.2-img.tar kube-minion:~
[root@kube-minion ~]# docker load -i calico-3.21.2-img.tar
You can run docker images to check that the images loaded successfully on each machine, and then modify the calico.yaml file as described below.
Note: when installing with calico-etcd.yaml, the manifest contents also need to be modified. The script below uses ETCD_ENDPOINTS="https://192.168.31.21:2379"; if your etcd listens on a different IP, change it accordingly. All of the following commands need to be run:
# etcd endpoints
ETCD_ENDPOINTS="https://192.168.31.21:2379"
sed -i "s#.*etcd_endpoints:.*#  etcd_endpoints: \"${ETCD_ENDPOINTS}\"#g" calico.yaml
sed -i "s#__ETCD_ENDPOINTS__#${ETCD_ENDPOINTS}#g" calico.yaml
# etcd certificate data
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
# substitute the values into calico.yaml
sed -i "s#.*etcd-ca:.*#  etcd-ca: ${ETCD_CA}#g" calico.yaml
sed -i "s#.*etcd-cert:.*#  etcd-cert: ${ETCD_CERT}#g" calico.yaml
sed -i "s#.*etcd-key:.*#  etcd-key: ${ETCD_KEY}#g" calico.yaml
sed -i 's#.*etcd_ca:.*#  etcd_ca: "/calico-secrets/etcd-ca"#g' calico.yaml
sed -i 's#.*etcd_cert:.*#  etcd_cert: "/calico-secrets/etcd-cert"#g' calico.yaml
sed -i 's#.*etcd_key:.*#  etcd_key: "/calico-secrets/etcd-key"#g' calico.yaml
sed -i "s#__ETCD_CA_CERT_FILE__#/etc/kubernetes/pki/etcd/ca.crt#g" calico.yaml
sed -i "s#__ETCD_CERT_FILE__#/etc/kubernetes/pki/etcd/server.crt#g" calico.yaml
sed -i "s#__ETCD_KEY_FILE__#/etc/kubernetes/pki/etcd/server.key#g" calico.yaml
sed -i "s#__KUBECONFIG_FILEPATH__#/etc/cni/net.d/calico-kubeconfig#g" calico.yaml
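A quick sanity check that the placeholders were actually replaced:
grep -E 'etcd_endpoints|etcd-ca|__ETCD' calico.yaml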
When installing with calico-etcd.yaml, the files under /etc/kubernetes/pki/etcd/ must also be copied to every node:
[root@kube-master ~]# scp /etc/kubernetes/pki/etcd/* kube-minion:/etc/kubernetes/pki/etcd/
2) Adjust the network configuration
[root@kube-master ~]# vim calico.yaml
Replace 192.168.0.0 in the calico.yaml file with the pod CIDR 10.244.0.0/16 that was specified when the cluster was initialized:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
After saving the change, install calico on the master:
[root@kube-master ~]# kubectl apply -f calico.yaml
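While calico starts up, you can watch the pods being created (Ctrl-C stops the watch):
kubectl get pods -n kube-system -w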
To remove calico, the command is:
kubectl delete -f calico.yaml
After the installation completes, check the nodes' status:
[root@kube-master ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
kube-master   Ready    control-plane,master   22h   v1.21.0
kube-minion   Ready    <none>                 18h   v1.21.0
Note: the STATUS of both nodes is now Ready.
3) Verify the installation
kubectl get pods --all-namespaces
If the installation succeeded, all pods should be READY with STATUS Running, as below:
[root@kube-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       hello-node                                 1/1     Running   0          27h
kube-system   calico-kube-controllers-85f9db95f7-hwngm   1/1     Running   1          28h
kube-system   calico-node-cl2z4                          1/1     Running   1          28h
kube-system   calico-node-l2xzf                          1/1     Running   1          28h
kube-system   coredns-545d6fc579-9t2fb                   1/1     Running   1          31h
kube-system   coredns-545d6fc579-t5nj9                   1/1     Running   1          31h
kube-system   etcd-kube-master                           1/1     Running   1          31h
kube-system   kube-apiserver-kube-master                 1/1     Running   2          31h
kube-system   kube-controller-manager-kube-master        1/1     Running   1          31h
kube-system   kube-proxy-498sm                           1/1     Running   1          28h
kube-system   kube-proxy-jbj27                           1/1     Running   1          31h
kube-system   kube-scheduler-kube-master                 1/1     Running   1          31h
Related: How to fix kubeadm init failing after a kubeadm reset on Kubernetes (k8s)
6. Firewall configuration
In non-production environments the firewall can simply be turned off. If a production environment needs the firewall enabled, open the following ports:
Server role | Ports
----------- | -----
etcd | 2379, 2380
Master | 6443, 8472
Node | 8472
LB | 8443
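As a sketch with firewalld (adjust the port set to each node's role; 8472 is the VXLAN overlay port and carries UDP traffic):
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload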