Installing k8s took quite a bit of effort. After my first attempt, the nodes sat in NotReady no matter what I changed 😡, so I gave up and reinstalled a clean Linux system from scratch. Finally, finally, finally got it working. Ugh!
- Remove any old Docker packages
```bash
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
```
- Configure the yum repository
```bash
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```
- Install Docker
```bash
# install the latest version
yum install -y docker-ce docker-ce-cli containerd.io

# or pin specific versions
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
```
The docker-ce, docker-ce-cli, and containerd.io versions need to match each other; mixing versions can cause problems.
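If you're not sure which versions the repo offers, you can list them first. This is standard yum usage, nothing specific to this setup:

```bash
# list every docker-ce version available in the configured repo, newest first
yum list docker-ce --showduplicates | sort -r
yum list docker-ce-cli --showduplicates | sort -r
```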
- Start Docker
```bash
systemctl enable docker --now
```
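A quick sanity check that the daemon actually came up:

```bash
systemctl is-active docker   # should print: active
docker version               # both the Client and Server sections should appear
```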
- Configure a registry mirror (and daemon options)
```bash
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ud6340vz.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

systemctl daemon-reload
systemctl restart docker
```
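After the restart it's worth confirming the settings took effect, since kubelet expects the systemd cgroup driver. Both values show up in plain `docker info` output:

```bash
docker info | grep -i "cgroup driver"          # should print: Cgroup Driver: systemd
docker info | grep -A1 -i "registry mirrors"   # should list the aliyuncs mirror
```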
- Prepare the base environment
I'm using CentOS 7. Each machine needs 2 GB of RAM or more, and 2 or more CPUs are recommended. On every machine you also need to deal with the firewall, set the hostname, put SELinux in permissive mode, and disable the swap partition. My machines are listed below, with a quick resource check after the table.
| Machine    | IP            | Role   |
|------------|---------------|--------|
| k8s-master | 192.168.1.182 | master |
| k8s-node1  | 192.168.1.183 | node1  |
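A quick way to verify the machine meets the requirements, plus the firewall step. Disabling firewalld outright is a common lab-cluster shortcut (my assumption here; in production you would open the specific Kubernetes ports instead):

```bash
free -h   # total memory should be 2.0G or more
nproc     # should print 2 or more

# lab shortcut: turn the firewall off entirely
systemctl stop firewalld
systemctl disable firewalld
```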
Set each machine's hostname. >>> This must be done on both the master and the worker nodes. <<<
```bash
hostnamectl set-hostname master   # on the master
hostnamectl set-hostname node1    # on the first worker
hostnamectl set-hostname nodeX    # on each additional worker
```
Set SELinux to permissive mode (effectively disabling it):
```bash
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```
Disable swap:
```bash
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
```
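To confirm both changes stuck:

```bash
getenforce              # should print: Permissive
free -m | grep -i swap  # the Swap line should show all zeros
```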
Allow iptables to see bridged traffic:
```bash
cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
```
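Note that the modules-load.d file only takes effect on the next boot, so the two `net.bridge.*` keys may not exist yet when `sysctl --system` runs. To load the module immediately and verify, a small extra step I'd recommend:

```bash
sudo modprobe br_netfilter                  # load the module now, without rebooting
lsmod | grep br_netfilter                   # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables   # should print: ... = 1
```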
All of the commands above must be run on the worker nodes as well.
Install kubelet, kubeadm, and kubectl:
```bash
cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

systemctl enable --now kubelet
```
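At this point kubelet will keep crash-restarting until the node is initialized or joined; that is expected. If you want to watch it anyway:

```bash
systemctl status kubelet   # "activating (auto-restart)" is normal before kubeadm init/join
journalctl -u kubelet -f   # follow the logs if anything else looks wrong
```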
Add hostname mappings; adjust these to your own IPs:
```bash
cat >> /etc/hosts << EOF
192.168.1.182 master
192.168.1.183 node1
EOF
```
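A quick check that the names now resolve from each machine:

```bash
ping -c 2 master
ping -c 2 node1
```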
Initialize the master node.
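Pulling the control-plane images is the slow part of init. Optionally you can pre-pull them first; `kubeadm config images pull` is a standard subcommand and accepts the same mirror and version flags as the init below:

```bash
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.20.9
```

Then run the actual init: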
```bash
kubeadm init \
    --apiserver-advertise-address=192.168.1.182 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.20.9 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16
```
When it succeeds, it prints output like this:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.1.182:6443 --token 4lx9wi.ie0zf17tqmlskrl6 \
    --discovery-token-ca-cert-hash sha256:28170623d9d29bf0ab7de702933b416b392c52d78d368892e8f268ee4903cd30 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.182:6443 --token 4lx9wi.ie0zf17tqmlskrl6 \
    --discovery-token-ca-cert-hash sha256:28170623d9d29bf0ab7de702933b416b392c52d78d368892e8f268ee4903cd30
```
Copy the last join command from the output and run it on each worker node to add it to the cluster:
```bash
kubeadm join 192.168.1.182:6443 --token 4lx9wi.ie0zf17tqmlskrl6 \
    --discovery-token-ca-cert-hash sha256:28170623d9d29bf0ab7de702933b416b392c52d78d368892e8f268ee4903cd30
```
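The bootstrap token is only valid for 24 hours. If it has expired by the time you add a node, generate a fresh join command on the master (a standard kubeadm subcommand):

```bash
kubeadm token create --print-join-command
```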
On the master node, follow the steps from the init output in order:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
You can now inspect the cluster with kubectl:
```bash
# list the nodes
kubectl get nodes

# list all pods in all namespaces
kubectl get pod -A
```
At this point the nodes still show NotReady: without a network plugin they will stay NotReady forever.
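You can confirm that the missing CNI plugin is the cause before fixing it (the exact wording of the condition message varies by version):

```bash
# the Ready condition typically reports something like:
# "KubeletNotReady ... network plugin is not ready: cni config uninitialized"
kubectl describe node node1 | grep -i -A 3 "ready"
```

Then install flannel: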
```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
If that URL is unreachable, save the following as kube-flannel.yml and apply it locally:
```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.20.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.20.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
```
Apply the local manifest to install flannel:
```bash
kubectl apply -f kube-flannel.yml
```
Then check the pods again. It may take a few queries; once everything has come up, every pod's STATUS shows Running:
```
kubectl get pod -A

NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-nws4c            1/1     Running   0          33m
kube-flannel   kube-flannel-ds-wbbsn            1/1     Running   0          33m
kube-system    coredns-7f89b7bc75-2v75b         1/1     Running   0          41m
kube-system    coredns-7f89b7bc75-7l5zz         1/1     Running   0          41m
kube-system    etcd-master                      1/1     Running   0          41m
kube-system    kube-apiserver-master            1/1     Running   0          41m
kube-system    kube-controller-manager-master   1/1     Running   0          41m
kube-system    kube-proxy-6wk66                 1/1     Running   0          41m
kube-system    kube-proxy-bbmkg                 1/1     Running   0          35m
kube-system    kube-scheduler-master            1/1     Running   0          41m

kubectl get nodes

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   43m   v1.20.9
node1    Ready    <none>                 36m   v1.17.0
```
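If a flannel pod is stuck in CrashLoopBackOff instead, its logs usually say why (a common culprit is a pod CIDR that doesn't match the `--pod-network-cidr` passed to kubeadm init). Using the pod name from the listing above:

```bash
kubectl logs -n kube-flannel kube-flannel-ds-nws4c
kubectl describe node node1   # check the Conditions and Events sections
```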
Once the nodes show Ready, the setup is complete ~~~