Differences between revisions of "Offline deployment of kubernetes v1.9.0 with kubeadm on CentOS 7"

From the linux中国网 wiki, by Evan

Revision of 2019-10-14 (Mon) 13:55

System-wide proxying (GFW bypass)

centos7 https://www.jianshu.com/p/1cb70b8ea2d7

This pulls from gcr.io directly through a docker proxy; it did not work at first, but succeeded on Fri May 24 16:24:42 CST 2019.

Installing and configuring a Shadowsocks client on CentOS 7.x

Terminal proxying on Ubuntu with shadowsocks and polipo

docker registry mirrors for K8s images

Tried this without success; worth retrying some time: K8s image mirrors

Offline deployment of kubernetes v1.9.0 with kubeadm

The steps below follow this approach.

Prerequisites

Disable swap; stop the firewall and SELinux.

Add a registry-mirror to docker.


info

os: CentOS 7.x
hosts: 2018k8smaster 2018k8snode1 2018k8snode2
ip:
192.168.88.117    master
192.168.88.118    slave
192.168.88.119    slave
(note: the commands later on this page use 192.168.88.30-32 with hostnames master/node1/node2)



#hosts
cat >>/etc/hosts <<EOF
192.168.88.30  master
192.168.88.31  node1
192.168.88.32  node2
EOF

Sync host clocks

systemctl  start chronyd.service && systemctl  enable chronyd.service

Disable swap

swapoff -a  # temporary; also comment out the swap entry in /etc/fstab to make it permanent

Set the hostname permanently (the static hostname):

hostnamectl --static set-hostname  master
hostnamectl --static set-hostname  node1
hostnamectl --static set-hostname  node2

On all nodes

Configure ssh keys; stop the firewall and disable SELinux

 systemctl stop firewalld && systemctl disable firewalld  # on an internet-facing host, keep iptables etc. enabled instead


 setenforce 0
 cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Enable the ipvs kernel modules
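This section was left empty in the original. As a minimal sketch, the ipvs modules kube-proxy can use are typically loaded with modprobe; the module names below are the commonly used set, and the /tmp path is only for illustration (the usual location is /etc/sysconfig/modules/ipvs.modules):

```shell
# Sketch: write a script that loads the ipvs kernel modules.
# On a real node, install it under /etc/sysconfig/modules/ instead of /tmp.
cat > /tmp/ipvs.modules <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /tmp/ipvs.modules
# On the node: bash /tmp/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
```

Verify with `lsmod | grep ip_vs` after running it on the node.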


安装docker-ce

 sudo yum install -y yum-utils   device-mapper-persistent-data   lvm2
sudo yum-config-manager     --add-repo     https://download.docker.com/linux/centos/docker-ce.repo 
yum install docker-ce -y
Alternatively, install docker-ce from the Aliyun mirror

Installing a Shadowsocks client on CentOS 7.x for terminal proxying

docker proxy


#Don't drop the docker.service.d part of the path, and remember to check that the proxy actually works
#mkdir -p /etc/systemd/system/docker.service.d
#vi /etc/systemd/system/docker.service.d/http-proxy.conf

vi /usr/lib/systemd/system/docker.service

[Service]
Environment="HTTPS_PROXY=http://127.0.0.1:8188/" "HTTP_PROXY=http://127.0.0.1:8188/" "NO_PROXY=localhost,127.0.0.1,192.168.88.30,192.168.88.31,192.168.88.32,10.96.0.0,10.224.0.0"

#Environment="HTTP_PROXY=http://proxy.example.com:80/" "HTTPS_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"

systemctl daemon-reload
systemctl restart docker 
systemctl enable docker
systemctl status  docker
systemctl show --property=Environment docker


other
evan@k8s-master:~$ sudo systemctl enable docker 
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

docker configuration

#[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

It is suggested to set docker's cgroup driver to cgroupfs as well, matching kube.

On all nodes
## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
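A stray comma in daemon.json will leave docker unable to restart, so it is worth validating the file first. A minimal sketch, assuming python3 is available (the /tmp path is for illustration; on a node check /etc/docker/daemon.json):

```shell
# Sketch: sanity-check daemon.json as JSON before restarting docker.
cat > /tmp/daemon.json <<'EOF'
{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json: valid JSON"
```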

https://kubernetes.io/docs/setup/production-environment/container-runtimes/

aliyun maybe ok

CentOS / RHEL / Fedora

#on all nodes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
yum install -y  kubelet kubeadm kubectl kubernetes-cni #docker
systemctl enable docker && systemctl start docker
systemctl enable kubelet
#systemctl start kubelet  # change the config below first, otherwise kubelet sometimes fails to start

 sudo usermod -aG docker  `whoami`

install start

pass: steps for all nodes (offline method)

Download the packages: docker / k8s / k8s_images.tar.bz2

md5sum k8s_images.tar.bz2 

b60ad6a638eda472b8ddcfa9006315ee k8s_images.tar.bz2

tar xvf k8s_images.tar.bz2 && cd k8s_images

pass: install docker-ce, resolving dependencies

rpm -ivh libtool-ltdl-2.4.2-22.el7_3.x86_64.rpm  libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm  libseccomp-2.3.1-3.el7.x86_64.rpm 
yum install -y  policycoreutils-python
rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm &&  rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm 

Switch docker's registry mirror to the domestic daocloud one

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://5a71e0d8.m.daocloud.io

Start docker and enable it at boot

 systemctl start docker && systemctl enable docker

Configure system routing parameters so kubeadm doesn't emit routing warnings

echo "net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
" >> /etc/sysctl.conf
sysctl -p


pass: install kubeadm kubelet kubectl

rpm -ivh kubectl-1.9.0-0.x86_64.rpm kubeadm-1.9.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm  \
kubernetes-cni-0.6.0-0.x86_64.rpm socat-1.7.3.2-2.el7.x86_64.rpm 


pass: load the offline docker images

 cd docker_images/
 for image in *; do echo "$image is loading" && docker load < "$image"; done

On the master node

Start kubelet and initialize the master

#systemctl start kubelet&&  systemctl enable kubelet.service
 
kubelet would not start:
the kubelet cgroup driver differed from docker's. docker defaults to cgroupfs, kubelet defaults to systemd.

vi /usr/lib/systemd/system/kubelet.service
[Service]
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

#the old-version form
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
[Service]
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

#auto
 sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload && systemctl restart kubelet &&  systemctl enable kubelet.service

init master

#start the initialization
kubeadm init   --apiserver-advertise-address=192.168.88.30  --pod-network-cidr=10.224.0.0/16 # --apiserver-advertise-address=masterip


A handy trick: while init runs, open another terminal and run

journalctl -f -u kubelet.service

to see exactly what it is stuck on.


On success, the output looks like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf



You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.88.30:6443 --token lebi4u.ja4kqi7ly89qzlpe \
    --discovery-token-ca-cert-hash sha256:5cedf4ddfd61c549e5d926e6041a5e29272c7a253c8d4bcae9d189ea6745c867 



# pass: alternative init invocations

kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.224.0.0/16

kubeadm init --kubernetes-version=v1.9.0  --apiserver-advertise-address=192.168.88.21  --pod-network-cidr=10.224.0.0/16


systemctl start kubelet&&  systemctl enable kubelet.service

calico network

# if using calico as the pod network (the second option)

kubeadm init   --apiserver-advertise-address=192.168.88.30  --pod-network-cidr=192.168.0.0/16

mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.88.30:6443 --token zwznuv.mpjlc3wd2crtmzh9 \
    --discovery-token-ca-cert-hash sha256:2b10a8586ed7dc82d48369906329ad63dffac146c10238a18d327652ef343a65 



kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

kubectl configuration

Other master configuration
In production use a regular user; root is used here.

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config


[root@master ~]# kubectl  get cs 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@master ~]# kubectl  get nodes 
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   8m56s   v1.15.0


k8s reset


#In China the network is flaky, for reasons you understand, and init rarely succeeds on the first try; hence reset. Understand what it does first
#careful - this wipes the node's cluster state
kubeadm reset
rm  -rf  /var/lib/etcd/*

Removing a node

#on the master
kubectl drain node1  --delete-local-data --force --ignore-daemonsets
 kubectl delete node node1


#on the node
[root@node2 ~]# kubeadm  reset

Configure kubectl credentials

 cat  /etc/sudoers.d/evan
echo 'evan ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/evan

su - evan 
mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "source <(kubectl completion bash)" >> ~/.bashrc
exit 

# For root this step cannot be skipped, otherwise:  kubectl  apply -f kube-flannel.yml  -> The connection to the server localhost:8080 was refused - did you specify the right host or port?

export KUBECONFIG=/etc/kubernetes/admin.conf
# or put it straight into ~/.bash_profile
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile


Keep the kubeadm join xxxx command around; if you forget it, recover the token with kubeadm token list
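If the --discovery-token-ca-cert-hash is lost too, it can be recomputed from the CA certificate (and newer kubeadm versions can print the whole line with `kubeadm token create --print-join-command`). A sketch of the hash computation; a throwaway self-signed cert is generated here so it runs anywhere, while on the master you would use /etc/kubernetes/pki/ca.crt:

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA cert.
# A throwaway cert stands in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -days 1 -subj "/CN=kubernetes" 2>/dev/null
# sha256 over the DER-encoded public key, i.e. the part after "sha256:"
openssl x509 -pubkey -in /tmp/ca.crt -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```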

Install the pod network

Note: this subsection runs only on the master. Options include flannel, macvlan, calico and weave; flannel is used here.

Download this file:

#download the yml file
wget  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml



#On new versions you can apply straight from the URL without downloading; some old versions needed two files
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@master tmp]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

#Accidentally using the old 0.9 version the first time left coredns failing indefinitely




Usually this does not need changing.
To use a different subnet, keep kubeadm's --pod-network-cidr= in sync with the Network value here. (Note: the init examples above used --pod-network-cidr=10.224.0.0/16, which does not match the 10.244.0.0/16 default below.)

vim kube-flannel.yml

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
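To double-check that the two values line up, the Network field can be read out of the net-conf and compared with the CIDR passed to kubeadm. A sketch against a sample net-conf (the /tmp path and python3 are assumptions for illustration):

```shell
# Extract the flannel Network value to compare with kubeadm's --pod-network-cidr.
cat > /tmp/net-conf.json <<'EOF'
{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }
EOF
python3 -c 'import json; print(json.load(open("/tmp/net-conf.json"))["Network"])'
```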




node join

  kubeadm join 192.168.88.30:6443 --token 5l64r8.j9fyewgp28gzvcdb \
    --discovery-token-ca-cert-hash sha256:0802f5d6e097a834c70fbf6012b9c66cbe1c17fd13b62562aa62d74a80bd4c49 

--ignore-preflight-errors=Swap  # if you keep a swap partition for the OS and other applications


Check pod status; after a short while they are all running.

kubectl get nodes  # check node status


[root@master docker_images]#  kubectl get pod --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      0/1       Pending   0          1s
kube-system   kube-apiserver-master            1/1       Running   0          0s
kube-system   kube-controller-manager-master   0/1       Pending   0          0s
kube-system   kube-dns-6f4fd4bdf-r6w6q         0/3       Pending   0          9m
kube-system   kube-flannel-ds-x5xqw            1/1       Running   0          9s
kube-system   kube-proxy-69q7f                 1/1       Running   0          9m
kube-system   kube-scheduler-master            0/1       Pending   0          0s
[root@master docker_images]#  kubectl get pod --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          54s
kube-system   kube-apiserver-master            1/1       Running   0          53s
kube-system   kube-controller-manager-master   1/1       Running   0          53s
kube-system   kube-dns-6f4fd4bdf-r6w6q         3/3       Running   0          10m
kube-system   kube-flannel-ds-x5xqw            1/1       Running   0          1m
kube-system   kube-proxy-69q7f                 1/1       Running   0          10m
kube-system   kube-scheduler-master            1/1       Running   0          53s


Get cluster status

[root@master tmp]# kubectl  cluster-info 
Kubernetes master is running at https://192.168.88.30:6443
KubeDNS is running at https://192.168.88.30:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@master tmp]# kubectl  version --short=true
Client Version: v1.15.0
Server Version: v1.15.0

pass: images involved in this install

 # You can pull these yourself and docker load them locally; better still, export them to a tar    p27
gcr.io/google_containers/kube-proxy-amd64:v1.9.0
gcr.io/google_containers/kube-apiserver-amd64:v1.9.0
gcr.io/google_containers/kube-controller-manager-amd64:v1.9.0
gcr.io/google_containers/kube-scheduler-amd64:v1.9.0
quay.io/coreos/flannel:v0.9.1-amd64
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
gcr.io/google_containers/etcd-amd64:3.1.10
gcr.io/google_containers/pause-amd64:3.0

gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1
gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
gcr.io/google_containers/heapster-amd64:v1.4.2

Offline copies of the foreign Kubernetes images

Test the cluster

# This did not work here; it did work on Ubuntu 18.04
Create an application from the master node.
Here we create an app named httpd-app, image httpd, with two replica pods:

kubectl run httpd-app --image=httpd --replicas=2

[root@master ~]#  kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
httpd-app   2         2         2            0           24s
[root@master ~]# kubectl get pods -o wide
NAME                         READY     STATUS              RESTARTS   AGE       IP        NODE
httpd-app-5fbccd7c6c-jq2bh   0/1       ContainerCreating   0          1m        <none>    node2
httpd-app-5fbccd7c6c-q4jcz   0/1       ContainerCreating   0          1m        <none>    node1

Since the resource created is not a Service, kube-proxy is not invoked;
test by accessing the pod IP directly.

The test did not succeed here:
[root@k8sm ~]#  kubectl get services kubernetes-dashboard -n kube-system
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.98.65.86   <none>        443/TCP   16h
[root@k8sm ~]# kubectl  get pods -o wide 
NAME                               READY     STATUS    RESTARTS   AGE       IP           NODE
httpd-app-5fbccd7c6c-54w56         1/1       Running   0          1d        10.224.1.2   k8sn1
httpd-app-5fbccd7c6c-55796         1/1       Running   0          1d        10.224.2.5   k8sn2
nginx-deployment-d5655dd9d-d5pns   1/1       Running   0          1d        10.224.2.6   k8sn2
nginx-deployment-d5655dd9d-w8jcn   1/1       Running   0          1d        10.224.1.3   k8sn1
[root@k8sm ~]# curl 10.224.1.2
^C
[root@k8sm ~]# ping  10.224.1.2
PING 10.224.1.2 (10.224.1.2) 56(84) bytes of data.
^C
--- 10.224.1.2 ping statistics ---
123 packets transmitted, 0 received, 100% packet loss, time 122000ms

References


Removing a node

Regenerating the token

Troubleshooting


Installing k8s
https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/../../pool/7143f62ad72a1eb1849d5c1e9490567d405870d2c00ab2b577f1f3bdf9f547ba-kubeadm-1.15.0-0.x86_64.rpm: [Errno -1] Package does not match intended download. Suggested: run yum --enablerepo=kubernetes clean metadata
Trying other mirror.

Fixed by not proxying: switch DNS to Aliyun's and remove the docker proxy.



[root@master ~]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

#this step was missing
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

k8s init err
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 127.0.0.1:8188: connect: connection refused
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`


The docker proxy port is 8118, not 8188.



kubelet service won't start?
The cgroup driver configuration must match.

Check docker's cgroup driver:

docker info|grep Cgroup
It is either systemd or cgroupfs; change the kubelet service config to match docker's.

#the kubelet 1.15 form
vi /usr/lib/systemd/system/kubelet.service
[Service]
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

#probably the older-version form
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs  # make this match docker's setting

systemctl daemon-reload && systemctl restart kubelet &&  systemctl enable kubelet.service

Init fails: the fix is the same as "kubelet service won't start" above.
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'


Cause:
the token expired and was removed; kubeadm token list on the master returned nothing.
kubeadm token list
Fix:
regenerate the token. Tokens are valid for 24 hours by default; pass --ttl 0 when creating one to make it permanent.
[root@master ~]# kubeadm token create --ttl 0
3a536a.5d22075f49cc5fb8
[root@master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
3a536a.5d22075f49cc5fb8   <forever>   <never>                     authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token


remove  docker-io
 yum remove docker*

Even installed from the Aliyun yum repo, a plain kubeadm init still fails because it needs to reach k8s.gcr.io:
root@master ~]# kubeadm init
I0522 15:45:12.888481    9523 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)


warning on node 
[root@node1 ~]# kubeadm join 192.168.88.30:6443 --token 5l64r8.j9fyewgp28gzvcdb     --discovery-token-ca-cert-hash sha256:0802f5d6e097a834c70fbf6012b9c66cbe1c17fd13b62562aa62d74a80bd4c49
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/



 cat  /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1"

Since kubelet's default driver is already cgroupfs, you only need to specify it when the CRI's cgroup driver is not cgroupfs (k8s recommends configuring docker's cgroup driver as systemd).
But init had problems that way.
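To check what kubelet is actually running with, the driver can be grepped out of the flags file and compared with docker's (`docker info | grep -i 'cgroup driver'`). A self-contained sketch against a sample file; on a node the real path is /var/lib/kubelet/kubeadm-flags.env:

```shell
# Pull the cgroup driver out of a kubeadm-flags.env style file.
# A sample file is written to /tmp so the sketch runs anywhere.
cat > /tmp/kubeadm-flags.env <<'EOF'
KUBELET_KUBEADM_ARGS="--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1"
EOF
grep -o 'cgroup-driver=[a-z]*' /tmp/kubeadm-flags.env
# prints: cgroup-driver=cgroupfs
```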



A quick way to copy the google k8s.gcr.io images into an Aliyun registry (needs a proxied browser)


Common k8s installation problems

Going further

Next step: build my own offline images

Fully automated one-command deployment of CentOS 7.x + Kubernetes 1.12.3 + Dashboard 1.8.3 master and node

Installing a highly available k8s master cluster

Building a Kubernetes (1.12.2) cluster with kubeadm


Accessing pods through a Service - 5 minutes of Docker a day (136)

Building a k8s cluster entirely by hand - (1) preparing the environment

Building a k8s cluster entirely by hand - (2) deploying the core modules

Building a k8s cluster entirely by hand - (3) authentication, authorization and service discovery

Manually building a kubernetes cluster on Ubuntu 16.04

A detailed guide to the many ways of installing kubernetes on CentOS

(1) Building a kubernetes (k8s) cluster entirely by hand, in detail - preparing the environment

Manually building a highly available kubernetes cluster

Hand-rolled Kubernetes deployment on Ubuntu 16.04

Step by step towards a kubeadm-based highly available Kubernetes cluster - part one

Kubernetes v1.10.x HA fully manual installation tutorial (TL;DR)


kubeadm HA master (v1.14.0) offline package + automation scripts + common addons for CentOS

see also

Re-initializing a kubernetes cluster v1.10.0 with kubeadm


Offline, Calico network: Building a Kubernetes cluster from scratch (3: building the k8s cluster)


Installing and configuring k8s (kubernetes) with kubeadm on CentOS 7


Deploying kubernetes v1.6.0 with ansible

Docker Q&A (100 questions)


Building a Kubernetes cluster from scratch (1: introduction)

Offline deployment of kubernetes v1.9.0 with kubeadm

Official documentation

Official Chinese documentation

kubeadm-init use configfile

k8s cni network plugins

Deploying a k8s Cluster (part 1) - 5 minutes of Docker a day (118)

Building a Kubernetes cluster with kubeadm on CentOS 7 inside China, with Kubernetes Dashboard

Building a kubernetes cluster with kubeadm

Probably of little use: natively accelerated Kubernetes installation for the China region

How to download the Kubernetes images and rpms


A few notes on installing Kubernetes

Building a Kubernetes cluster with kubeadm on CentOS 7 inside China

k8s introductory documentation

Installing Kubernetes v1.10 with kubeadm, with an FAQ

Many ways of building a Kubernetes cluster

Kubernetes Handbook

Kubernetes 1.9 offline installation

Installing Kubernetes 1.6 with kubeadm

Pitfalls hit using kubernetes inside China

good: Installing Kubernetes 1.8 on CentOS 7 with kubeadm


A workaround for gcr.io/google_container/*** image download failures (untested here)

gcr.io registry proxy

How to install Kubernetes happily inside China

Building a k8s cluster and deploying nginx with cross-network access