Install and Configure Kubernetes (k8s) on Debian 10

来自linux中国网wiki
= Approach (2020) =

On a machine outside the firewall, pull the images down, then push them to your own hub.docker.com account; on the internal machine, pull them back down and re-tag them.

Use the reference below, then turn the steps into a script:

[https://blog.csdn.net/shykevin/article/details/98811021 ubuntu 使用阿里云镜像源快速搭建kubernetes 1.15.2集群]

When initializing, point at the aliyun mirrors. The original command pinned v1.17.1; I changed it to the newer version:

    kubeadm init --apiserver-advertise-address=192.168.11.184 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=all  --kubernetes-version v1.17.3 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16

Problem 3: one loose end — disable version detection

 kubeadm init --kubernetes-version=v1.11.3

Otherwise kubeadm tries to fetch a file hosted outside the firewall to look up the version, and hangs there as well.

After that you can play with k8s happily; it really is good, and all this effort is not wasted.


Why init can still fail after the images are pulled:

kubeadm fetches https://dl.k8s.io/release/stable-1.txt to determine the latest k8s version, and reaching that URL requires getting past the firewall. If it is unreachable, kubeadm falls back to the kubeadm client's own version as the version to install (check it with kubeadm version). You can also pin the version explicitly with --kubernetes-version.

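The fallback described above can be made explicit in a small wrapper, so kubeadm never needs to probe dl.k8s.io at all. A minimal sketch; the `pick_k8s_version` helper name is mine, and on a real node the fallback value would come from `kubeadm version -o short`:

```shell
#!/bin/sh
# Decide which --kubernetes-version to pass to kubeadm init, without
# letting kubeadm probe https://dl.k8s.io/release/stable-1.txt.
#   $1: explicitly pinned version (may be empty)
#   $2: fallback version (e.g. the kubeadm client version)
pick_k8s_version() {
    pinned="$1"
    fallback="$2"
    if [ -n "$pinned" ]; then
        echo "$pinned"      # an explicit pin always wins
    else
        echo "$fallback"    # mirror kubeadm's own fallback behaviour
    fi
}

# On a real master the fallback would be taken from the client itself:
#   fallback=$(kubeadm version -o short)    # e.g. v1.17.3
pick_k8s_version "" "v1.17.3"
pick_k8s_version "v1.17.1" "v1.17.3"
```

The chosen value is then passed as `--kubernetes-version` on the `kubeadm init` line shown above.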
=[[Docker国内镜像的配置及使用]]=

= Preparing the images =
<pre>
# If you use the aliyun mirrors you can skip all of this; kubeadm init
# pulls the images by itself.

root@k8s-master:~# kubeadm config images list
W0304 10:05:03.567343   26153 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0304 10:05:03.567442   26153 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.3
k8s.gcr.io/kube-controller-manager:v1.17.3
k8s.gcr.io/kube-scheduler:v1.17.3
k8s.gcr.io/kube-proxy:v1.17.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

# The list above tells us what is needed. The plan: find an overseas server,
# pull the images there, push them to our own registry, pull them back on the
# inside, then re-tag. No private registry is fine too -- I have already
# pushed 1.15.1 to hub.docker.com.

# On a server that can reach k8s.gcr.io:

docker pull k8s.gcr.io/kube-apiserver:v1.17.3
docker pull k8s.gcr.io/kube-controller-manager:v1.17.3
docker pull k8s.gcr.io/kube-scheduler:v1.17.3
docker pull k8s.gcr.io/kube-proxy:v1.17.3
docker pull k8s.gcr.io/pause:3.1
docker pull k8s.gcr.io/etcd:3.4.3-0
docker pull k8s.gcr.io/coredns:1.6.5

docker login   # as evan886

# Re-tag under your own hub.docker.com namespace:
docker tag k8s.gcr.io/kube-apiserver:v1.17.3 evan886/kube-apiserver:v1.17.3
docker tag k8s.gcr.io/kube-controller-manager:v1.17.3 evan886/kube-controller-manager:v1.17.3
docker tag k8s.gcr.io/kube-scheduler:v1.17.3 evan886/kube-scheduler:v1.17.3
docker tag k8s.gcr.io/kube-proxy:v1.17.3 evan886/kube-proxy:v1.17.3
docker tag k8s.gcr.io/pause:3.1 evan886/pause:3.1
docker tag k8s.gcr.io/etcd:3.4.3-0 evan886/etcd:3.4.3-0
docker tag k8s.gcr.io/coredns:1.6.5 evan886/coredns:1.6.5

# Push them to your own hub.docker account:
docker push evan886/kube-apiserver:v1.17.3
docker push evan886/kube-controller-manager:v1.17.3
docker push evan886/kube-scheduler:v1.17.3
docker push evan886/kube-proxy:v1.17.3
docker push evan886/pause:3.1
docker push evan886/etcd:3.4.3-0
docker push evan886/coredns:1.6.5

# On the k8s master, pull them back:
docker pull evan886/etcd:3.4.3-0
docker pull evan886/coredns:1.6.5
docker pull evan886/kube-proxy:v1.17.3

# ...and tag them back to k8s.gcr.io:
docker tag evan886/kube-proxy:v1.17.3 k8s.gcr.io/kube-proxy:v1.17.3
docker tag evan886/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag evan886/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5


# A script version (could be optimized further). Running it looks like this:

MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers
#registry.cn-hangzhou.aliyuncs.com/google-images
VERSION=v1.11.3

## Pull the images
docker pull ${MY_REGISTRY}/kube-apiserver-amd64:${VERSION}
docker pull ${MY_REGISTRY}/kube-controller-manager-amd64:${VERSION}
docker pull ${MY_REGISTRY}/kube-scheduler-amd64:${VERSION}
docker pull ${MY_REGISTRY}/kube-proxy-amd64:${VERSION}
docker pull ${MY_REGISTRY}/etcd-amd64:3.2.18
docker pull ${MY_REGISTRY}/pause-amd64:3.1
docker pull ${MY_REGISTRY}/coredns:1.1.3
docker pull ${MY_REGISTRY}/pause:3.1

## Add the tags
docker tag ${MY_REGISTRY}/kube-apiserver-amd64:${VERSION} k8s.gcr.io/kube-apiserver-amd64:${VERSION}
docker tag ${MY_REGISTRY}/kube-scheduler-amd64:${VERSION} k8s.gcr.io/kube-scheduler-amd64:${VERSION}
docker tag ${MY_REGISTRY}/kube-controller-manager-amd64:${VERSION} k8s.gcr.io/kube-controller-manager-amd64:${VERSION}
docker tag ${MY_REGISTRY}/kube-proxy-amd64:${VERSION} k8s.gcr.io/kube-proxy-amd64:${VERSION}
docker tag ${MY_REGISTRY}/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag ${MY_REGISTRY}/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag ${MY_REGISTRY}/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker tag ${MY_REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1

# Notes:
# - Each release needs its specific image versions; if you track kubeadm and
#   kubectl over time, remember to maintain this image list.
# - If you use a proxy instead, note that
#       http_proxy=<proxy address>:<proxy port> docker pull
#   does NOT work; the docker daemon itself must be made aware of the proxy.
#   This is a common trap, but not a docker design flaw: image pulls are
#   performed by the docker daemon process, so the proxy must apply to that
#   process.
</pre>
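The pull/tag/push boilerplate above can be generated from the `kubeadm config images list` output instead of being maintained by hand. A sketch that only prints the commands, so it is safe to run anywhere; `evan886` stands in for your own hub.docker.com namespace:

```shell
#!/bin/sh
# Given k8s.gcr.io image names on stdin (as printed by
# `kubeadm config images list`), emit the docker commands for the
# mirror round-trip described above.
HUB_USER=evan886   # assumption: your own hub.docker.com account

gen_mirror_cmds() {
    while read -r img; do
        name=${img#k8s.gcr.io/}               # e.g. kube-proxy:v1.17.3
        echo "docker pull $img"
        echo "docker tag $img $HUB_USER/$name"
        echo "docker push $HUB_USER/$name"
    done
}

# Normally: kubeadm config images list | gen_mirror_cmds
printf '%s\n' \
    k8s.gcr.io/kube-proxy:v1.17.3 \
    k8s.gcr.io/pause:3.1 | gen_mirror_cmds
```

Piping the generated lines through `sh` (or saving them to a script) runs the round-trip on the overseas machine; the matching pull/re-tag commands for the internal machine follow the same pattern with the registries swapped.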
=info=
<pre>
cat >>/etc/hosts <<EOF
192.168.11.184  k8s-master
192.168.88.31   k8s-node1
192.168.88.32   k8s-node2
EOF

# Requirements:
# - Every machine has at least 2 GB of RAM and 2 CPUs.
# - All machines in the cluster can reach each other over the network.
# - The required ports are open, see:
#   https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports

# Kubernetes requires every machine in the cluster to have a distinct MAC
# address, product UUID and hostname. Check them with:

# UUID
cat /sys/class/dmi/id/product_uuid

# MAC addresses
ip link
</pre>
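Re-running the heredoc above keeps appending duplicate lines to /etc/hosts. A small guard that only appends missing entries; sketched against a file-path argument so it can be tried on a scratch copy before touching the real file:

```shell
#!/bin/sh
# Append "IP  NAME" to a hosts-style file only if NAME is not already there.
# Usage: add_host <file> <ip> <name>
add_host() {
    file=$1 ip=$2 name=$3
    if ! grep -qw "$name" "$file" 2>/dev/null; then
        printf '%s  %s\n' "$ip" "$name" >> "$file"
    fi
}

# Try it on a scratch file rather than /etc/hosts directly:
f=$(mktemp)
add_host "$f" 192.168.11.184 k8s-master
add_host "$f" 192.168.11.184 k8s-master   # second call is a no-op
cat "$f"
rm -f "$f"
```

Calling it with `/etc/hosts` (as root) on each node gives the same result as the heredoc, but stays idempotent.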
= Getting past the firewall =
[[Ubuntu利用shadowsocks和polipo终端翻墙]]

= Preparation =
An offline setup is also an option, good enough for learning first:

[[K8s镜像]]

[[使用kubeadm离线部署kubernetesv1.9.0]]

= Set hostname and update the hosts file =
<pre>
# Run the matching command on each machine:
sudo hostnamectl set-hostname "k8s-master"
sudo hostnamectl set-hostname k8s-node1
sudo hostnamectl set-hostname k8s-node2

# Then add the hosts entries from the info section to /etc/hosts on all
# three systems.
</pre>
= Terminal proxy =
[[Debian利用shadowsocks和polipo终端代理翻墙]]
<pre>
cat /etc/profile
# Best approach: install Privoxy on one machine (installing it on the kali
# box is enough) and point everything at it; the other machines then do not
# need their own ss/polipo setup (polipo itself may be abandoned).
# Fill in the IP of the machine running Privoxy here:
export http_proxy="http://PrivoxyIP:8118/"
export https_proxy=$http_proxy
#export no_proxy="localhost,127.0.0.1,192.168.88.58,10.96.0.0,10.224.0.0"
export no_proxy="localhost,127.0.0.1,192.168.88.58,10.96.0.0,10.224.0.0,10.224.*"
</pre>

If you would rather not use a proxy at all, see [https://www.cnblogs.com/RainingNight/p/using-kubeadm-to-create-a-cluster-1-12.html 使用Kubeadm搭建Kubernetes(1.12.2)集群]

= Install docker =
#For docker-compose, just use the official binary release.

== Debian 9 or 10 ==

=== 1. Install using the repository on Debian ===
<pre>
# Aug 17 2021

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io


# On kali 2020, pin the repo to a Debian release name, since kali-rolling is
# not served by download.docker.com:
cat /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian  buster  stable
#deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian  kali-rolling stable


# Older method (apt-key based):
apt install software-properties-common

apt-get remove docker docker-engine docker.io containerd runc

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common -y

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian \
  $(lsb_release -cs) \
  stable"
apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
</pre>
https://docs.docker.com/engine/install/debian/

https://docs.docker.com/install/linux/docker-ce/debian/

=== 2. Install from a package on Debian ===

Go to https://download.docker.com/linux/debian/dists/, choose your Debian version, browse to pool/stable/, choose either amd64 or armhf, and download the .deb file for the Docker CE version you want to install.

I am on stretch, so:
 apt install libltdl7

http://mirrors.aliyun.com/docker-ce/linux/debian/dists/stretch/pool/stable/amd64/

[[Docker入门]]

= Docker proxy setup =
<pre>
# Do not drop the leading [Service] header. Also verify the proxy actually
# works afterwards -- yesterday the other two machines embarrassingly had
# 127.0.0.1 written here instead of the proxy host.

mkdir -p /etc/systemd/system/docker.service.d
vi /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTPS_PROXY=http://192.168.10.158:8118/" "HTTP_PROXY=http://192.168.10.158:8118/" "NO_PROXY=localhost,127.0.0.1,192.168.88.67,10.96.0.0,10.224.0.0"
#Environment="HTTPS_PROXY=http://127.0.0.1:8123/" "HTTP_PROXY=http://127.0.0.1:8123/" "NO_PROXY=localhost,127.0.0.1,192.168.88.67,10.96.0.0,10.224.0.0"
#Environment="HTTP_PROXY=http://proxy.example.com:80/" "HTTPS_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"

systemctl daemon-reload
systemctl restart docker
systemctl enable docker

# Verify the daemon picked up the proxy:
systemctl show --property=Environment docker


# Other:
evan@k8s-master:~$ sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
</pre>
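Writing the drop-in by hand with vi is exactly where the missing `[Service]` header mistake above came from. A heredoc version of the same configuration; it targets a directory variable so you can dry-run it somewhere other than /etc first (the proxy address is the example one from above):

```shell
#!/bin/sh
# Write the docker http-proxy systemd drop-in. DROPIN_DIR defaults to the
# real location; point it elsewhere to dry-run.
DROPIN_DIR=${DROPIN_DIR:-/etc/systemd/system/docker.service.d}

write_proxy_dropin() {
    proxy=$1
    mkdir -p "$DROPIN_DIR"
    cat > "$DROPIN_DIR/http-proxy.conf" <<EOF
[Service]
Environment="HTTP_PROXY=$proxy" "HTTPS_PROXY=$proxy" "NO_PROXY=localhost,127.0.0.1,10.96.0.0,10.224.0.0"
EOF
}

# Dry run into a temp dir:
DROPIN_DIR=$(mktemp -d)
write_proxy_dropin "http://192.168.10.158:8118/"
cat "$DROPIN_DIR/http-proxy.conf"

# After writing the real file, reload and restart as shown above:
#   systemctl daemon-reload && systemctl restart docker
```

Because the heredoc always emits the `[Service]` line, the drop-in cannot be written without it.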
[https://docs.docker.com/config/daemon/systemd/ docker http-proxy]

[https://www.jianshu.com/p/1cb70b8ea2d7 docker 代理设置]

[https://blog.frognew.com/2017/01/docker-http-proxy.html docker代理配置-透过代理服务器pull镜像]

[http://silenceper.com/blog/201809/over-the-wall-pull-docker-mirror/ docker pull 翻墙下载镜像]

[https://blog.csdn.net/northeastsqure/article/details/60143144 docker设置代理]

[https://www.cnblogs.com/atuotuo/p/7298673.html docker - 设置HTTP/HTTPS 代理]

= Install on all nodes =
<pre>
swapoff -a; sudo usermod -a -G docker $USER

sudo apt update && sudo apt install apt-transport-https ca-certificates curl software-properties-common -y

# Change the docker cgroup driver to systemd.
#
# Per the CRI installation docs, on distributions that use systemd as the
# init system, using systemd as docker's cgroup driver makes nodes more
# stable under resource pressure, so change docker's cgroup driver to
# systemd on every node.

# Create or edit /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Restart docker and verify:
systemctl restart docker

docker info | grep Cgroup
 Cgroup Driver: systemd


# apt sources, domestic (China) variant:
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.huaweicloud.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt install gnupg -y
curl -s https://mirrors.huaweicloud.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Update the index and install kubernetes:
sudo apt update
sudo apt install -y kubeadm kubelet kubectl


# Upstream variant:
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl


# Do NOT start kubelet before kubeadm init:
#systemctl start kubelet && systemctl enable kubelet.service


# If kubelet refuses to start: the kubelet cgroup driver did not match
# docker's. docker defaults to cgroupfs, kubelet defaults to systemd.
# https://kubernetes.io/docs/setup/cri/

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

systemctl daemon-reload
systemctl restart docker


# This variant (modified) worked on Ubuntu 18.04:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet.service
</pre>

= Configure the kubelet cgroup driver on the master node =
<pre>
When using Docker, kubeadm automatically detects the cgroup driver for the
kubelet at runtime and writes it into /var/lib/kubelet/kubeadm-flags.env.

If you use a different CRI, set the cgroup-driver entry in
/etc/default/kubelet to the matching value, like this:

KUBELET_EXTRA_ARGS=--cgroup-driver=<value>

This file is read by kubeadm init and kubeadm join to supply extra user
arguments to the kubelet.

Note that you only need to do this when your cgroup driver is not cgroupfs,
because cgroupfs is already the kubelet default.

systemctl daemon-reload; systemctl restart kubelet   # kubelet must be restarted


/etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# A mismatch shows up as:
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

# What I had in 2020:
evan@k8s-master:~$ cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf
</pre>
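To check which driver kubeadm actually configured, you can pull the `--cgroup-driver` value out of kubeadm-flags.env and compare it with what `docker info` reports. A sketch of the parsing half, fed a sample line here; on a real master you would read /var/lib/kubelet/kubeadm-flags.env:

```shell
#!/bin/sh
# Extract the value of --cgroup-driver from a KUBELET_KUBEADM_ARGS line
# read on stdin.
cgroup_driver_of() {
    sed -n 's/.*--cgroup-driver=\([^ "]*\).*/\1/p'
}

sample='KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni'
echo "$sample" | cgroup_driver_of    # prints: systemd

# On a real node, compare the two sides:
#   cgroup_driver_of < /var/lib/kubelet/kubeadm-flags.env
#   docker info 2>/dev/null | grep 'Cgroup Driver'
```

If the two values differ, you are in the restart-loop situation described in the trouble sections below this page.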
= Initialize the master =
<pre>
# With the aliyun registry there is no need for a proxy:
kubeadm init --apiserver-advertise-address=192.168.11.184 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=all  --kubernetes-version v1.17.1 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16


# 14:25:52--14:47:55; kubelet was in fact not started before init
kubeadm init --apiserver-advertise-address=192.168.88.30 --pod-network-cidr=10.224.0.0/16  # --apiserver-advertise-address=<master ip>

# Join command printed by init:
kubeadm join 192.168.88.58:6443 --token fuwhe0.ro0c8u82u4xtmn8q \
    --discovery-token-ca-cert-hash sha256:83bd9c19486c44fde674f4ccf0a7382848cd7bfeff8c361d54e7a2955a4dbd60


Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf


# A useful trick: while init runs, open another terminal and run
journalctl -f -u kubelet.service
# to see exactly what it is stuck on.
</pre>

= Configure kubectl credentials =
<pre>
cat /etc/sudoers.d/evan
echo 'evan ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/evan

su - evan
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "source <(kubectl completion bash)" >> ~/.bashrc
exit

# For root this step cannot be skipped, otherwise:
#   kubectl apply -f kube-flannel.yml
#   The connection to the server localhost:8080 was refused - did you specify the right host or port?
export KUBECONFIG=/etc/kubernetes/admin.conf
# Or put it straight into ~/.bash_profile:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile</pre>
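The three copy/chown lines above can be wrapped so they are safe to re-run. A sketch that takes the source file and home directory as parameters, so it can be exercised against scratch paths; on the real master the source is /etc/kubernetes/admin.conf and the chown step (commented here) is required:

```shell
#!/bin/sh
# Install an admin.conf as a user kubeconfig.
# Usage: setup_kubeconfig <src> <home>
setup_kubeconfig() {
    src=$1 home=$2
    mkdir -p "$home/.kube"
    cp "$src" "$home/.kube/config"
    # On a real system, also fix ownership for the target user:
    #   chown $(id -u):$(id -g) "$home/.kube/config"
}

# Exercise against scratch paths:
tmp=$(mktemp -d)
echo 'apiVersion: v1' > "$tmp/admin.conf"
setup_kubeconfig "$tmp/admin.conf" "$tmp/home"
cat "$tmp/home/.kube/config"
rm -rf "$tmp"
```

Running it again simply overwrites the config with a fresh copy, which is exactly what you want after a `kubeadm reset` / re-init cycle.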
= Install the pod network on the master =
<pre># As the normal user, no proxy needed:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml </pre>
= Add the nodes =
Turn the proxy off first and use a fresh terminal window.
<pre>  # on all nodes
kubeadm join 192.168.88.58:6443 --token fuwhe0.ro0c8u82u4xtmn8q \
    --discovery-token-ca-cert-hash sha256:83bd9c19486c44fde674f4ccf0a7382848cd7bfeff8c361d54e7a2955a4dbd60


evan@k8s-master:~$ kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
k8s    NotReady   master   5h12m   v1.14.2
u16    NotReady   <none>   106m    v1.14.2

evan@k8s-master:~$ kubectl get pod --all-namespaces
NAMESPACE    NAME                          READY  STATUS              RESTARTS  AGE
kube-system  coredns-fb8b8dccf-nprqq       0/1    Terminating         16        5h11m
kube-system  coredns-fb8b8dccf-qn85f       0/1    Pending             0         5m4s
kube-system  coredns-fb8b8dccf-sgtw4       0/1    Terminating         16        5h11m
kube-system  coredns-fb8b8dccf-wsnkg       0/1    Pending             0         5m5s
kube-system  etcd-k8s                      1/1    Running             0         5h11m
kube-system  kube-apiserver-k8s            1/1    Running             0         5h11m
kube-system  kube-controller-manager-k8s   1/1    Running             0         5h11m
kube-system  kube-flannel-ds-amd64-8vvn6   0/1    Init:0/1            0         107m
kube-system  kube-flannel-ds-amd64-q92vz   1/1    Running             0         112m
kube-system  kube-proxy-85vkt              0/1    ContainerCreating   0         107m
kube-system  kube-proxy-fr7lv              1/1    Running             0         5h11m
kube-system  kube-scheduler-k8s            1/1    Running             0         5h11m


evan@k8s-master:~$ kubectl describe pod kube-proxy-85vkt --namespace=kube-system
Name:               kube-proxy-85vkt
Namespace:          kube-system
Priority:           2000001000
PriorityClassName:  system-node-critical
Node:               u16/192.168.88.66
****

Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Normal   Scheduled               109m                  default-scheduler  Successfully assigned kube-system/kube-proxy-85vkt to u16
  Normal   Pulling                 108m                  kubelet, u16       Pulling image "k8s.gcr.io/kube-proxy:v1.14.2"
  Normal   Pulled                  107m                  kubelet, u16       Successfully pulled image "k8s.gcr.io/kube-proxy:v1.14.2"
  Normal   Created                 107m                  kubelet, u16       Created container kube-proxy
  Normal   Started                 107m                  kubelet, u16       Started container kube-proxy
  Warning  FailedCreatePodSandBox  52m (x119 over 107m)  kubelet, u16       Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

# After sitting overnight it was still broken in the morning; then it
# suddenly came good:

evan@ubuntu18:~$ kubectl get pod --all-namespaces
NAMESPACE    NAME                               READY  STATUS   RESTARTS  AGE
kube-system  coredns-fb8b8dccf-2rbwc            1/1    Running  3         18h
kube-system  coredns-fb8b8dccf-67zc2            1/1    Running  3         18h
kube-system  etcd-ubuntu18                      1/1    Running  10        18h
kube-system  kube-apiserver-ubuntu18            1/1    Running  4         18h
kube-system  kube-controller-manager-ubuntu18   1/1    Running  5         18h
kube-system  kube-flannel-ds-amd64-b6bn8        1/1    Running  45        16h
kube-system  kube-flannel-ds-amd64-v9wxm        1/1    Running  46        16h
kube-system  kube-flannel-ds-amd64-zn4xd        1/1    Running  3         16h
kube-system  kube-proxy-d7pmb                   1/1    Running  4         18h
kube-system  kube-proxy-gcddr                   1/1    Running  0         16h
kube-system  kube-proxy-lv8cb                   1/1    Running  0         16h
kube-system  kube-scheduler-ubuntu18            1/1    Running  5         18h


# Using the master as a node too (here the master hostname is ubuntu18):
evan@ubuntu18:~$ kubectl taint node ubuntu18 node-role.kubernetes.io/master-
node/ubuntu18 untainted

# master only (restore the taint):
kubectl taint node ubuntu18 node-role.kubernetes.io/master="":NoSchedule
</pre>

= Using the master as a node =
<pre>
[root@master tomcat]# hostname
master
[root@master tomcat]# kubectl taint node master node-role.kubernetes.io/master-
node/master untainted </pre>

= Is the proxy still needed from here on? =
= Chapter 4: k8s architecture =
<pre>
# kubelet is the only k8s component that does not run as a container:
evan@k8s-master:~$ sudo systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2019-05-27 07:26:18 UTC; 21min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 817 (kubelet)
    Tasks: 19 (limit: 3499)
   CGroup: /system.slice/kubelet.service
           └─817 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf -


# Issue a create-application request on the master node. Here we create an
# application named httpd-app, image httpd, with two replica pods:
evan@k8s-master:~$ kubectl run httpd-app --image=httpd --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/httpd-app created

evan@k8s-master:~$ kubectl get deployment
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
httpd-app   0/2     2            0           103s

evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY  STATUS             RESTARTS  AGE    IP      NODE       NOMINATED NODE  READINESS GATES
httpd-app-6df58645c6-bvg9w   0/1    ContainerCreating  0         2m10s  <none>  k8s-node1  <none>          <none>
httpd-app-6df58645c6-n9xdj   0/1    ContainerCreating  0         2m10s  <none>  k8s-node2  <none>          <none>

evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY  STATUS             RESTARTS  AGE    IP          NODE       NOMINATED NODE  READINESS GATES
httpd-app-6df58645c6-bvg9w   0/1    ContainerCreating  0         3m58s  <none>      k8s-node1  <none>          <none>
httpd-app-6df58645c6-n9xdj   1/1    Running            0         3m58s  10.224.1.2  k8s-node2  <none>          <none>

# Done -- both pods running:
evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY  STATUS   RESTARTS  AGE   IP          NODE       NOMINATED NODE  READINESS GATES
httpd-app-6df58645c6-bvg9w   1/1    Running  0         6m8s  10.224.2.3  k8s-node1  <none>          <none>
httpd-app-6df58645c6-n9xdj   1/1    Running  0         6m8s  10.224.1.2  k8s-node2  <none>          <none>
</pre>

= Next: turn off ss, the docker proxy, and polipo =
= Chapter 5: run apps =
<pre>
evan@k8s-master:~$ kubectl run nginx-deployment --image=nginx:1.7.9 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deployment created

# The command above deploys the Deployment nginx-deployment with two
# replicas, using container image nginx:1.7.9.

# After waiting a while:
kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           36m

# Next, use kubectl describe deployment for more detail.
</pre>

= While waiting =
<pre>
sudo sslocal -c /root/shadowsocks.json -d start
sslocal -c shadowsocks.json -d start
</pre>
= Going further =

[https://blog.csdn.net/shida_csdn/article/details/83176735 K8S 源码探秘 之 kubeadm init 执行流程分析]

[https://blog.csdn.net/m0_37556444/article/details/86494791 kubeadm--init]

[https://www.jianshu.com/p/c01ba5bd1359?utm_campaign=maleskine&utm_content=note&utm_medium=seo_notes&utm_source=recommendation 安装k8s Master高可用集群]

=What is new=
In Kubernetes 1.11, CoreDNS reached GA for DNS-based service discovery and became usable as a replacement for the kube-dns addon. This means CoreDNS will be offered as an option in future releases of the various install tools. In fact, the kubeadm team chose it as the default for Kubernetes 1.11.

[https://blog.csdn.net/k8scaptain/article/details/81033095 CoreDNS正式GA | kube-dns与CoreDNS有何差异?]

[https://juejin.im/post/5b46100de51d4519105d37e3 k8s集群配置使用coredns代替kube-dns]

=trouble=
== 2020 ==
<pre>
# Fix: switch to a domestic package/image source.

[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
</pre>

== Kubernetes service does not start ==
<pre>
After a reboot the kubelet service was down. Checklist:

1. vim /etc/fstab
   # comment out the swap line

2. Add the KUBELET_CGROUP_ARGS and KUBELET_EXTRA_ARGS parameters to
   /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

3. Make sure they also appear in the start command, like so:

[Service]
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

systemctl daemon-reload
systemctl restart kubelet
</pre>
== trouble 2: broken after a reboot ==
<pre>
# With a domestic source, pin the version on kubeadm init. Note the package
# version carries a trailing -00 at install time; the version string is used
# without the -00 elsewhere.

# Why did a simple reboot break it?

systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Fri 2019-05-24 20:27:22 CST; 1s ago
     Docs: https://kubernetes.io/docs/home/
  Process: 1889 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (cod
 Main PID: 1889 (code=exited, status=255)


kubelet.service: Main process exited, code=exited, status=255


journalctl -xefu kubelet

# Root cause: the kubelet cgroup driver differed from docker's. docker
# defaults to cgroupfs, kubelet defaults to systemd.
# In short: before kubeadm init, kubelet keeps restarting in a loop.


[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'


# If cluster initialization runs into trouble, clean up and initialize again:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
</pre>

[https://segmentfault.com/q/1010000015988481 K8S 初始化问题,有哪位遇到过,求解!timed out waiting for the condition]

== trouble 3 ==
<pre>
evan@k8s-master:~$ docker pull gcr.io/kubernetes-helm/tiller:v2.14.0
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=gcr.io%2Fkubernetes-helm%2Ftiller&tag=v2.14.0: dial unix /var/run/docker.sock: connect: permission denied

sudo usermod -a -G docker $USER   # add the normal user to the docker group
</pre>
[https://www.cnblogs.com/informatics/p/8276172.html Docker pull Get Permission Denied]

== trouble 4 ==
DNS 223.6.6.6 is sometimes flaky for docker; 8.8.4.4 is recommended instead.

=see also=

[http://www.jobbible.cn/2019/06/18/205/ 在国内使用阿里云镜像源搭建Kubernetes环境]

[https://www.jianshu.com/p/21a39ee86311?utm_campaign=maleskine&utm_content=note&utm_medium=seo_notes&utm_source=recommendation ubuntu 离线搭建Kubenetes1.9.2 集群]

[https://www.cnblogs.com/RainingNight/p/using-kubeadm-to-create-a-cluster-1-12.html 使用Kubeadm搭建Kubernetes(1.12.2)集群]

[https://www.debian.cn/archives/3076 Debian 9 使用kubeadm创建 k8s 集群(上)]

[https://www.debian.cn/archives/3078 Debian 9 使用kubeadm创建 k8s 集群(下)]

[https://www.linuxtechi.com/install-configure-kubernetes-ubuntu-18-04-ubuntu-18-10/ Install and Configure Kubernetes (k8s) 1.13 on Ubuntu 18.04 LTS / Ubuntu 18.10]

[https://www.kubernetes.org.cn/4387.html Ubuntu 18.04 离线安装Kubernetes v1.11.1]

[https://www.cnblogs.com/Leo_wl/p/8511902.html 安装部署 Kubernetes 集群]

https://www.kubernetes.org.cn/course/install

[[Install and Configure Kubernetes (k8s) on ubuntu]]

[https://my.oschina.net/Kanonpy/blog/3006129 kubernetes部署(kubeadm国内镜像源)]

[https://zhuanlan.zhihu.com/p/83254020 Debian 10中部署Kubernetes]

[https://www.cnblogs.com/xuxinkun/p/11025020.html docker/kubernetes国内源/镜像源解决方式]

[https://cloud.tencent.com/developer/article/1461571 k8s常见报错解决--持续更新]

[https://blog.magichc7.com/post/how-to-install-kubernetes-in-China.html 如何在国内安装K8S]

[https://my.oschina.net/u/4657223/blog/4695879 Kubernetes动手系列:手把手教你10分钟快速部署集群]

[[category:k8s]] [[category:容器]] [[category: container]]

2022年7月7日 (四) 06:30的最新版本

思路2020

先在围墙外的机器 pull下来 然后 push到自己的hub.docker 最后在内网的机器再pull 下来 再tag一下

参考一下 然后写成脚本吧 ubuntu 使用阿里云镜像源快速搭建kubernetes 1.15.2集群

初始化时 指定aliyun  mirrors  本来是指定 1。17。1版本的 我改了新的
   kubeadm init --apiserver-advertise-address=192.168.11.184 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=all  --kubernetes-version v1.17.3 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16


Problem 3: 一个小尾巴,关闭版本探测

kubeadm init --kubernetes-version=v1.11.3

否则kubeadm会访问一个墙外的文件,找这个版本, 也会卡住。

然后就可以愉快的玩k8s了,真呀嘛真好用,不浪费这一番折腾。


pull images 后还init还不成功的原因 

程序会访问https://dl.k8s.io/release/stable-1.txt获取最新的k8s版本,访问这个连接需要FQ,如果无法访问,则会使用kubeadm client的版本作为安装的版本号,使用kubeadm version查看client版本。也可以使用--kubernetes-version明确指定版本

Docker国内镜像的配置及使用

images 准备

#不过如果用aliyun mirrors 应该也不用理这个的 初始化是会自己拉
root@k8s-master:~# kubeadm config images list
W0304 10:05:03.567343   26153 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0304 10:05:03.567442   26153 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.3
k8s.gcr.io/kube-controller-manager:v1.17.3
k8s.gcr.io/kube-scheduler:v1.17.3
k8s.gcr.io/kube-proxy:v1.17.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

由上面的 list 得知 
好办,我们先找台海外服务器,把相应的镜像拉下来,推到我们自己的私有仓库里,再pull,然后改tag。没有私有仓库也不要紧,我已经把1.15.1推到hub.docker.com了。

找一台能连接k8s.gcr.io的服务器:

docker pull k8s.gcr.io/kube-apiserver:v1.17.3
docker pull  k8s.gcr.io/kube-controller-manager:v1.17.3
docker pull  k8s.gcr.io/kube-scheduler:v1.17.3
docker pull  k8s.gcr.io/kube-proxy:v1.17.3
docker pull  k8s.gcr.io/pause:3.1
docker pull  k8s.gcr.io/etcd:3.4.3-0
docker pull  k8s.gcr.io/coredns:1.6.5


docker login 
evan886  evan2240881

docker tag  k8s.gcr.io/kube-apiserver:v1.17.3    evan886/kube-apiserver:v1.17.3

docker push  evan886/kube-apiserver:v1.17.3


docker tag  k8s.gcr.io/kube-controller-manager:v1.17.3 evan886/kube-controller-manager:v1.17.3
docker tag  k8s.gcr.io/kube-scheduler:v1.17.3   evan886/kube-scheduler:v1.17.3  




docker tag  k8s.gcr.io/kube-proxy:v1.17.3 evan886/kube-proxy:v1.17.3
docker tag  k8s.gcr.io/pause:3.1 evan886/pause:3.1
docker tag  k8s.gcr.io/etcd:3.4.3-0  evan886/etcd:3.4.3-0
docker tag  k8s.gcr.io/coredns:1.6.5  evan886/coredns:1.6.5 

#push to your own hub.docker
docker push  evan886/kube-apiserver:v1.17.3
docker push  evan886/kube-controller-manager:v1.17.3  
docker push evan886/kube-scheduler:v1.17.3
docker push evan886/kube-proxy:v1.17.3
docker push evan886/pause:3.1
docker push  evan886/etcd:3.4.3-0
docker push evan886/coredns:1.6.5
   
   
   #on k8s master 
 docker pull evan886/etcd:3.4.3-0
 docker pull  evan886/coredns:1.6.5
 docker pull evan886/kube-proxy:v1.17.3
   
   
   re-tag back to k8s.gcr.io
  
docker tag   evan886/kube-proxy:v1.17.3  k8s.gcr.io/kube-proxy:v1.17.3
docker tag  evan886/etcd:3.4.3-0  k8s.gcr.io/etcd:3.4.3-0  
docker tag  evan886/coredns:1.6.5   k8s.gcr.io/coredns:1.6.5  
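The pull/re-tag round trip above can be condensed into one loop. A sketch, assuming the evan886 Docker Hub namespace and the v1.17.3 image list printed by kubeadm config images list; it prints the commands for review rather than running them:

```shell
#!/bin/sh
# Emit the pull and re-tag commands for every image kubeadm needs.
HUB_USER="evan886"    # assumed Docker Hub namespace; substitute your own
VERSION="v1.17.3"
IMAGES="kube-apiserver:$VERSION kube-controller-manager:$VERSION \
kube-scheduler:$VERSION kube-proxy:$VERSION pause:3.1 etcd:3.4.3-0 coredns:1.6.5"

gen_cmds() {
    for img in $IMAGES; do
        echo "docker pull $HUB_USER/$img"
        echo "docker tag  $HUB_USER/$img k8s.gcr.io/$img"
    done
}

gen_cmds    # review the output first, then pipe it to sh to execute
```

Save it as a script, check the printed commands, then run `./script.sh | sh` on the node.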



A script version (room for optimization)
Running the script looks like this:

MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers
#registry.cn-hangzhou.aliyuncs.com/google-images
VERSION=v1.11.3

## Pull the images
docker pull ${MY_REGISTRY}/kube-apiserver-amd64:${VERSION}
docker pull ${MY_REGISTRY}/kube-controller-manager-amd64:${VERSION}
docker pull ${MY_REGISTRY}/kube-scheduler-amd64:${VERSION}
docker pull ${MY_REGISTRY}/kube-proxy-amd64:${VERSION}
docker pull ${MY_REGISTRY}/etcd-amd64:3.2.18
docker pull ${MY_REGISTRY}/pause-amd64:3.1
docker pull ${MY_REGISTRY}/coredns:1.1.3
docker pull ${MY_REGISTRY}/pause:3.1

## Add the tags
docker tag ${MY_REGISTRY}/kube-apiserver-amd64:${VERSION} k8s.gcr.io/kube-apiserver-amd64:${VERSION}
docker tag ${MY_REGISTRY}/kube-scheduler-amd64:${VERSION} k8s.gcr.io/kube-scheduler-amd64:${VERSION}
docker tag ${MY_REGISTRY}/kube-controller-manager-amd64:${VERSION} k8s.gcr.io/kube-controller-manager-amd64:${VERSION}
docker tag ${MY_REGISTRY}/kube-proxy-amd64:${VERSION} k8s.gcr.io/kube-proxy-amd64:${VERSION}
docker tag ${MY_REGISTRY}/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag ${MY_REGISTRY}/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag ${MY_REGISTRY}/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker tag ${MY_REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1




        Different releases need version-specific images; if you track kubeadm and kubectl over time, take care to maintain this image list.
        If you use a proxy, note that http_proxy=<proxy address>:<proxy port> docker pull has no effect; the docker daemon itself must be made aware of the proxy. This is a pitfall, but not a docker design flaw: image pulls are managed by the docker daemon process, so the proxy must be configured for that process.


   

info

cat >>/etc/hosts <<EOF
192.168.11.184  k8s-master
192.168.88.31  k8s-node1
192.168.88.32  k8s-node2
EOF


Every machine needs at least 2 GB of RAM and 2 CPUs.
All machines in the cluster must have working network connectivity to each other.
Open the required ports; see: Check required ports https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports


Kubernetes requires every machine in the cluster to have a distinct MAC address, product uuid, and hostname. Check them with:

# UUID
 cat /sys/class/dmi/id/product_uuid

# MAC addresses
 ip link
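The two checks above can be combined into one small script; run it on every node and compare the outputs. A sketch:

```shell
#!/bin/sh
# Print this node's identity values; each must be unique across the cluster.
node_identity() {
    echo "hostname: $(hostname 2>/dev/null)"
    echo "uuid: $(cat /sys/class/dmi/id/product_uuid 2>/dev/null || echo unavailable)"
    echo "macs: $(ip link 2>/dev/null | awk '/link\/ether/ {print $2}' | tr '\n' ' ')"
}
node_identity
```

Run on each node and diff the results; any repeated value must be fixed before kubeadm init/join.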

Getting past the firewall

Ubuntu利用shadowsocks和polipo终端翻墙

pre

Consider building an offline install; you can study with it first

K8s镜像

使用kubeadm离线部署kubernetesv1.9.0

Set Hostname and update hosts file

sudo hostnamectl set-hostname "k8s-master"
sudo hostnamectl set-hostname k8s-node1
sudo hostnamectl set-hostname k8s-node2

#Add the following lines in /etc/hosts file on all three systems,

Proxy setup (getting past the firewall)

Debian利用shadowsocks和polipo终端代理翻墙


cat /etc/profile # Best approach: install Privoxy on one machine (the kali box will do; polipo may be abandoned) and point everything at that machine's IP, so the other machines need no ss/polipo of their own
# fill in the IP of the machine running Privoxy/polipo here
export http_proxy="http://PrivoxyIP:8118/"
export https_proxy=$http_proxy
#export no_proxy="localhost,127.0.0.1,192.168.88.58,10.96.0.0,10.224.0.0"
 export no_proxy="localhost,127.0.0.1,192.168.88.58,10.96.0.0,10.224.0.0,10.224.*"
 

If you would rather not use a proxy, see 使用Kubeadm搭建Kubernetes(1.12.2)集群

ins docker

#docker-compose: just use the official binary release

debian9 or 10

1. Install using the repository on debian

#Aug 17 2021 

sudo apt-get install     apt-transport-https     ca-certificates     curl     gnupg     lsb-release

  curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
 echo   "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  sudo apt-get update
  sudo apt-get install docker-ce docker-ce-cli containerd.io


For kali 2020:
cat /etc/apt/sources.list.d/docker.list 
deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian   buster  stable
#deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian   kali-rolling stable



#old 
apt  install software-properties-common

apt-get remove docker docker-engine docker.io containerd runc

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common -y 

 curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

 sudo apt-key fingerprint 0EBFCD88
 sudo add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"
 apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

https://docs.docker.com/engine/install/debian/ https://docs.docker.com/install/linux/docker-ce/debian/

2. Install from a package on debian

Go to https://download.docker.com/linux/debian/dists/, choose your Debian version, browse to pool/stable/, choose either amd64 or armhf, and download the .deb file for the Docker CE version you want to install.

I am on stretch, so:

apt install libltdl7

http://mirrors.aliyun.com/docker-ce/linux/debian/dists/stretch/pool/stable/amd64/


Docker入门

docker proxy setup


#Don't omit the [Service] section header. And remember to verify the proxy actually works: yesterday the other two machines were also pointed at 127.0.0.1, embarrassingly
mkdir -p /etc/systemd/system/docker.service.d
vi /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTPS_PROXY=http://192.168.10.158:8118/" "HTTP_PROXY=http://127.0.0.1:8118/" "NO_PROXY=localhost,127.0.0.1,192.168.88.67,10.96.0.0,10.224.0.0"
#Environment="HTTPS_PROXY=http://127.0.0.1:8123/" "HTTP_PROXY=http://127.0.0.1:8123/" "NO_PROXY=localhost,127.0.0.1,192.168.88.67,10.96.0.0,10.224.0.0"

#Environment="HTTP_PROXY=http://proxy.example.com:80/" "HTTPS_PROXY=http://proxy.example.com:80/""NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"

systemctl daemon-reload
systemctl restart docker 
systemctl enable docker

systemctl show --property=Environment docker
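To avoid the copy-paste mistake mentioned in the comment above, the drop-in can be generated from variables. A sketch: the proxy address and NO_PROXY list are placeholders for your own network, and the file is written to the current directory for review before installing it:

```shell
#!/bin/sh
# Generate the docker http-proxy.conf drop-in. Placeholder values below;
# substitute your proxy host and cluster addresses.
PROXY="http://192.168.10.158:8118/"
NO_PROXY="localhost,127.0.0.1,192.168.88.67,10.96.0.0,10.224.0.0"
cat > http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=${PROXY}" "HTTPS_PROXY=${PROXY}" "NO_PROXY=${NO_PROXY}"
EOF
# After review, on the host:
#   sudo install -D -m644 http-proxy.conf /etc/systemd/system/docker.service.d/http-proxy.conf
#   sudo systemctl daemon-reload && sudo systemctl restart docker
```

Then confirm with `systemctl show --property=Environment docker` as above.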


other
evan@k8s-master:~$ sudo systemctl enable docker 
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

docker http-proxy

docker 代理设置

docker代理配置-透过代理服务器pull镜像

docker pull 翻墙下载镜像

docker设置代理


docker - 设置HTTP/HTTPS 代理

ins, on all nodes


swapoff -a;  sudo usermod -a -G docker $USER

sudo apt update && sudo apt install apt-transport-https ca-certificates curl software-properties-common -y

Change the docker cgroup driver to systemd

Per the CRI installation document, on Linux distributions that use systemd as the init system, using systemd as docker's cgroup driver keeps nodes more stable under resource pressure, so change docker's cgroup driver to systemd on every node.

Create or edit /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart docker:

systemctl restart docker

docker info | grep Cgroup
Cgroup Driver: systemd


#Domestic (China) variant, via the Huawei Cloud mirror
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.huaweicloud.com/kubernetes/apt/ kubernetes-xenial main
EOF



apt  install gnupg  -y  
 curl -s https://mirrors.huaweicloud.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
#4. Update the package index and install kubernetes
sudo apt update
sudo apt install -y kubeadm kubelet kubectl 







apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl


#Do not start kubelet before init
#systemctl start kubelet&&  systemctl enable kubelet.service


kubelet would not start:
the kubelet cgroup driver did not match docker's. docker defaults to cgroupfs, kubelet defaults to systemd.

https://kubernetes.io/docs/setup/cri/

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

systemctl daemon-reload
systemctl restart docker







#This one was modified; it worked on 18.04
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
[Service]
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

systemctl daemon-reload && systemctl restart kubelet &&  systemctl enable kubelet.service

Configure the cgroup driver kubelet needs, on the Master node

When using Docker, kubeadm automatically detects the cgroup driver for it and writes the configuration to the /var/lib/kubelet/kubeadm-flags.env file at runtime.
If you use a different CRI, you must change the cgroup-driver value in the /etc/default/kubelet file, like this:

KUBELET_EXTRA_ARGS=--cgroup-driver=<value>

This file is used by kubeadm init and kubeadm join to pass extra user arguments to the kubelet.

Note that you only need to do this when your cgroup driver is not cgroupfs, because cgroupfs is already the kubelet default.

systemctl daemon-reload; systemctl restart kubelet # kubelet must be restarted


/etc/systemd/system/kubelet.service.d/10-kubeadm.conf


This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
#me  2020
evan@k8s-master:~$ cat /var/lib/kubelet/kubeadm-flags.env 
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf

Initialize the master


#Can use the domestic Aliyun registry; no proxy needed
kubeadm init --apiserver-advertise-address=192.168.11.184 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=all  --kubernetes-version v1.17.1 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16







 #14:25:52--14:47:55: kubelet was in fact not running before init
 kubeadm init   --apiserver-advertise-address=192.168.88.30  --pod-network-cidr=10.224.0.0/16 # --apiserver-advertise-address=masterip

kubeadm join 192.168.88.58:6443 --token fuwhe0.ro0c8u82u4xtmn8q \
    --discovery-token-ca-cert-hash sha256:83bd9c19486c44fde674f4ccf0a7382848cd7bfeff8c361d54e7a2955a4dbd60





Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf



Another small tip: while init is running, open a second terminal and run

journalctl -f -u kubelet.service

to see exactly what it is stuck on


 

Configure kubectl credentials

cat  /etc/sudoers.d/evan
echo 'evan ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/evan

su - evan 
mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "source <(kubectl completion bash)" >> ~/.bashrc
exit 

# For the root user this step cannot be skipped; otherwise:  kubectl  apply -f kube-flannel.yml  The connection to the server localhost:8080 was refused - did you specify the right host or port?

export KUBECONFIG=/etc/kubernetes/admin.conf
#or put it straight into ~/.bash_profile
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
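The per-user steps above can be collected into one script. A sketch, assuming the default admin.conf path that kubeadm init writes:

```shell
#!/bin/sh
# Copy the kubeadm-generated admin kubeconfig into the current user's
# ~/.kube/config, the path kubectl reads by default.
ADMIN_CONF="${ADMIN_CONF:-/etc/kubernetes/admin.conf}"
mkdir -p "$HOME/.kube"
if [ -r "$ADMIN_CONF" ]; then
    cp "$ADMIN_CONF" "$HOME/.kube/config"
    chown "$(id -u):$(id -g)" "$HOME/.kube/config"
    echo "kubeconfig installed to $HOME/.kube/config"
else
    echo "cannot read $ADMIN_CONF; run on the master after kubeadm init"
fi
```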

Install the pod network on master

#as a regular user; no proxy needed
 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 

Add nodes

No proxy needed now; open a fresh terminal

  # on all nodes
kubeadm join 192.168.88.58:6443 --token fuwhe0.ro0c8u82u4xtmn8q \
    --discovery-token-ca-cert-hash sha256:83bd9c19486c44fde674f4ccf0a7382848cd7bfeff8c361d54e7a2955a4dbd60 



evan@k8s-master:~$ kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
k8s    NotReady   master   5h12m   v1.14.2
u16    NotReady   <none>   106m    v1.14.2

evan@k8s-master:~$ kubectl get pod --all-namespaces
NAMESPACE     NAME                          READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-nprqq       0/1     Terminating         16         5h11m
kube-system   coredns-fb8b8dccf-qn85f       0/1     Pending             0          5m4s
kube-system   coredns-fb8b8dccf-sgtw4       0/1     Terminating         16         5h11m
kube-system   coredns-fb8b8dccf-wsnkg       0/1     Pending             0          5m5s
kube-system   etcd-k8s                      1/1     Running             0          5h11m
kube-system   kube-apiserver-k8s            1/1     Running             0          5h11m
kube-system   kube-controller-manager-k8s   1/1     Running             0          5h11m
kube-system   kube-flannel-ds-amd64-8vvn6   0/1     Init:0/1            0          107m
kube-system   kube-flannel-ds-amd64-q92vz   1/1     Running             0          112m
kube-system   kube-proxy-85vkt              0/1     ContainerCreating   0          107m
kube-system   kube-proxy-fr7lv              1/1     Running             0          5h11m
kube-system   kube-scheduler-k8s            1/1     Running             0          5h11m


evan@k8s-master:~$ kubectl describe pod  kube-proxy-85vkt  --namespace=kube-system
Name:               kube-proxy-85vkt
Namespace:          kube-system
Priority:           2000001000
PriorityClassName:  system-node-critical
Node:               u16/192.168.88.66
****

Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Normal   Scheduled               109m                  default-scheduler  Successfully assigned kube-system/kube-proxy-85vkt to u16
  Normal   Pulling                 108m                  kubelet, u16       Pulling image "k8s.gcr.io/kube-proxy:v1.14.2"
  Normal   Pulled                  107m                  kubelet, u16       Successfully pulled image "k8s.gcr.io/kube-proxy:v1.14.2"
  Normal   Created                 107m                  kubelet, u16       Created container kube-proxy
  Normal   Started                 107m                  kubelet, u16       Started container kube-proxy
  Warning  FailedCreatePodSandBox  52m (x119 over 107m)  kubelet, u16       Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Left it overnight and it was still broken in the morning; then suddenly it was fine.

evan@ubuntu18:~$ kubectl get pod --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-2rbwc            1/1     Running   3          18h
kube-system   coredns-fb8b8dccf-67zc2            1/1     Running   3          18h
kube-system   etcd-ubuntu18                      1/1     Running   10         18h
kube-system   kube-apiserver-ubuntu18            1/1     Running   4          18h
kube-system   kube-controller-manager-ubuntu18   1/1     Running   5          18h
kube-system   kube-flannel-ds-amd64-b6bn8        1/1     Running   45         16h
kube-system   kube-flannel-ds-amd64-v9wxm        1/1     Running   46         16h
kube-system   kube-flannel-ds-amd64-zn4xd        1/1     Running   3          16h
kube-system   kube-proxy-d7pmb                   1/1     Running   4          18h
kube-system   kube-proxy-gcddr                   1/1     Running   0          16h
kube-system   kube-proxy-lv8cb                   1/1     Running   0          16h
kube-system   kube-scheduler-ubuntu18            1/1     Running   5          18h



Use the master as a node too; here the master hostname is ubuntu18
evan@ubuntu18:~$ kubectl  taint node ubuntu18 node-role.kubernetes.io/master-
node/ubuntu18 untainted

#to make it master-only again
kubectl  taint node ubuntu18 node-role.kubernetes.io/master="":NoSchedule

Use the master as a node

 [root@master tomcat]# hostname
master
[root@master tomcat]# kubectl taint node master node-role.kubernetes.io/master-
node/master untainted 


From here on, perhaps no proxy is needed

chapter 4: k8s architecture

#the only k8s component that does not run as a container
evan@k8s-master:~$ sudo systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2019-05-27 07:26:18 UTC; 21min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 817 (kubelet)
    Tasks: 19 (limit: 3499)
   CGroup: /system.slice/kubelet.service
           └─817 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf -



Launch an app-creation request on the master node.
Here we create an application named httpd-app, image httpd, with two replica pods
evan@k8s-master:~$ kubectl run httpd-app --image=httpd --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/httpd-app created

evan@k8s-master:~$ kubectl get deployment
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
httpd-app   0/2     2            0           103s

evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY   STATUS              RESTARTS   AGE     IP       NODE        NOMINATED NODE   READINESS GATES
httpd-app-6df58645c6-bvg9w   0/1     ContainerCreating   0          2m10s   <none>   k8s-node1   <none>           <none>
httpd-app-6df58645c6-n9xdj   0/1     ContainerCreating   0          2m10s   <none>   k8s-node2   <none>           <none>

evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY   STATUS              RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
httpd-app-6df58645c6-bvg9w   0/1     ContainerCreating   0          3m58s   <none>       k8s-node1   <none>           <none>
httpd-app-6df58645c6-n9xdj   1/1     Running             0          3m58s   10.224.1.2   k8s-node2   <none>           <none>
#OK now
evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
httpd-app-6df58645c6-bvg9w   1/1     Running   0          6m8s   10.224.2.3   k8s-node1   <none>           <none>
httpd-app-6df58645c6-n9xdj   1/1     Running   0          6m8s   10.224.1.2   k8s-node2   <none>           <none>

Next, shut down ss, the docker proxy, and polipo

chapter 5 run apps

evan@k8s-master:~$ kubectl run nginx-deployment --image=nginx:1.7.9 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deployment created

The command above deploys a Deployment named nginx-deployment with two replicas; the container image is nginx:1.7.9.

After waiting a while:
kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           36m


Next we use kubectl describe deployment for more detailed information

Waiting

sudo  sslocal -c /root/shadowsocks.json -d start
sslocal -c shadowsocks.json -d start

Advanced

K8S 源码探秘 之 kubeadm init 执行流程分析

kubeadm--init

安装k8s Master高可用集群

What is new

In Kubernetes 1.11, DNS-based service discovery with CoreDNS reached GA and can replace the kube-dns add-on. This means CoreDNS will be offered as an option in future releases of the various install tools; in fact, the kubeadm team chose it as the default for Kubernetes 1.11.

CoreDNS正式GA | kube-dns与CoreDNS有何差异?

k8s集群配置使用coredns代替kube-dns

trouble

2020


Just switch to a domestic mirror


[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Kubernetes service does not start

After a reboot the kubelet service did not come up. First check:

 1. vim  /etc/fstab
#comment out the swap line in it.

2.
Add the KUBELET_CGROUP_ARGS and KUBELET_EXTRA_ARGS parameters to the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file,


3. Note they must also be added to the start line, as follows:
[Service]

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

systemctl daemon-reload
systemctl restart kubelet

trouble2: breaks after a simple reboot


With a domestic mirror you must pin the version for init; the package version carries a trailing -00 at install time, but the version string used with the tool does not
Why does a simple reboot break it?

systemctl  status  kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Fri 2019-05-24 20:27:22 CST; 1s ago
     Docs: https://kubernetes.io/docs/home/
  Process: 1889 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (cod
 Main PID: 1889 (code=exited, status=255)



kubelet.service: Main process exited, code=exited, status=255


journalctl -xefu kubelet

It turned out the kubelet cgroup driver differed from docker's. docker defaults to cgroupfs, kubelet defaults to systemd.


Put simply: before kubeadm init runs, kubelet keeps restarting.


[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'


If cluster initialization hits problems, clean up with the commands below and then re-initialize:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
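The cleanup steps above can be wrapped in a guarded script that is safe to re-run. A sketch: cni0 and flannel.1 are the interface names flannel creates, so adjust them for a different CNI:

```shell
#!/bin/sh
# Reset a failed kubeadm init so it can be retried; each step is guarded
# so the script is idempotent.
k8s_cleanup() {
    command -v kubeadm >/dev/null 2>&1 && kubeadm reset -f
    for ifc in cni0 flannel.1; do
        if ip link show "$ifc" >/dev/null 2>&1; then
            ip link set "$ifc" down
            ip link delete "$ifc"
        fi
    done
    rm -rf /var/lib/cni/
    echo "cleanup done"
}
k8s_cleanup
```

Run as root on the node, then re-run kubeadm init.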



K8S 初始化问题,有哪位遇到过,求解!timed out waiting for the condition

trouble3

evan@k8s-master:~$ docker pull gcr.io/kubernetes-helm/tiller:v2.14.0
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=gcr.io%2Fkubernetes-helm%2Ftiller&tag=v2.14.0: dial unix /var/run/docker.sock: connect: permission denied


    sudo usermod -a -G docker $USER #add the regular user to the docker group

Docker pull Get Permission Denied

trouble 4

docker: DNS 223.6.6.6 is sometimes unreliable; 8.8.4.4 is recommended

see also

在国内使用阿里云镜像源搭建Kubernetes环境

ubuntu 离线搭建Kubenetes1.9.2 集群

使用Kubeadm搭建Kubernetes(1.12.2)集群


Debian 9 使用kubeadm创建 k8s 集群(上)


Debian 9 使用kubeadm创建 k8s 集群(下)


Install and Configure Kubernetes (k8s) 1.13 on Ubuntu 18.04 LTS / Ubuntu 18.10

Ubuntu 18.04 离线安装Kubernetes v1.11.1

安装部署 Kubernetes 集群

https://www.kubernetes.org.cn/course/install


Install and Configure Kubernetes (k8s) on ubuntu

kubernetes部署(kubeadm国内镜像源)

Debian 10中部署Kubernetes

docker/kubernetes国内源/镜像源解决方式 k8s常见报错解决--持续更新

如何在国内安装K8S

Kubernetes动手系列:手把手教你10分钟快速部署集群