Use aliyun mirrors Install and Configure Kubernetes (k8s) on debian10
It is recommended to use the 2021 page:
[[使用阿里云镜像源快速搭建kubernetes(k8s) on debian10]]


For reference, then turn it into a script (a sketch of such a loop appears after the image list below):
[https://blog.csdn.net/shykevin/article/details/98811021 ubuntu 使用阿里云镜像源快速搭建kubernetes 1.15.2集群]


When initializing, point it at the aliyun mirrors. This originally pinned v1.17.1; I changed it to the newer version:
    kubeadm init --apiserver-advertise-address=192.168.11.184 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=all  --kubernetes-version v1.17.3 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16


=Preparing the images=
<pre>
# though with the aliyun mirrors this can probably be ignored; init pulls the images by itself
root@k8s-master:~# kubeadm config images list
W0304 10:05:03.567343  26153 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0304 10:05:03.567442  26153 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.3
k8s.gcr.io/kube-controller-manager:v1.17.3
k8s.gcr.io/kube-scheduler:v1.17.3
k8s.gcr.io/kube-proxy:v1.17.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5


The list above tells us what is needed.
Easy enough: find a server outside the wall, pull the corresponding images, push them to our own private registry, pull them back from there, and retag. No private registry is fine too; I have already pushed the 1.15.1 images to hub.docker.com.


On a server that can reach k8s.gcr.io:

docker pull k8s.gcr.io/kube-apiserver:v1.17.3
docker pull  k8s.gcr.io/kube-controller-manager:v1.17.3
docker pull  k8s.gcr.io/kube-scheduler:v1.17.3
docker pull  k8s.gcr.io/kube-proxy:v1.17.3
docker pull  k8s.gcr.io/pause:3.1
docker pull  k8s.gcr.io/etcd:3.4.3-0
docker pull  k8s.gcr.io/coredns:1.6.5


docker login
evan886  evan2240881
docker tag  k8s.gcr.io/kube-apiserver:v1.17.3    evan886/kube-apiserver:v1.17.3
docker push  evan886/kube-apiserver:v1.17.3
docker tag  k8s.gcr.io/kube-controller-manager:v1.17.3 evan886/kube-controller-manager:v1.17.3
docker tag  k8s.gcr.io/kube-scheduler:v1.17.3  evan886/kube-scheduler:v1.17.3 
docker tag  k8s.gcr.io/kube-proxy:v1.17.3 evan886/kube-proxy:v1.17.3
docker tag  k8s.gcr.io/pause:3.1 evan886/pause:3.1
docker tag  k8s.gcr.io/etcd:3.4.3-0  evan886/etcd:3.4.3-0
docker tag  k8s.gcr.io/coredns:1.6.5  evan886/coredns:1.6.5
#push to your own hub.docker account
docker push  evan886/kube-apiserver:v1.17.3
docker push  evan886/kube-controller-manager:v1.17.3 
docker push evan886/kube-scheduler:v1.17.3
docker push evan886/kube-proxy:v1.17.3
docker push evan886/pause:3.1
docker push  evan886/etcd:3.4.3-0
docker push evan886/coredns:1.6.5
 
 
  #on k8s master
docker pull evan886/etcd:3.4.3-0
docker pull  evan886/coredns:1.6.5
docker pull evan886/kube-proxy:v1.17.3
 
 
  then tag them back to k8s.gcr.io
 
docker tag  evan886/kube-proxy:v1.17.3  k8s.gcr.io/kube-proxy:v1.17.3
docker tag  evan886/etcd:3.4.3-0  k8s.gcr.io/etcd:3.4.3-0 
docker tag  evan886/coredns:1.6.5  k8s.gcr.io/coredns:1.6.5 
 
</pre>
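Since the intro suggests scripting this, here is a minimal sketch of the same pull/tag/push loop. It assumes kubeadm is installed on both machines and that you are already logged in to a Docker Hub account; the HUB_USER value is just a placeholder:
<pre>
#!/bin/bash
# Sketch: relay the k8s.gcr.io images through a Docker Hub account.
HUB_USER=evan886   # placeholder: your Docker Hub user

# On the machine that can reach k8s.gcr.io:
for img in $(kubeadm config images list 2>/dev/null); do
    name=${img#k8s.gcr.io/}            # e.g. kube-apiserver:v1.17.3
    docker pull "$img"
    docker tag  "$img" "$HUB_USER/$name"
    docker push "$HUB_USER/$name"
done

# On the k8s master: pull back from Docker Hub and retag to k8s.gcr.io
for img in $(kubeadm config images list 2>/dev/null); do
    name=${img#k8s.gcr.io/}
    docker pull "$HUB_USER/$name"
    docker tag  "$HUB_USER/$name" "$img"
done
</pre>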
=info=
<pre>
cat >>/etc/hosts <<EOF
192.168.11.184  k8s-master
192.168.88.31  k8s-node1
192.168.88.32  k8s-node2
EOF
Each machine needs at least 2GB of RAM and 2 CPUs.
All machines in the cluster need working network connectivity to one another.
Open the required ports; see [https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports Check required ports].
Kubernetes requires every machine in the cluster to have a distinct MAC address, product uuid and hostname. They can be checked with:
# UUID
cat /sys/class/dmi/id/product_uuid
# MAC address
ip link
</pre>
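To verify that uniqueness requirement across all three machines in one pass, a sketch (assumes SSH access to each host):
<pre>
# the printed values must differ between the machines
for h in k8s-master k8s-node1 k8s-node2; do
    ssh "$h" 'hostname; cat /sys/class/dmi/id/product_uuid; ip link | grep ether'
done
</pre>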
=pre=
Putting together an offline install is also worth doing; these can be studied first:
[[K8s镜像]]
[[使用kubeadm离线部署kubernetesv1.9.0]]
=Set Hostname and update hosts file=
<pre>
sudo hostnamectl set-hostname "k8s-master"
sudo hostnamectl set-hostname k8s-node1
sudo hostnamectl set-hostname k8s-node2
#Add the following lines in /etc/hosts file on all three systems,
</pre>
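The lines in question are the same host entries used in the info section above; as a sketch, on all three systems:
<pre>
cat >>/etc/hosts <<EOF
192.168.11.184  k8s-master
192.168.88.31  k8s-node1
192.168.88.32  k8s-node2
EOF
</pre>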
If you would rather not go through the wall, see [https://www.cnblogs.com/RainingNight/p/using-kubeadm-to-create-a-cluster-1-12.html 使用Kubeadm搭建Kubernetes(1.12.2)集群]
=Install docker=
[[Docker and docker-compose快速安装#on_debian]]
For docker-compose, to use the official binary package directly, see:
[[Docker入门]]
=Install, on all nodes=
<pre>
sudo swapoff -a;  sudo usermod -a -G docker $USER
sudo apt update && sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
Change the docker cgroup driver to systemd.
According to the CRI installation document, on Linux distributions that use systemd as the init system, using systemd as docker's cgroup driver keeps nodes more stable under resource pressure, so change it on every node.
Create or edit /etc/docker/daemon.json (the heredoc below writes it directly):
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
Restart docker:
systemctl restart docker
docker info | grep Cgroup
Cgroup Driver: systemd
cat  /etc/docker/daemon.json
# note: daemon.json must be a single JSON object, so the registry mirrors
# and the cgroup driver setting have to live in the same object:
{
    "registry-mirrors": [
        "https://1nj0zren.mirror.aliyuncs.com",
        "https://docker.mirrors.ustc.edu.cn",
        "http://f1361db2.m.daocloud.io",
        "https://registry.docker-cn.com"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
Otherwise you run into troubles like:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error
#China-mainland version
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.huaweicloud.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt  install gnupg  -y 
curl -s https://mirrors.huaweicloud.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
#4. update the package index and install kubernetes
sudo apt update
sudo apt install -y kubeadm kubelet kubectl
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
#do not start kubelet before init
#systemctl start kubelet&&  systemctl enable kubelet.service
These changes are harmless to skip if this is not a production environment (2020).
Symptom: kubelet would not start.
It turned out the kubelet cgroup driver did not match docker's: docker defaults to cgroupfs, kubelet defaults to systemd.
https://kubernetes.io/docs/setup/cri/
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
#this was changed; it worked on 18.04
#vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
systemctl daemon-reload && systemctl restart kubelet &&  systemctl enable kubelet.service
</pre>
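To confirm the three packages stay pinned across upgrades (a sketch):
<pre>
apt-mark showhold
</pre>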
=Optional outside production: configure the cgroup driver kubelet needs, on the Master node=
<pre>
When Docker is used, kubeadm automatically detects the cgroup driver for kubelet and writes it into /var/lib/kubelet/kubeadm-flags.env at runtime.
If you use a different CRI, you must change the cgroup-driver value in /etc/default/kubelet, like this:
KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
This file is read by kubeadm init and kubeadm join to pass extra user arguments to kubelet.
Note that you only need to do this if your cgroup driver is not cgroupfs, since cgroupfs is already kubelet's default.
systemctl daemon-reload; systemctl restart kubelet  #kubelet needs a restart
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
#me  2020
evan@k8s-master:~$ cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf
</pre>
=Initialize the master=
<pre>
#the aliyun registry can be used from mainland China; no need to cross the wall
kubeadm init --apiserver-advertise-address=192.168.11.184 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=all  --kubernetes-version v1.17.1 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16
#14:25:52--14:47:55 kubelet is in fact not running before init
kubeadm init  --apiserver-advertise-address=192.168.88.30  --pod-network-cidr=10.224.0.0/16 # --apiserver-advertise-address=masterip
kubeadm join 192.168.88.58:6443 --token fuwhe0.ro0c8u82u4xtmn8q \
    --discovery-token-ca-cert-hash sha256:83bd9c19486c44fde674f4ccf0a7382848cd7bfeff8c361d54e7a2955a4dbd60
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
A small tip: while init is running, open another terminal and run
journalctl -f -u kubelet.service
to see exactly what it is stuck on
</pre>
=Configure kubectl credentials=
<pre>
cat  /etc/sudoers.d/evan
echo 'evan ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/evan
su - evan
mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "source <(kubectl completion bash)" >> ~/.bashrc
exit
# for the root user this step must not be skipped; otherwise:  kubectl  apply -f kube-flannel.yml  The connection to the server localhost:8080 was refused - did you specify the right host or port?
export KUBECONFIG=/etc/kubernetes/admin.conf
#this can also go straight into ~/.bash_profile
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile</pre>
=Install the pod network on the master=
<pre>#as a normal user; no proxy needed
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml </pre>
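The flannel and coredns pods take a little while to come up; they can be watched with (a sketch):
<pre>
kubectl get pods -n kube-system -w
</pre>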
=Add nodes=
No proxy needed now; open a new window.
<pre>  # on  all node
kubeadm join 192.168.88.58:6443 --token fuwhe0.ro0c8u82u4xtmn8q \
    --discovery-token-ca-cert-hash sha256:83bd9c19486c44fde674f4ccf0a7382848cd7bfeff8c361d54e7a2955a4dbd60
evan@k8s-master:~$ kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
k8s    NotReady   master   5h12m   v1.14.2
u16    NotReady   <none>   106m    v1.14.2
evan@k8s-master:~$ kubectl get pod --all-namespaces
NAMESPACE     NAME                          READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-nprqq       0/1     Terminating         16         5h11m
kube-system   coredns-fb8b8dccf-qn85f       0/1     Pending             0          5m4s
kube-system   coredns-fb8b8dccf-sgtw4       0/1     Terminating         16         5h11m
kube-system   coredns-fb8b8dccf-wsnkg       0/1     Pending             0          5m5s
kube-system   etcd-k8s                      1/1     Running             0          5h11m
kube-system   kube-apiserver-k8s            1/1     Running             0          5h11m
kube-system   kube-controller-manager-k8s   1/1     Running             0          5h11m
kube-system   kube-flannel-ds-amd64-8vvn6   0/1     Init:0/1            0          107m
kube-system   kube-flannel-ds-amd64-q92vz   1/1     Running             0          112m
kube-system   kube-proxy-85vkt              0/1     ContainerCreating   0          107m
kube-system   kube-proxy-fr7lv              1/1     Running             0          5h11m
kube-system   kube-scheduler-k8s            1/1     Running             0          5h11m
evan@k8s-master:~$ kubectl describe pod  kube-proxy-85vkt  --namespace=kube-system
Name:               kube-proxy-85vkt
Namespace:          kube-system
Priority:           2000001000
PriorityClassName:  system-node-critical
Node:               u16/192.168.88.66
****
Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Normal   Scheduled               109m                  default-scheduler  Successfully assigned kube-system/kube-proxy-85vkt to u16
  Normal   Pulling                 108m                  kubelet, u16       Pulling image "k8s.gcr.io/kube-proxy:v1.14.2"
  Normal   Pulled                  107m                  kubelet, u16       Successfully pulled image "k8s.gcr.io/kube-proxy:v1.14.2"
  Normal   Created                 107m                  kubelet, u16       Created container kube-proxy
  Normal   Started                 107m                  kubelet, u16       Started container kube-proxy
  Warning  FailedCreatePodSandBox  52m (x119 over 107m)  kubelet, u16      Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Left it overnight; it was still broken in the morning, then suddenly it was fine:
evan@ubuntu18:~$ kubectl get pod --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-2rbwc            1/1     Running   3          18h
kube-system   coredns-fb8b8dccf-67zc2            1/1     Running   3          18h
kube-system   etcd-ubuntu18                      1/1     Running   10         18h
kube-system   kube-apiserver-ubuntu18            1/1     Running   4          18h
kube-system   kube-controller-manager-ubuntu18   1/1     Running   5          18h
kube-system   kube-flannel-ds-amd64-b6bn8        1/1     Running   45         16h
kube-system   kube-flannel-ds-amd64-v9wxm        1/1     Running   46         16h
kube-system   kube-flannel-ds-amd64-zn4xd        1/1     Running   3          16h
kube-system   kube-proxy-d7pmb                   1/1     Running   4          18h
kube-system   kube-proxy-gcddr                   1/1     Running   0          16h
kube-system   kube-proxy-lv8cb                   1/1     Running   0          16h
kube-system   kube-scheduler-ubuntu18            1/1     Running   5          18h
Use the master as a node too (here the master hostname is ubuntu18):
evan@ubuntu18:~$ kubectl  taint node ubuntu18 node-role.kubernetes.io/master-
node/ubuntu18 untainted
#the trailing '-' removes the NoSchedule taint; to restore it (master only):
kubectl  taint node ubuntu18 node-role.kubernetes.io/master="":NoSchedule


</pre>
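If the bootstrap token from kubeadm init has expired (by default tokens are valid for 24 hours), a fresh join command can be printed on the master:
<pre>
kubeadm token create --print-join-command
</pre>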


=Use the master as a node too=
<pre>
[root@master tomcat]# hostname
master
[root@master tomcat]# kubectl taint node master node-role.kubernetes.io/master-
node/master untainted </pre>
=Maybe the following no longer needs a proxy?=
=chapter 4  k8s architecture=
<pre>
#the only k8s component that does not run as a container
evan@k8s-master:~$ sudo systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2019-05-27 07:26:18 UTC; 21min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 817 (kubelet)
    Tasks: 19 (limit: 3499)
   CGroup: /system.slice/kubelet.service
           └─817 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf -
Issue a create-application request on the master node.
Here we create an app named httpd-app from the httpd image, with two replica pods:
evan@k8s-master:~$ kubectl run httpd-app --image=httpd --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/httpd-app created
 
evan@k8s-master:~$ kubectl get deployment
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
httpd-app   0/2     2            0           103s

evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY   STATUS              RESTARTS   AGE     IP       NODE        NOMINATED NODE   READINESS GATES
httpd-app-6df58645c6-bvg9w   0/1     ContainerCreating   0          2m10s   <none>   k8s-node1   <none>           <none>
httpd-app-6df58645c6-n9xdj   0/1     ContainerCreating   0          2m10s   <none>   k8s-node2   <none>           <none>

evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY   STATUS              RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
httpd-app-6df58645c6-bvg9w   0/1     ContainerCreating   0          3m58s   <none>       k8s-node1   <none>           <none>
httpd-app-6df58645c6-n9xdj   1/1     Running             0          3m58s   10.224.1.2   k8s-node2   <none>           <none>
#OK now
evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
httpd-app-6df58645c6-bvg9w   1/1     Running   0          6m8s   10.224.2.3   k8s-node1   <none>           <none>
httpd-app-6df58645c6-n9xdj   1/1     Running   0          6m8s   10.224.1.2   k8s-node2   <none>           <none>


</pre>


=chapter 5 run apps=
<pre>
evan@k8s-master:~$ kubectl run nginx-deployment --image=nginx:1.7.9 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deployment created

The command above deploys a Deployment named nginx-deployment with two replicas; the container image is nginx:1.7.9.

After waiting a while:
kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           36m

Next, use kubectl describe deployment for more detail.
</pre>
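For example (a sketch; the deployment name comes from the run command above):
<pre>
kubectl describe deployment nginx-deployment
</pre>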


=Advanced=

[https://blog.csdn.net/shida_csdn/article/details/83176735 K8S 源码探秘 之 kubeadm init 执行流程分析]

[https://blog.csdn.net/m0_37556444/article/details/86494791 kubeadm--init]

[https://www.jianshu.com/p/c01ba5bd1359?utm_campaign=maleskine&utm_content=note&utm_medium=seo_notes&utm_source=recommendation 安装k8s Master高可用集群]

=What is new=
In Kubernetes 1.11, CoreDNS reached GA for DNS-based service discovery and can replace the kube-dns addon, which means CoreDNS will be offered as an option in future releases of the various install tools.
In fact, the kubeadm team chose to make it the default for Kubernetes 1.11.

[https://blog.csdn.net/k8scaptain/article/details/81033095 CoreDNS正式GA | kube-dns与CoreDNS有何差异?]

[https://juejin.im/post/5b46100de51d4519105d37e3 k8s集群配置使用coredns代替kube-dns]
=trouble=
==2020==
<pre>
Better to switch to a domestic mirror, e.g. aliyun; build your own when there is time.

[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubern
To see the stack trace of this error execute with --v=5 or higher
</pre>
==Kubernetes service does not start==
<pre>
After a system reboot the kubelet service was not up. Check, in order:
1. vim  /etc/fstab
#comment out the swap line
2. Add the KUBELET_CGROUP_ARGS and KUBELET_EXTRA_ARGS parameters to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf,
3. and note they must also be added to the start command, as follows:
[Service]
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS
systemctl daemon-reload
systemctl restart kubelet
</pre>
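A one-liner for step 1 that comments out the swap entry (a sketch; verify /etc/fstab afterwards):
<pre>
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
</pre>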
== trouble2: breaks after a reboot==
<pre>
Why does a simple reboot break it?
systemctl  status  kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
  Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
          └─10-kubeadm.conf
  Active: activating (auto-restart) (Result: exit-code) since Fri 2019-05-24 20:27:22 CST; 1s ago
    Docs: https://kubernetes.io/docs/home/
  Process: 1889 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (cod
Main PID: 1889 (code=exited, status=255)
kubelet.service: Main process exited, code=exited, status=255
journalctl -xefu kubelet
It turned out the kubelet cgroup driver did not match docker's: docker defaults to cgroupfs, kubelet defaults to systemd.
Put simply, kubelet keeps restarting until kubeadm init is run.
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
If cluster initialization runs into problems, clean up with the commands below and then initialize again:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
</pre>
[https://segmentfault.com/q/1010000015988481 K8S 初始化问题,有哪位遇到过,求解!timed out waiting for the condition]
== trouble3 ==
<pre>
evan@k8s-master:~$ docker pull gcr.io/kubernetes-helm/tiller:v2.14.0
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=gcr.io%2Fkubernetes-helm%2Ftiller&tag=v2.14.0: dial unix /var/run/docker.sock: connect: permission denied
    sudo usermod -a -G docker $USER  #add the normal user to the docker group
</pre>
[https://www.cnblogs.com/informatics/p/8276172.html Docker pull Get Permission Denied]
==trouble 4==
docker with DNS 223.6.6.6 is sometimes flaky; 8.8.4.4 is recommended
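One way to pin docker's DNS (a sketch; "dns" is a standard daemon.json key, and it should be merged into the existing /etc/docker/daemon.json rather than overwriting it):
<pre>
{
  "dns": ["8.8.4.4", "8.8.8.8"]
}
# then: systemctl restart docker
</pre>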


=see also=


[http://www.jobbible.cn/2019/06/18/205/ 在国内使用阿里云镜像源搭建Kubernetes环境]
 
[https://www.jianshu.com/p/21a39ee86311?utm_campaign=maleskine&utm_content=note&utm_medium=seo_notes&utm_source=recommendation ubuntu 离线搭建Kubenetes1.9.2 集群]
 
[https://www.cnblogs.com/RainingNight/p/using-kubeadm-to-create-a-cluster-1-12.html 使用Kubeadm搭建Kubernetes(1.12.2)集群]
 
 
 
[https://www.debian.cn/archives/3076 Debian 9 使用kubeadm创建 k8s 集群(上)]
 
 
[https://www.debian.cn/archives/3078 Debian 9 使用kubeadm创建 k8s 集群(下)]
 
 
[https://www.linuxtechi.com/install-configure-kubernetes-ubuntu-18-04-ubuntu-18-10/ Install and Configure Kubernetes (k8s) 1.13 on Ubuntu 18.04 LTS / Ubuntu 18.10]
 
[https://www.kubernetes.org.cn/4387.html Ubuntu 18.04 离线安装Kubernetes v1.11.1]
 
[https://www.cnblogs.com/Leo_wl/p/8511902.html 安装部署 Kubernetes 集群]
 
 
 
 
 


https://www.kubernetes.org.cn/course/install


[[Install and Configure Kubernetes (k8s) on ubuntu]]


[https://my.oschina.net/Kanonpy/blog/3006129 kubernetes部署(kubeadm国内镜像源)]




[https://zhuanlan.zhihu.com/p/83254020 Debian 10中部署Kubernetes]


[https://www.cnblogs.com/xuxinkun/p/11025020.html docker/kubernetes国内源/镜像源解决方式]
[https://cloud.tencent.com/developer/article/1461571 k8s常见报错解决--持续更新]






[[category:k8s]] [[category:容器]] [[category: container]]
