Install and Configure Kubernetes (k8s) on ubuntu

From linux中国网wiki

[[K8s镜像]]

[https://www.jianshu.com/p/d6848c711436 How to get k8s.gcr.io images that cannot be pulled from inside China]
=info=
<pre>
This run uses 18.04  master 58; n1 59; n2 60 #Mon May 27 07:44:35 UTC 2019


Each machine needs at least 2 GB of RAM and 2 CPUs.
All machines in the cluster must be able to reach each other over the network.
Open the required ports; see: https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports (Check required ports)


Kubernetes requires that all machines in the cluster have distinct MAC addresses, product UUIDs, and hostnames. They can be checked with the following commands:

# UUID
cat /sys/class/dmi/id/product_uuid

# MAC address
ip link

# Hostname
cat /etc/hostname



ubuntu 16.04

master 67  all on vbox
node1  66  node2  65
</pre>
https://mirrors.163.com/ubuntu-releases/16.04.6/
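The three checks above can be combined into a single line per host, which makes the output easy to diff across machines. A minimal sketch (not part of the original setup; reading product_uuid normally needs root and the file may be absent on some VMs or containers, hence the fallbacks):

```shell
# Print one "host=... mac=... uuid=..." line; run on every node and compare.
uuid=$(cat /sys/class/dmi/id/product_uuid 2>/dev/null || echo unknown)
# first interface MAC reported by "ip link", or "unknown" if unavailable
mac=$(ip link 2>/dev/null | awk '/link\/ether/ {print $2; exit}')
host=$(cat /etc/hostname 2>/dev/null || hostname)
line="host=${host} mac=${mac:-unknown} uuid=${uuid}"
echo "$line"
```

If any two nodes print the same MAC or UUID (common with cloned VirtualBox images), regenerate them before running kubeadm.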
  
[[使用kubeadm离线部署kubernetesv1.9.0]]

=Set Hostname and update hosts file=
<pre>
sudo hostnamectl set-hostname "k8s-master"
sudo hostnamectl set-hostname k8s-node1
sudo hostnamectl set-hostname k8s-node2

#Add the following lines in the /etc/hosts file on all three systems:

192.168.88.30    k8s-master
192.168.88.31    k8s-node1
192.168.88.32    k8s-node2


192.168.88.58    k8s-master #k8sumaster1
192.168.88.59    k8s-node1 #k8sun1
192.168.88.60    k8s-node2 #k8sun2
</pre>
[[Ubuntu配置网络和hostname]]
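Appending these entries by hand on three machines is easy to get wrong; the edit can be made idempotent. A sketch with a hypothetical add_host helper, writing to a temp file here — point HOSTS_FILE at /etc/hosts (as root) for real use:

```shell
# Append a hosts entry only if the hostname is not present yet (idempotent).
HOSTS_FILE=$(mktemp)   # stand-in for /etc/hosts
add_host() {
    # $1 = IP, $2 = hostname
    grep -qw "$2" "$HOSTS_FILE" || printf '%s    %s\n' "$1" "$2" >> "$HOSTS_FILE"
}
add_host 192.168.88.30 k8s-master
add_host 192.168.88.31 k8s-node1
add_host 192.168.88.30 k8s-master   # second call is a no-op
cat "$HOSTS_FILE"
```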

=Bypassing the GFW=

[[Ubuntu利用shadowsocks和polipo终端翻墙]]
<pre>
cat /etc/profile
#How do we make this load automatically at boot? Otherwise the node goes NotReady
export http_proxy="http://127.0.0.1:8123/"
export https_proxy=$http_proxy
#export no_proxy="localhost,127.0.0.1,192.168.88.58,10.96.0.0,10.224.0.0"
export no_proxy="localhost,127.0.0.1,192.168.88.58,10.96.0.0,10.224.0.0,10.224.*"
</pre>
  
If you prefer not to use a proxy, see [https://www.cnblogs.com/RainingNight/p/using-kubeadm-to-create-a-cluster-1-12.html 使用Kubeadm搭建Kubernetes(1.12.2)集群]
 
  
=ins docker=
apt-get install docker.io -y #only 4 ubuntu

==debian9 or 10==
===Install using the repository on debian===
<pre>
apt install software-properties-common
apt-get remove docker docker-engine docker.io containerd runc

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/debian \
  $(lsb_release -cs) \
  stable"
apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
 
</pre>
https://docs.docker.com/install/linux/docker-ce/debian/

===2. install from a package on debian===

Go to https://download.docker.com/linux/debian/dists/, choose your Debian version, browse to pool/stable/, choose either amd64 or armhf, and download the .deb file for the Docker CE version you want to install.

I am on stretch, so:
apt install libltdl7

http://mirrors.aliyun.com/docker-ce/linux/debian/dists/stretch/pool/stable/amd64/


[[Docker入门]]

=docker proxy settings=
<pre>
#Don't omit the leading [Service]; and remember to check that the proxy actually works
mkdir -p /etc/systemd/system/docker.service.d
vi /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTPS_PROXY=http://127.0.0.1:8123/" "HTTP_PROXY=http://127.0.0.1:8123/" "NO_PROXY=localhost,127.0.0.1,192.168.88.67,10.96.0.0,10.224.0.0"

#Environment="HTTP_PROXY=http://proxy.example.com:80/" "HTTPS_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"

systemctl daemon-reload
systemctl restart docker
systemctl enable docker

systemctl show --property=Environment docker


other:
evan@k8s-master:~$ sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
</pre>
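The drop-in can also be generated from a variable instead of typed into vi, which avoids quoting mistakes. A sketch writing to a temp directory — on a real host use /etc/systemd/system/docker.service.d and follow with systemctl daemon-reload && systemctl restart docker:

```shell
# Generate the systemd drop-in for the Docker daemon proxy from one variable.
DEST=$(mktemp -d)   # stand-in for /etc/systemd/system/docker.service.d
PROXY="http://127.0.0.1:8123/"
cat > "$DEST/http-proxy.conf" <<EOF
[Service]
Environment="HTTP_PROXY=${PROXY}" "HTTPS_PROXY=${PROXY}" "NO_PROXY=localhost,127.0.0.1,10.96.0.0,10.224.0.0"
EOF
cat "$DEST/http-proxy.conf"
```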
[https://docs.docker.com/config/daemon/systemd/ docker http-proxy]

[https://www.jianshu.com/p/1cb70b8ea2d7 Docker proxy settings]

[https://blog.frognew.com/2017/01/docker-http-proxy.html Docker proxy configuration: pulling images through a proxy server]

[http://silenceper.com/blog/201809/over-the-wall-pull-docker-mirror/ docker pull: fetching images through the GFW]

[https://blog.csdn.net/northeastsqure/article/details/60143144 Setting a proxy for Docker]

[https://www.cnblogs.com/atuotuo/p/7298673.html Docker: setting an HTTP/HTTPS proxy]

=ins on all nodes=
<pre>
swapoff -a;  sudo usermod -a -G docker $USER

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl


#Do not start kubelet before running init
#systemctl start kubelet &&  systemctl enable kubelet.service


It would not start:
the kubelet cgroup driver did not match Docker's. Docker defaults to cgroupfs, kubelet defaults to systemd.

https://kubernetes.io/docs/setup/cri/
#This was changed here; it worked on 18.04
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

systemctl daemon-reload && systemctl restart kubelet &&  systemctl enable kubelet.service
</pre>
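If the drop-in already exists with the wrong driver, one sed is enough to flip it. Demonstrated on a temp copy of 10-kubeadm.conf; edit the real file as root, then systemctl daemon-reload && systemctl restart kubelet:

```shell
# Flip the kubelet cgroup driver to cgroupfs to match Docker's default.
CONF=$(mktemp)   # stand-in for /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cat > "$CONF" <<'EOF'
[Service]
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
EOF
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' "$CONF"
grep -- '--cgroup-driver' "$CONF"
```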

=Configure the cgroup driver needed by kubelet on the master node=
<pre>
When Docker is in use, kubeadm detects the cgroup driver for the kubelet automatically and writes it to /var/lib/kubelet/kubeadm-flags.env at runtime.
If you use a different CRI, you must change the cgroup-driver value in the /etc/default/kubelet file accordingly, like this:

KUBELET_EXTRA_ARGS=--cgroup-driver=<value>

This file is used by kubeadm init and kubeadm join to pass extra user-defined arguments to the kubelet.

Note that you only need to do this when your cgroup driver is not cgroupfs, because cgroupfs is already the kubelet default.

systemctl daemon-reload; systemctl restart kubelet #kubelet must be restarted

#me
evan@k8s-master:~$ cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf
</pre>


=Initialize the master=
<pre>#14:25:52--14:47:55 kubelet is in fact not running before init
kubeadm init  --apiserver-advertise-address=192.168.88.30  --pod-network-cidr=10.224.0.0/16 # --apiserver-advertise-address=masterip

kubeadm join 192.168.88.58:6443 --token fuwhe0.ro0c8u82u4xtmn8q \
    --discovery-token-ca-cert-hash sha256:83bd9c19486c44fde674f4ccf0a7382848cd7bfeff8c361d54e7a2955a4dbd60


Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf


A small tip: while init is running, open another terminal and run

journalctl -f -u kubelet.service

to see exactly what it is stuck on.
</pre>
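The token and CA cert hash printed by kubeadm init are easy to lose. A sketch for recovering the full join command from a saved init log; the log content is inlined below as a stand-in for `kubeadm init ... | tee kubeadm-init.log`:

```shell
# Extract the "kubeadm join ..." line plus its backslash continuation from a
# saved init log and fold it into a single command.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Your Kubernetes control-plane has initialized successfully!
kubeadm join 192.168.88.58:6443 --token fuwhe0.ro0c8u82u4xtmn8q \
    --discovery-token-ca-cert-hash sha256:83bd9c19486c44fde674f4ccf0a7382848cd7bfeff8c361d54e7a2955a4dbd60
EOF
# -A1 keeps the continuation line; tr removes the backslash and the newline
JOIN_CMD=$(grep -A1 'kubeadm join' "$LOG" | tr -d '\\\n')
echo "$JOIN_CMD"
```

On a live master the same information can be regenerated later with `kubeadm token create --print-join-command`.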


=Configure kubectl credentials=
<pre>
cat  /etc/sudoers.d/evan
echo 'evan ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/evan

su - evan
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "source <(kubectl completion bash)" >> ~/.bashrc
exit

# For the root user this step must not be skipped; otherwise  kubectl  apply -f kube-flannel.yml  fails with: The connection to the server localhost:8080 was refused - did you specify the right host or port?

export KUBECONFIG=/etc/kubernetes/admin.conf
#or put it straight into ~/.bash_profile
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
</pre>
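The mkdir/cp/chown dance can be compressed with install(1), which copies and sets the mode in one step. A sketch against throwaway paths: KUBEHOME stands in for the user's $HOME and the temp file for /etc/kubernetes/admin.conf:

```shell
# Install a kubeconfig with owner-only permissions in one command.
ADMIN_CONF=$(mktemp)            # stand-in for /etc/kubernetes/admin.conf
echo 'apiVersion: v1' > "$ADMIN_CONF"
KUBEHOME=$(mktemp -d)           # stand-in for $HOME
mkdir -p "$KUBEHOME/.kube"
install -m 600 "$ADMIN_CONF" "$KUBEHOME/.kube/config"
stat -c '%a' "$KUBEHOME/.kube/config"
```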


=Install the pod network on the master=
#as the regular user, without the proxy
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


=Add nodes=
No proxy needed now; open a fresh window.
<pre>
kubeadm join 192.168.88.58:6443 --token fuwhe0.ro0c8u82u4xtmn8q \
    --discovery-token-ca-cert-hash sha256:83bd9c19486c44fde674f4ccf0a7382848cd7bfeff8c361d54e7a2955a4dbd60


evan@k8s-master:~$ kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
k8s    NotReady   master   5h12m   v1.14.2
u16    NotReady   <none>   106m    v1.14.2

evan@k8s-master:~$ kubectl get pod --all-namespaces
NAMESPACE     NAME                          READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-nprqq       0/1     Terminating         16         5h11m
kube-system   coredns-fb8b8dccf-qn85f       0/1     Pending             0          5m4s
kube-system   coredns-fb8b8dccf-sgtw4       0/1     Terminating         16         5h11m
kube-system   coredns-fb8b8dccf-wsnkg       0/1     Pending             0          5m5s
kube-system   etcd-k8s                      1/1     Running             0          5h11m
kube-system   kube-apiserver-k8s            1/1     Running             0          5h11m
kube-system   kube-controller-manager-k8s   1/1     Running             0          5h11m
kube-system   kube-flannel-ds-amd64-8vvn6   0/1     Init:0/1            0          107m
kube-system   kube-flannel-ds-amd64-q92vz   1/1     Running             0          112m
kube-system   kube-proxy-85vkt              0/1     ContainerCreating   0          107m
kube-system   kube-proxy-fr7lv              1/1     Running             0          5h11m
kube-system   kube-scheduler-k8s            1/1     Running             0          5h11m


evan@k8s-master:~$ kubectl describe pod  kube-proxy-85vkt  --namespace=kube-system
Name:               kube-proxy-85vkt
Namespace:          kube-system
Priority:           2000001000
PriorityClassName:  system-node-critical
Node:               u16/192.168.88.66
****

Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Normal   Scheduled               109m                  default-scheduler  Successfully assigned kube-system/kube-proxy-85vkt to u16
  Normal   Pulling                 108m                  kubelet, u16       Pulling image "k8s.gcr.io/kube-proxy:v1.14.2"
  Normal   Pulled                  107m                  kubelet, u16       Successfully pulled image "k8s.gcr.io/kube-proxy:v1.14.2"
  Normal   Created                 107m                  kubelet, u16       Created container kube-proxy
  Normal   Started                 107m                  kubelet, u16       Started container kube-proxy
  Warning  FailedCreatePodSandBox  52m (x119 over 107m)  kubelet, u16       Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Left it overnight; it was still broken in the morning, then it suddenly came good.

evan@ubuntu18:~$ kubectl get pod --all-namespaces
NAMESPACE    NAME                               READY  STATUS   RESTARTS  AGE
kube-system  coredns-fb8b8dccf-2rbwc            1/1    Running  3         18h
kube-system  coredns-fb8b8dccf-67zc2            1/1    Running  3         18h
kube-system  etcd-ubuntu18                      1/1    Running  10        18h
kube-system  kube-apiserver-ubuntu18            1/1    Running  4         18h
kube-system  kube-controller-manager-ubuntu18   1/1    Running  5         18h
kube-system  kube-flannel-ds-amd64-b6bn8        1/1    Running  45        16h
kube-system  kube-flannel-ds-amd64-v9wxm        1/1    Running  46        16h
kube-system  kube-flannel-ds-amd64-zn4xd        1/1    Running  3         16h
kube-system  kube-proxy-d7pmb                   1/1    Running  4         18h
kube-system  kube-proxy-gcddr                   1/1    Running  0         16h
kube-system  kube-proxy-lv8cb                   1/1    Running  0         16h
kube-system  kube-scheduler-ubuntu18            1/1    Running  5         18h



The master can also serve as a node; the master hostname here is ubuntu18
evan@ubuntu18:~$ kubectl  taint node ubuntu18 node-role.kubernetes.io/master-
node/ubuntu18 untainted

#master only
kubectl  taint node ubuntu18 node-role.kubernetes.io/master="":NoSchedule

</pre>


=Use the master as a node too=
<pre>
[root@master tomcat]# hostname
master
[root@master tomcat]# kubectl taint node master node-role.kubernetes.io/master-
node/master untainted
</pre>


=Is the proxy still needed from here on?=

=chapter 4  k8s architecture=
<pre>
#the only k8s component that does not run as a container
evan@k8s-master:~$ sudo systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
  Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
          └─10-kubeadm.conf
  Active: active (running) since Mon 2019-05-27 07:26:18 UTC; 21min ago
    Docs: https://kubernetes.io/docs/home/
Main PID: 817 (kubelet)
    Tasks: 19 (limit: 3499)
  CGroup: /system.slice/kubelet.service
          └─817 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf -



Issue a create-application request on the master node.
Here we create an application named httpd-app, image httpd, with two replica pods:
evan@k8s-master:~$ kubectl run httpd-app --image=httpd --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/httpd-app created

evan@k8s-master:~$ kubectl get deployment
NAME        READY  UP-TO-DATE  AVAILABLE  AGE
httpd-app   0/2    2           0          103s

evan@k8s-master:~$ kubectl get pods -o wide
NAME                        READY  STATUS             RESTARTS  AGE    IP      NODE       NOMINATED NODE  READINESS GATES
httpd-app-6df58645c6-bvg9w  0/1    ContainerCreating  0         2m10s  <none>  k8s-node1  <none>          <none>
httpd-app-6df58645c6-n9xdj  0/1    ContainerCreating  0         2m10s  <none>  k8s-node2  <none>          <none>

evan@k8s-master:~$ kubectl get pods -o wide
NAME                        READY  STATUS             RESTARTS  AGE    IP          NODE       NOMINATED NODE  READINESS GATES
httpd-app-6df58645c6-bvg9w  0/1    ContainerCreating  0         3m58s  <none>      k8s-node1  <none>          <none>
httpd-app-6df58645c6-n9xdj  1/1    Running            0         3m58s  10.224.1.2  k8s-node2  <none>          <none>
#OK now
evan@k8s-master:~$ kubectl get pods -o wide
NAME                        READY  STATUS   RESTARTS  AGE   IP          NODE       NOMINATED NODE  READINESS GATES
httpd-app-6df58645c6-bvg9w  1/1    Running  0         6m8s  10.224.2.3  k8s-node1  <none>          <none>
httpd-app-6df58645c6-n9xdj  1/1    Running  0         6m8s  10.224.1.2  k8s-node2  <none>          <none>

</pre>
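Rather than re-running kubectl get pods by hand until everything is Running, the wait can be scripted. In this sketch kubectl is stubbed with a shell function so it runs anywhere; delete the stub on a real cluster:

```shell
# Poll until every pod reports Running. kubectl is STUBBED below so the
# sketch is self-contained; remove the function to use the real client.
kubectl() { printf 'httpd-app-6df58645c6-bvg9w 1/1 Running\nhttpd-app-6df58645c6-n9xdj 1/1 Running\n'; }

wait_ready() {
    for i in 1 2 3 4 5; do
        # column 3 of "kubectl get pods --no-headers" is the pod STATUS
        if ! kubectl get pods --no-headers | awk '{print $3}' | grep -qv Running; then
            echo ready; return 0
        fi
        sleep 2
    done
    echo timeout; return 1
}
RESULT=$(wait_ready)
echo "$RESULT"
```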


=Now stop ss, the docker proxy, and polipo=

=chapter 5 run apps=
<pre>
evan@k8s-master:~$ kubectl run nginx-deployment --image=nginx:1.7.9 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deployment created

The command above deploys the Deployment nginx-deployment with two replicas, using container image nginx:1.7.9.

Wait a while:
kubectl get deployment nginx-deployment
NAME              READY  UP-TO-DATE  AVAILABLE  AGE
nginx-deployment  2/2    2           2          36m


Next, use kubectl describe deployment for more detailed information.

</pre>


=Waiting=
<pre>
sudo  sslocal -c /root/shadowsocks.json -d start
sslocal -c shadowsocks.json -d start
</pre>

=Going further=

[https://blog.csdn.net/shida_csdn/article/details/83176735 K8S 源码探秘 之 kubeadm init 执行流程分析]

[https://blog.csdn.net/m0_37556444/article/details/86494791 kubeadm--init]

[https://www.jianshu.com/p/c01ba5bd1359 安装k8s Master高可用集群]

=What is new=
In Kubernetes 1.11, DNS-based service discovery with CoreDNS reached GA, and CoreDNS can be used as a replacement for the kube-dns addon. This means CoreDNS will be offered as an option in future releases of the various install tools.
In fact, the kubeadm team chose it as the default for Kubernetes 1.11.

[https://blog.csdn.net/k8scaptain/article/details/81033095 CoreDNS正式GA | kube-dns与CoreDNS有何差异?]

[https://juejin.im/post/5b46100de51d4519105d37e3 k8s集群配置使用coredns代替kube-dns]

=trouble=

==Kubernetes service does not start==
<pre>
After a reboot the kubelet service was not up. Check, in order:

1. vim  /etc/fstab
#comment out the swap line in it.

2.
Add the KUBELET_CGROUP_ARGS and KUBELET_EXTRA_ARGS parameters to the
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf file,

3. and note that they must also be appended to the start command, like this:
[Service]

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

systemctl daemon-reload
systemctl restart kubelet
</pre>
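Step 1 can be done non-interactively. A sketch that comments out the swap line with sed, run here against a temp copy of fstab; use /etc/fstab (as root) for real:

```shell
# Comment out the swap line so swap stays off across reboots.
FSTAB=$(mktemp)   # stand-in for /etc/fstab
cat > "$FSTAB" <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF
sed -i '/\sswap\s/ s/^/#/' "$FSTAB"
cat "$FSTAB"
```

Pair it with `swapoff -a` so swap is also off immediately.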

==trouble2: breaks after a reboot==
<pre>
Why does a simple reboot break it?

systemctl  status  kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
  Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
          └─10-kubeadm.conf
  Active: activating (auto-restart) (Result: exit-code) since Fri 2019-05-24 20:27:22 CST; 1s ago
    Docs: https://kubernetes.io/docs/home/
  Process: 1889 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (cod
Main PID: 1889 (code=exited, status=255)



kubelet.service: Main process exited, code=exited, status=255


journalctl -xefu kubelet

It turned out the kubelet cgroup driver did not match Docker's: Docker defaults to cgroupfs, kubelet defaults to systemd.

Put simply, kubelet keeps restarting until kubeadm init is run.


[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'


If cluster initialization runs into problems, clean up with the commands below and then initialize again:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

</pre>


[https://segmentfault.com/q/1010000015988481 K8S 初始化问题,有哪位遇到过,求解!timed out waiting for the condition]

==trouble3==
<pre>
evan@k8s-master:~$ docker pull gcr.io/kubernetes-helm/tiller:v2.14.0
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=gcr.io%2Fkubernetes-helm%2Ftiller&tag=v2.14.0: dial unix /var/run/docker.sock: connect: permission denied


    sudo usermod -a -G docker $USER #add the regular user to the docker group

</pre>
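A quick way to check whether the group fix has taken effect (remember it only applies after logging out and back in):

```shell
# Report whether the current user can already reach the Docker socket via
# group membership; "id -nG" lists the current user's group names.
if id -nG | tr ' ' '\n' | grep -qx docker; then
    MSG="already in the docker group"
else
    MSG="not in the docker group - run: sudo usermod -aG docker \$USER, then log out and back in"
fi
echo "$MSG"
```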
[https://www.cnblogs.com/informatics/p/8276172.html Docker pull Get Permission Denied]

==trouble4==
docker: 223.6.6.6 is sometimes unreliable as a DNS server; 8.8.4.4 is recommended instead.

=see also=

[https://www.jianshu.com/p/21a39ee86311 ubuntu 离线搭建Kubenetes1.9.2 集群]

[https://www.cnblogs.com/RainingNight/p/using-kubeadm-to-create-a-cluster-1-12.html 使用Kubeadm搭建Kubernetes(1.12.2)集群]

[https://www.debian.cn/archives/3076 Debian 9 使用kubeadm创建 k8s 集群(上)]

[https://www.debian.cn/archives/3078 Debian 9 使用kubeadm创建 k8s 集群(下)]

[https://www.linuxtechi.com/install-configure-kubernetes-ubuntu-18-04-ubuntu-18-10/ Install and Configure Kubernetes (k8s) 1.13 on Ubuntu 18.04 LTS / Ubuntu 18.10]

[https://www.kubernetes.org.cn/4387.html Ubuntu 18.04 离线安装Kubernetes v1.11.1]

[https://www.cnblogs.com/Leo_wl/p/8511902.html 安装部署 Kubernetes 集群]

[[category:k8s]] [[category:容器]] [[category: container]]

Revision as of 07:39, 27 December 2019

  Warning  FailedCreatePodSandBox  52m (x119 over 107m)  kubelet, u16       Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
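The FailedCreatePodSandBox event above is the familiar k8s.gcr.io pull failure. One workaround (see the mirror link at the top of this page) is to pull the same image from a domestic mirror and retag it. A minimal sketch, assuming `registry.aliyuncs.com/google_containers` carries the tag you need (verify the mirror before relying on it); it prints the pull + retag commands rather than running them, so the mapping can be reviewed first:

```shell
#!/bin/sh
# Map a k8s.gcr.io image name to a commonly used mirror and print the
# docker pull / docker tag commands that would fetch and relabel it.
# registry.aliyuncs.com/google_containers is an assumption, not a guarantee.
mirror_pull_cmds() {
    img="$1"                                  # e.g. k8s.gcr.io/pause:3.1
    name="${img#k8s.gcr.io/}"                 # strip the registry prefix
    mirror="registry.aliyuncs.com/google_containers/${name}"
    echo "docker pull ${mirror}"
    echo "docker tag ${mirror} ${img}"
}

mirror_pull_cmds "k8s.gcr.io/pause:3.1"
```

Piping the output to `sh` (after checking it) performs the actual pull and retag on the node that is failing.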

Left it overnight; it was still broken in the morning, then suddenly it was fine.

evan@ubuntu18:~$ kubectl get pod --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-2rbwc            1/1     Running   3          18h
kube-system   coredns-fb8b8dccf-67zc2            1/1     Running   3          18h
kube-system   etcd-ubuntu18                      1/1     Running   10         18h
kube-system   kube-apiserver-ubuntu18            1/1     Running   4          18h
kube-system   kube-controller-manager-ubuntu18   1/1     Running   5          18h
kube-system   kube-flannel-ds-amd64-b6bn8        1/1     Running   45         16h
kube-system   kube-flannel-ds-amd64-v9wxm        1/1     Running   46         16h
kube-system   kube-flannel-ds-amd64-zn4xd        1/1     Running   3          16h
kube-system   kube-proxy-d7pmb                   1/1     Running   4          18h
kube-system   kube-proxy-gcddr                   1/1     Running   0          16h
kube-system   kube-proxy-lv8cb                   1/1     Running   0          16h
kube-system   kube-scheduler-ubuntu18            1/1     Running   5          18h



Use the master as a node too; here the master hostname is ubuntu18
evan@ubuntu18:~$ kubectl  taint node ubuntu18 node-role.kubernetes.io/master-
node/ubuntu18 untainted

# to make it master-only again (re-add the taint):
kubectl  taint node ubuntu18 node-role.kubernetes.io/master="":NoSchedule

The same, on a master whose hostname is master:

 [root@master tomcat]# hostname
master
[root@master tomcat]# kubectl taint node master node-role.kubernetes.io/master-
node/master untainted 


Is the proxy perhaps no longer needed from here on?

Chapter 4: k8s architecture

# the only k8s component that does not run as a container
evan@k8s-master:~$ sudo systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2019-05-27 07:26:18 UTC; 21min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 817 (kubelet)
    Tasks: 19 (limit: 3499)
   CGroup: /system.slice/kubelet.service
           └─817 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf -



Create an application from the master node.
Here we create an app named httpd-app, image httpd, with two replica pods:
evan@k8s-master:~$ kubectl run httpd-app --image=httpd --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/httpd-app created

evan@k8s-master:~$ kubectl get deployment
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
httpd-app   0/2     2            0           103s

evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY   STATUS              RESTARTS   AGE     IP       NODE        NOMINATED NODE   READINESS GATES
httpd-app-6df58645c6-bvg9w   0/1     ContainerCreating   0          2m10s   <none>   k8s-node1   <none>           <none>
httpd-app-6df58645c6-n9xdj   0/1     ContainerCreating   0          2m10s   <none>   k8s-node2   <none>           <none>

evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY   STATUS              RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
httpd-app-6df58645c6-bvg9w   0/1     ContainerCreating   0          3m58s   <none>       k8s-node1   <none>           <none>
httpd-app-6df58645c6-n9xdj   1/1     Running             0          3m58s   10.224.1.2   k8s-node2   <none>           <none>
# OK now
evan@k8s-master:~$ kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
httpd-app-6df58645c6-bvg9w   1/1     Running   0          6m8s   10.224.2.3   k8s-node1   <none>           <none>
httpd-app-6df58645c6-n9xdj   1/1     Running   0          6m8s   10.224.1.2   k8s-node2   <none>           <none>

Next, shut down the shadowsocks / polipo Docker proxy.

Chapter 5: run apps

evan@k8s-master:~$ kubectl run nginx-deployment --image=nginx:1.7.9 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deployment created

The command above deploys a Deployment named nginx-deployment with two replicas; the container image is nginx:1.7.9.

After waiting a while:
kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           36m


Next, use kubectl describe deployment for more detail.

Waiting (start the proxy if pulls hang):

sudo sslocal -c /root/shadowsocks.json -d start
sslocal -c shadowsocks.json -d start

Advanced

K8S 源码探秘 之 kubeadm init 执行流程分析

kubeadm--init

安装k8s Master高可用集群

What is new

In Kubernetes 1.11, CoreDNS reached GA for DNS-based service discovery and can replace the kube-dns add-on. This means CoreDNS will be offered as an option in future releases of the various installation tools; in fact, the kubeadm team chose it as the default for Kubernetes 1.11.

CoreDNS正式GA | kube-dns与CoreDNS有何差异?

k8s集群配置使用coredns代替kube-dns

trouble

Kubernetes service does not start

After a system reboot the kubelet service was down. First check:

 1. vim /etc/fstab
# comment out the swap line in it.
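The manual fstab edit above can also be scripted. A sketch, shown here against a temporary copy of fstab; on a real node run the same `sed` on /etc/fstab (a `.bak` backup is kept) together with `swapoff -a`:

```shell
# Work on a temporary copy of a sample fstab for illustration.
cp_fstab=$(mktemp)
cat > "$cp_fstab" <<'EOF'
UUID=abcd-1234 /        ext4 errors=remount-ro 0 1
/swapfile      none     swap sw                0 0
EOF

# Comment out any line whose filesystem type field is swap;
# the original is saved with a .bak suffix.
sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^/#/' "$cp_fstab"
grep swap "$cp_fstab"
```

On the real node: `sudo swapoff -a && sudo sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab`.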

2. Add the KUBELET_CGROUP_ARGS and KUBELET_EXTRA_ARGS parameters to the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf,


3. and note that they must also be added to the startup command, as follows:
[Service]

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

systemctl daemon-reload
systemctl restart kubelet

trouble 2: breaks after a reboot

Why does it break as soon as the machine is rebooted?

systemctl  status  kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Fri 2019-05-24 20:27:22 CST; 1s ago
     Docs: https://kubernetes.io/docs/home/
  Process: 1889 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (cod
 Main PID: 1889 (code=exited, status=255)



kubelet.service: Main process exited, code=exited, status=255


journalctl -xefu kubelet

It turned out the kubelet cgroup driver did not match Docker's: Docker defaults to cgroupfs, while kubelet defaults to systemd.


Simply put, kubelet keeps restarting until kubeadm init has been run.
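A quick way to confirm the mismatch: on a real node the two values come from `docker info --format '{{.CgroupDriver}}'` and from the `--cgroup-driver` flag kubeadm records (e.g. in /var/lib/kubelet/kubeadm-flags.env; the exact location varies by version). A comparison sketch with those outputs stubbed in as placeholders:

```shell
# Compare the two cgroup drivers and report whether they match.
check_drivers() {
    # $1 = docker's driver, $2 = kubelet's driver (placeholders below;
    # substitute the real command output on an actual node)
    if [ "$1" != "$2" ]; then
        echo "mismatch: docker=$1 kubelet=$2"
    else
        echo "ok: both $1"
    fi
}

check_drivers cgroupfs systemd
```

A "mismatch" result means one of the two drivers has to be changed (the fix in trouble 1 above sets kubelet's via KUBELET_CGROUP_ARGS).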


[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'


If cluster initialization runs into problems, clean up with the commands below before initializing again:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/



K8S 初始化问题,有哪位遇到过,求解!timed out waiting for the condition

trouble 3

evan@k8s-master:~$ docker pull gcr.io/kubernetes-helm/tiller:v2.14.0
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=gcr.io%2Fkubernetes-helm%2Ftiller&tag=v2.14.0: dial unix /var/run/docker.sock: connect: permission denied


    sudo usermod -a -G docker $USER  # add the normal user to the docker group

Docker pull Get Permission Denied

trouble 4

Docker with the 223.6.6.6 DNS server is sometimes unreliable; 8.8.4.4 is recommended instead.
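One way to pin Docker's DNS is the `dns` key in /etc/docker/daemon.json, followed by `systemctl restart docker`. Sketched here against a temporary file so the JSON can be checked before installing it:

```shell
# Write a minimal daemon.json fragment pinning Docker's DNS servers.
# Shown against a temp file; the real path is /etc/docker/daemon.json.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "dns": ["8.8.4.4", "8.8.8.8"]
}
EOF
cat "$conf"
```

After copying it into place: `sudo systemctl restart docker` for the setting to take effect.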

see also

ubuntu 离线搭建Kubenetes1.9.2 集群

使用Kubeadm搭建Kubernetes(1.12.2)集群


Debian 9 使用kubeadm创建 k8s 集群(上)


Debian 9 使用kubeadm创建 k8s 集群(下)


Install and Configure Kubernetes (k8s) 1.13 on Ubuntu 18.04 LTS / Ubuntu 18.10

Ubuntu 18.04 离线安装Kubernetes v1.11.1

安装部署 Kubernetes 集群