Rook Installation


Info

Kubernetes: v1.24.10

Prerequisites

To configure the Ceph storage cluster, at least one of these local storage options is required on the nodes (wipe any previously used disk first; see the sketch below):

Raw devices (no partitions or formatted filesystems); this requires lvm2 to be installed on the host, a dependency you can avoid by creating a single full-disk partition on the disk
Raw partitions (no formatted filesystem)
Persistent Volumes available from a storage class in block mode
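
If a disk was previously partitioned or formatted, Ceph will not consume it, so wipe it first. A destructive sketch, assuming /dev/sdb is the disk dedicated to Ceph:

# WARNING: destroys all data on /dev/sdb
wipefs --all /dev/sdb        # remove filesystem signatures
sgdisk --zap-all /dev/sdb    # remove GPT/MBR partition tables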

Personally, I recommend three or more worker nodes.

First, attach an unformatted disk to each of the three nodes. I use Proxmox, so adding it straight from the UI works: select each node's VM, then Hardware → Add → Hard Disk. Here I add a 30G disk to each.
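
The same can be done from the Proxmox host shell instead of the UI; a sketch, assuming VM ID 100, the local-lvm storage pool, and a free scsi1 slot:

qm set 100 --scsi1 local-lvm:30   # attach a new 30G disk to VM 100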

root@ubuntu-200430-2:~# lsblk  -f 
NAME    FSTYPE         LABEL           UUID                                 FSAVAIL FSUSE% MOUNTPOINT
loop0   squashfs                                                                  0   100% /snap/core20/2015
loop1   squashfs                                                                  0   100% /snap/lxd/24061
loop2   squashfs                                                                  0   100% /snap/snapd/20092
sda                                                                                        
├─sda1  ext4           cloudimg-rootfs 6482e2e0-f3de-4156-bc4b-49b189543d07   12.1G    55% /
├─sda14                                                                                    
└─sda15 vfat           UEFI            AD2F-0F3A                              98.3M     6% /boot/efi
sdb     # empty here before the install; after the install the line reads:

sdb     ceph_bluestore

Install and configure

git clone --single-branch --branch v1.6.3 https://github.com/rook/rook.git
# inside mainland China many of the official images/repos cannot be pulled directly
cd rook/cluster/examples/kubernetes/ceph

Modify the Rook CSI image addresses. The defaults point to gcr.io, which cannot be reached from mainland China, so the gcr images have to be mirrored to an Alibaba Cloud registry. The sync has already been done for this document, so you can change the entries directly as follows:

vim operator.yaml
Change them to:

  ROOK_CSI_REGISTRAR_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-node-driver-registrar:v2.0.1"
  ROOK_CSI_RESIZER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-resizer:v1.0.1"
  ROOK_CSI_PROVISIONER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-provisioner:v2.0.4"
  ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-snapshotter:v4.0.0"
  ROOK_CSI_ATTACHER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-attacher:v3.0.2"
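
These entries live in the rook-ceph-operator-config ConfigMap inside operator.yaml and ship commented out; a quick way to locate them before editing:

grep -n "ROOK_CSI_.*_IMAGE" operator.yaml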

Still in the operator file: newer Rook releases disable the discovery-daemon deployment by default. Find ROOK_ENABLE_DISCOVERY_DAEMON and set it to true:
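
Depending on the Rook version this setting sits either in the rook-ceph-operator-config ConfigMap or as an env var on the operator Deployment; either way, after the change it should read:

ROOK_ENABLE_DISCOVERY_DAEMON: "true"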

## Deploy the Rook Operator (from here on, the official quickstart applies)
cd cluster/examples/kubernetes/ceph   # skip if you are already in this directory from the step above
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# verify the rook-ceph-operator is in the `Running` state before proceeding
kubectl -n rook-ceph get pod



Create the Ceph cluster

kubectl create -f cluster.yaml

# this takes a while; be patient
root@k8smater:~/rook# kubectl -n rook-ceph get pod
NAME                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-6nwx9                            3/3     Running     0          126m
csi-cephfsplugin-kn56d                            3/3     Running     0          126m
csi-cephfsplugin-provisioner-77c7f8f674-9kcn8     6/6     Running     0          126m
csi-cephfsplugin-provisioner-77c7f8f674-vjlp5     6/6     Running     0          126m
csi-cephfsplugin-xgsrj                            3/3     Running     0          126m
csi-rbdplugin-mxbrx                               3/3     Running     0          126m
csi-rbdplugin-p9z5m                               3/3     Running     0          126m
csi-rbdplugin-provisioner-5b78cf5f59-f66cf        6/6     Running     0          126m
csi-rbdplugin-provisioner-5b78cf5f59-t6tjh        6/6     Running     0          126m
csi-rbdplugin-pvkzs                               3/3     Running     0          126m
rook-ceph-crashcollector-work1-77c4b4995-ww7qn    1/1     Running     0          108m
rook-ceph-crashcollector-work2-85fb65c68d-qkgwx   1/1     Running     0          123m
rook-ceph-crashcollector-work3-555d56d445-9lkr4   1/1     Running     0          123m
rook-ceph-mds-myfs-a-6c9fc8cff5-z66fv             1/1     Running     0          108m
rook-ceph-mds-myfs-b-54f94bbd77-h5jnk             1/1     Running     0          108m
rook-ceph-mgr-a-784bb444f8-m48lw                  1/1     Running     0          123m
rook-ceph-mon-a-5699bddbbf-t9g59                  1/1     Running     0          123m
rook-ceph-mon-b-5874d4cc7c-pc8f6                  1/1     Running     0          123m
rook-ceph-mon-c-6c479888cd-rr57r                  1/1     Running     0          123m
rook-ceph-operator-76948f86f7-s69j5               1/1     Running     0          140m
rook-ceph-osd-0-5fd689dc96-ng6c6                  1/1     Running     0          123m
rook-ceph-osd-1-7bcfcf7656-gcrn2                  1/1     Running     0          123m
rook-ceph-osd-prepare-work1-wvvx4                 0/1     Completed   0          122m
rook-ceph-osd-prepare-work2-9pcxl                 0/1     Completed   0          122m
rook-ceph-osd-prepare-work3-wsd8m                 0/1     Completed   0          122m
rook-ceph-tools-897d6797f-k9tcs                   1/1     Running     0          118m
rook-discover-4q7tw                               1/1     Running     0          137m
rook-discover-hrtrl                               1/1     Running     0          137m
rook-discover-nzw6l                               1/1     Running     0          137m
root@k8smater:~/rook# 

To verify that the cluster is in a healthy state, connect to the Rook toolbox and run the ceph status command.

All mons should be in quorum
A mgr should be active
At least one OSD should be active
If the health is not HEALTH_OK, the warnings or errors should be investigated
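
The CephCluster custom resource reports the same health at the kubectl level, which is handy for scripting (the default cluster created by cluster.yaml is assumed):

kubectl -n rook-ceph get cephcluster
# the HEALTH column should eventually show HEALTH_OK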

Install the Ceph client tools

This manifest also lives in the ceph examples directory:

kubectl create -f toolbox.yaml -n rook-ceph
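
To block until the toolbox deployment is ready instead of polling get pod, one option:

kubectl -n rook-ceph rollout status deploy/rook-ceph-tools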

Once the pod is Running, you can exec into it and run Ceph commands:

[root@k8s-master]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-fc5f9586c-m2wf5 /]# ceph status
  cluster:
    id:     9016340d-7f90-4634-9877-aadc927c4e81
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            clock skew detected on mon.b

  services:
    mon: 3 daemons, quorum a,b,c (age 3m)
    mgr: a(active, since 44m)
    osd: 3 osds: 3 up (since 38m), 3 in (since 38m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean
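
Both warnings are common on a fresh cluster. Clock skew is usually cured by running chrony/ntp on every node; the global_id warning can be muted once no old (pre-14.2.20) clients connect. A sketch, run inside the toolbox:

# only after confirming all clients are patched
ceph config set mon auth_allow_insecure_global_id_reclaim false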

Configure the Ceph dashboard

The dashboard is already enabled in the default Ceph install, but its Service is of type ClusterIP and therefore cannot be reached from outside the cluster.

kubectl apply -f dashboard-external-https.yaml

This creates a NodePort Service, making the dashboard externally accessible.
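
For reference, the manifest is essentially a NodePort Service selecting the mgr pods; a sketch of the v1.6 example, so verify against the file in your checkout:

apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
spec:
  type: NodePort
  ports:
    - name: dashboard
      port: 8443
      protocol: TCP
      targetPort: 8443
  selector:
    app: rook-ceph-mgr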

root@k8smater:~/rook/rook/cluster/examples/kubernetes/ceph# kubectl get svc -n rook-ceph|grep dashboard
rook-ceph-mgr-dashboard                  ClusterIP   10.233.209.181   <none>        8443/TCP            7m50s
rook-ceph-mgr-dashboard-external-https   NodePort    10.233.72.80     <none>        8443:30491/TCP      18s

Open it in a browser (replace the IP with one of your cluster's node IPs, and use the NodePort from the Service above, 30491 here):

https://192.168.10.31:30491/#/login?returnUrl=%2Fdashboard

The default username is admin; the password can be retrieved with:

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}"|base64 --decode && echo
iS_cB+NB>|g^"$b&dv:xxxxxxxx

References

https://rook.io/docs/rook/v1.6/ceph-quickstart.html

https://zhuanlan.zhihu.com/p/387531212

https://developer.aliyun.com/article/873291

https://www.cnblogs.com/mgsudu/p/16162617.html

https://juejin.cn/post/7079046477577715742

https://www.cnblogs.com/deny/p/14229987.html