Helm Basics


preface

Note: please see Helm3.

Installing the Helm Client

From the Binary Releases

wget -c https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar xvf helm-v2.14.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
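Optionally verify the download first; each release on get.helm.sh also publishes a .sha256 checksum file (a minimal sketch, assuming the v2.14.1 artifact names):

wget -q https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz.sha256
# the .sha256 file contains only the bare digest, so build a line that sha256sum can check
echo "$(cat helm-v2.14.1-linux-amd64.tar.gz.sha256)  helm-v2.14.1-linux-amd64.tar.gz" | sha256sum -c -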

helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Error: could not find tiller
# "could not find tiller" is expected here: only the client is installed so far; the Tiller server is deployed later with helm init

# optionally, an older client release, e.g. to match a tiller image that is actually available
wget -c https://get.helm.sh/helm-v2.9.1-linux-amd64.tar.gz
tar xvf helm-v2.9.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm

Download links

https://helm.sh/docs/using_helm/#installing-helm

Installing Tiller (the Helm server)

warning

On CentOS 7, Helm 3 succeeded; on Ubuntu 18, Helm 2 succeeded.


evan@ubuntu18:~$ helm version 
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}

Using RBAC Authorization

cat > rbac-config.yaml <<EOF  # use > rather than >>, so re-running does not append a duplicate document
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF


kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created


# The tiller pod does not show up yet -- it is only created later, by helm init
kubectl get pod --all-namespaces

# Once Tiller has been deployed, this is the selector to check:
kubectl get pods -n kube-system -l app=helm
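What does already exist at this point is the ServiceAccount and ClusterRoleBinding created above; both can be checked directly:

kubectl get serviceaccount tiller -n kube-system
kubectl get clusterrolebinding tiller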

Kubernetes docs: Using RBAC Authorization

init tiller

helm init --service-account tiller

#If Tiller was already deployed without a ServiceAccount specified, patch one onto the deployment:

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'


Found that node2 already had the tiller v2.14.0 image, so I switched the helm client to v2.14.0 as well and shut down node1. The original v2.14.1 pull had hung for two hours; it is wiser to pull the image in advance.
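A sketch of pulling the image ahead of time on each node, or of pointing helm init at an image that is actually reachable (gcr.io/kubernetes-helm/tiller is the helm 2 default image; substitute a mirror if gcr.io is blocked):

# on every node, pre-pull the image helm init will deploy:
docker pull gcr.io/kubernetes-helm/tiller:v2.14.0

# or tell helm init exactly which image to use (long form of -i):
helm init --service-account tiller --tiller-image gcr.io/kubernetes-helm/tiller:v2.14.0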

[root@master tmp]# helm init --service-account tiller   # sometimes this needs a few retries
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!


[root@master tmp]# kubectl get pods -n kube-system  -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-64f5b9869-xxbm5   1/1     Running   0          4m55s

Installing Tiller from mirrors inside China

#install using a mirror inside China
kubectl apply -f rbac-config.yaml 

helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.9.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

helm init --service-account tiller -i hub.tencentyun.com/helm/tiller:v2.11.0 

[root@node2 ~]# docker pull hub.tencentyun.com/helm/tiller:v2.14.1
Error response from daemon: manifest for hub.tencentyun.com/helm/tiller:v2.14.1 not found

Check whether the ServiceAccount took effect


[root@master ~]# kubectl get deploy --namespace kube-system   tiller-deploy  --output yaml|grep  serviceAccount
      serviceAccount: tiller
      serviceAccountName: tiller

editing the Tiller deployment

 kubectl edit deploy tiller-deploy -n kube-system

      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        - name: KUBERNETES_MASTER
          value: 192.168.88.30:8080

The last two lines (KUBERNETES_MASTER and its value) are newly added.  # on centos7

https://blog.csdn.net/lindao99/article/details/79977702

Calico network

[root@master tmp]# helm init --service-account tiller
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation


[root@master tmp]# helm  repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com):
	Get https://kubernetes-charts.storage.googleapis.com/index.yaml: read tcp 192.168.88.30:57744->172.217.160.112:443: read: connection reset by peer
Update Complete.

#Probably not a pod-network problem; more likely the RBAC/permissions problem
[root@master tmp]#  helm install  stable/redis --dry-run
Error: no available release name found



#install using a mirror inside China
helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.9.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts



https://helm.sh/docs/using_helm/#role-based-access-control

usage

Mirrors inside China

helm repo add stable  http://mirror.azure.cn/kubernetes/charts/
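If the default stable repo is already configured, it can be swapped for the mirror with the standard repo commands:

helm repo remove stable
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
helm repo update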

Stopping the proxy

vim /etc/profile                            # comment out the proxy variables
. /etc/profile

vi /usr/lib/systemd/system/docker.service   # comment out the proxy Environment lines
systemctl daemon-reload
systemctl restart docker
systemctl stop privoxy

Starting the proxy

vim /etc/profile                            # re-enable the proxy variables
. /etc/profile

systemctl start shadowsocks.service
systemctl start privoxy

vi /usr/lib/systemd/system/docker.service   # re-enable the proxy Environment lines
systemctl daemon-reload
systemctl restart docker
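For reference, a sketch of what the proxy settings being toggled above might look like; the 8118 port is privoxy's default, and the exact addresses are assumptions to adapt:

# /etc/profile
export http_proxy=http://127.0.0.1:8118
export https_proxy=http://127.0.0.1:8118
export no_proxy=localhost,127.0.0.1,10.96.0.1,192.168.88.30

# /usr/lib/systemd/system/docker.service, inside the [Service] section
Environment="HTTP_PROXY=http://127.0.0.1:8118" "HTTPS_PROXY=http://127.0.0.1:8118" "NO_PROXY=localhost,127.0.0.1,192.168.88.30"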

Common commands



[root@master tmp]# helm repo list
NAME  	URL                                             
stable	https://kubernetes-charts.storage.googleapis.com
local 	http://127.0.0.1:8879/charts  


[root@master tmp]# helm  repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.


[root@master tmp]# helm search  redis 
NAME                            	CHART VERSION	APP VERSION	DESCRIPTION                                                 
stable/prometheus-redis-exporter	2.0.2        	0.28.0     	Prometheus exporter for Redis metrics                       
stable/redis                    	8.0.16       	5.0.5      	Open source, advanced key-value store. It is often referr...
stable/redis-ha                 	3.6.1        	5.0.5      	Highly available Kubernetes implementation of Redis         
stable/sensu                    	0.2.3        	0.28       	Sensu monitoring framework backed by the Redis transport  

# helm repo list 
NAME  	URL                                      
stable	http://mirror.azure.cn/kubernetes/charts/
local 	http://127.0.0.1:8879/charts             
[root@master tmp]# helm search mysql 
NAME                            	CHART VERSION	APP VERSION	DESCRIPTION                                                 
stable/mysql                    	1.2.2        	5.7.14     	Fast, reliable, scalable, and easy to use open-source rel...
stable/mysqldump                	2.4.2        	2.4.1      	A Helm chart to help backup MySQL databases using mysqldump 
stable/prometheus-mysql-exporter	0.5.0        	v0.11.0    	A Helm chart for prometheus mysql exporter with cloudsqlp...
stable/percona                  	1.1.0        	5.7.17     	free, fully compatible, enhanced, open source drop-in rep...
stable/percona-xtradb-cluster   	1.0.0        	5.7.19     	free, fully compatible, enhanced, open source drop-in rep...
stable/phpmyadmin               	2.2.5        	4.9.0-1    	phpMyAdmin is an mysql administration frontend              
stable/gcloud-sqlproxy          	0.6.1        	1.11       	DEPRECATED Google Cloud SQL Proxy                           
stable/mariadb                  	6.5.4        	10.3.16    	Fast, reliable, scalable, and easy to use open-source rel...
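To look at a chart before installing it, helm 2 also offers inspect and fetch (mysql as the example here):

helm inspect values stable/mysql   # print the chart's default values.yaml
helm fetch stable/mysql            # download the chart archive locally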

evan@ubuntu18:~$ kubectl get  service 
NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hoping-chipmunk-redis-headless   ClusterIP   None            <none>        6379/TCP         11h
hoping-chipmunk-redis-master     ClusterIP   10.96.160.69    <none>        6379/TCP         11h
hoping-chipmunk-redis-slave      ClusterIP   10.109.192.53   <none>        6379/TCP         11h

uninstall


kubectl delete -f rbac-config.yaml

kubectl delete deployment tiller-deploy -n kube-system
deployment.extensions "tiller-deploy" deleted

After a while the pod will be gone.

helm reset

$ helm reset   or: $ helm reset -f   (force-deletes the Tiller pods on the k8s cluster)
To also remove the directories and other data created by helm init, run: helm reset --remove-helm-home
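helm reset normally refuses to run while deployed releases still exist (hence the -f option), so it is cleaner to delete releases first; a sketch reusing the release name from the install further below:

helm ls --all
helm delete --purge hoping-chipmunk   # --purge also frees the release name for reuse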

kubectl get pod --all-namespaces

Reinstall: run init again
rm -rf  /root/.helm/
# helm init --service-account tiller   --upgrade
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.

install with helm

helm --debug install stable/redis 

evan@ubuntu18:~$ helm install stable/redis
NAME:   hoping-chipmunk
LAST DEPLOYED: Sun Jul  7 14:53:22 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                          DATA  AGE
hoping-chipmunk-redis         3     3s

NOTES:
** Please be patient while the chart is being deployed **
Redis can be accessed via port 6379 on the following DNS names from within your cluster:

hoping-chipmunk-redis-master.default.svc.cluster.local for read/write operations
hoping-chipmunk-redis-slave.default.svc.cluster.local for read-only operations


To get your password run:

    export REDIS_PASSWORD=$(kubectl get secret --namespace default hoping-chipmunk-redis -o jsonpath="{.data.redis-password}" | base64 --decode)


To connect to your Redis server:

1. Run a Redis pod that you can use as a client:

   kubectl run --namespace default hoping-chipmunk-redis-client --rm --tty -i --restart='Never' \
    --env REDIS_PASSWORD=$REDIS_PASSWORD \
   --image docker.io/bitnami/redis:5.0.5-debian-9-r36 -- bash

sudo apt install redis-tools   # (author's note) install redis-cli on the host first

2. Connect using the Redis CLI:
   redis-cli -h hoping-chipmunk-redis-master -a $REDIS_PASSWORD
   redis-cli -h hoping-chipmunk-redis-slave -a $REDIS_PASSWORD

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/hoping-chipmunk-redis 6379:6379 &
    redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD



evan@ubuntu18:~$ redis-cli -h hoping-chipmunk-redis-slave -a $REDIS_PASSWORD
Could not connect to Redis at hoping-chipmunk-redis-slave:6379: Temporary failure in name resolution
Could not connect to Redis at hoping-chipmunk-redis-slave:6379: Temporary failure in name resolution
not connected>

The *.svc.cluster.local names only resolve inside the cluster, so running redis-cli straight from the host fails.
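From the host, port-forwarding works instead. Note that the chart NOTES point at svc/hoping-chipmunk-redis, but according to the kubectl get service output above the actual service is hoping-chipmunk-redis-master, so:

kubectl port-forward --namespace default svc/hoping-chipmunk-redis-master 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD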



charts


evan@ubuntu18:~$ helm create hello-helm
Creating hello-helm

evan@ubuntu18:~$ tree hello-helm
hello-helm
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

From templates/deployment.yaml you can see that the chart created by default deploys an nginx service. For what each file is for, see the [https://docs.helm.sh/developing_charts/#charts Helm official docs].
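For reference, a sketch of the generated Chart.yaml (field values assumed to be the helm 2 defaults from helm create):

apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: hello-helm
version: 0.1.0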


Install this chart:

helm install ./hello-helm
NAME:   voting-beetle
LAST DEPLOYED: Mon Jul  8 02:50:04 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                      READY  UP-TO-DATE  AVAILABLE  AGE
voting-beetle-hello-helm  0/1    0           0          1s

==> v1/Pod(related)
NAME                                       READY  STATUS             RESTARTS  AGE
voting-beetle-hello-helm-7f44bd998d-mcxsl  0/1    ContainerCreating  0         0s

==> v1/Service
NAME                      TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
voting-beetle-hello-helm  ClusterIP  10.107.95.218  <none>       80/TCP   1s


NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=hello-helm,app.kubernetes.io/instance=voting-beetle" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

evan@ubuntu18:~$ kubectl port-forward $POD_NAME 8080:80
error: unable to forward port because pod is not running. Current status=Pending
# the pod is still Pending (typically still pulling the nginx image); wait until it is Running and retry


 echo $POD_NAME
voting-beetle-hello-helm-7f44bd998d-mcxsl
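To see why the pod is stuck in Pending, standard kubectl debugging applies:

kubectl describe pod $POD_NAME   # the Events at the bottom show scheduling or image-pull problems
kubectl get pods -w              # watch until the pod reaches Running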



[root@master ~]# helm list
NAME              	REVISION	UPDATED                 	STATUS  	CHART           	APP VERSION	NAMESPACE
tailored-armadillo	1       	Wed Jul 10 20:09:47 2019	DEPLOYED	hello-helm-0.1.0	1.0        	default  


https://www.kubernetes.org.cn/3435.html

helm applications

Setting up an Nginx ingress on DigitalOcean Kubernetes with Helm

tiller startup log

evan@ubuntu18:~$ tiller log
[main] 2019/07/01 07:38:02 Starting Tiller v2.14.0 (tls=false)
[main] 2019/07/01 07:38:02 GRPC listening on :44134
[main] 2019/07/01 07:38:02 Probes listening on :44135
[main] 2019/07/01 07:38:02 Storage driver is ConfigMap
[main] 2019/07/01 07:38:02 Max history per release is 0


log


[root@master tmp]# kubectl get pods -n kube-system  -l app=helm
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-788b748dc8-d6pbz   1/1     Running   0          12m
[root@master tmp]# kubectl logs tiller-deploy-788b748dc8-d6pbz --namespace=kube-system 


[storage/driver] 2019/07/06 02:09:50 get: failed to get "running-seastar.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/running-seastar.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2019/07/06 02:09:50 info: generated name running-seastar is taken. Searching again.
[storage] 2019/07/06 02:09:50 getting release "wise-tiger.v1"
[storage/driver] 2019/07/06 02:10:20 get: failed to get "wise-tiger.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/wise-tiger.v1: dial tcp 10.96.0.1:443: i/o timeout
[tiller] 2019/07/06 02:10:20 info: generated name wise-tiger is taken. Searching again.
[storage] 2019/07/06 02:10:20 getting release "winning-peahen.v1"
[storage/driver] 2019/07/06 02:10:50 get: failed to get "winning-peahen.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/winning-peahen.v1: dial tcp 10.96.0.1:443: i/o timeout


The 10.96.0.x IP range: it was removed from the original no_proxy list in the docker and /etc/profile settings, so that traffic also goes through the circumvention proxy.

trouble

Error: no available release name found


helm install  stable/redis --dry-run
Error: no available release name found

The tiller log is as follows
[root@master tmp]# helm ls
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout


Solution
#(in fact I had already run this once at the very beginning)
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

This step is essential; without it, "Error: no available release name found" is likely to appear later when using Helm.

Explanation
Creating the tiller ServiceAccount is not enough: Tiller had already been deployed, without any ServiceAccount specified, so the ServiceAccount has to be patched onto the existing Tiller deployment.


Along the way I even rented two Vultr machines to confirm it was not a network/circumvention problem.


pull error

kubectl  describe pod  tiller-deploy-7bf78cdbf7-j65sx   --namespace=kube-system


Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  31m                  default-scheduler  Successfully assigned kube-system/tiller-deploy-7bf78cdbf7-j65sx to node2
  Normal   Pulling    29m (x4 over 31m)    kubelet, node2     Pulling image "gcr.io/kubernetes-helm/tiller:v2.14.1"
  Warning  Failed     28m (x4 over 30m)    kubelet, node2     Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.14.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     28m (x4 over 30m)    kubelet, node2     Error: ErrImagePull
  Normal   BackOff    28m (x6 over 30m)    kubelet, node2     Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.14.1"
  Warning  Failed     58s (x120 over 30m)  kubelet, node2     Error: ImagePullBackOff

So the docker proxy on node2 was misconfigured; turning the circumvention proxy back on fixed the pulls.

Still failing at first:

[root@node2 ~]# docker pull gcr.io/kubernetes-helm/tiller:v2.14.1
Error response from daemon: Get https://gcr.io/v2/: net/http: TLS handshake timeout



other errors


Is this the "RBAC disabled" problem?

If you get no available release error in helm, it is likely due to the RBAC issue.

[root@master tmp]#  helm install  stable/redis --dry-run
Error: no available release name found


As explained above: creating the tiller ServiceAccount is not enough when Tiller was already deployed without one, so patch the deployment:

$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
This step is essential; otherwise "Error: no available release name found" may appear later when using Helm.

kubectl logs tiller-deploy-788b748dc8-d6pbz --namespace=kube-system

[storage] 2019/07/06 03:14:35 getting release "plundering-orangutan.v1"
[storage/driver] 2019/07/06 03:15:05 get: failed to get "plundering-orangutan.v1": Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/plundering-orangutan.v1: dial tcp 10.96.0.1:443: i/o timeout



helm --debug install stable/redis

It felt like nothing was getting through, but the 403 below shows the API server is actually reachable from the master; anonymous access is simply forbidden:
[root@master ~]# curl   https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/coiled-elk.v1 -k
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "configmaps \"coiled-elk.v1\" is forbidden: User \"system:anonymous\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "coiled-elk.v1",
    "kind": "configmaps"
  },
  "code": 403
}

[root@master tmp]#  kubectl --namespace=kube-system edit deployment/tiller-deploy 
Edit cancelled, no changes made.
[root@master tmp]# kubectl create serviceaccount --namespace kube-system tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists
[root@master tmp]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
[root@master tmp]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched (no change)



[root@master tmp]#  helm install  stable/redis
Error: failed to download "stable/redis" (hint: running `helm repo update` may help)

[Solved] Helm – Error: no available release name found

helm 3

install helm 3

wget -c https://get.helm.sh/helm-v3.0.0-alpha.1-linux-amd64.tar.gz

tar xvf helm-v3.0.0-alpha.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
helm init   # Helm 3 has no Tiller, so there is no server-side component to install (helm init is gone entirely from Helm 3 final)


Download links


v3.helm.sh: Using Helm


First look at Helm 3

usage

# helm install stable/redis --generate-name
NAME: redis-1562510931
LAST DEPLOYED: 2019-07-07 22:48:59.717787296 +0800 CST m=+8.635515663
NAMESPACE: default
STATUS: deployed

NOTES:
** Please be patient while the chart is being deployed **
Redis can be accessed via port 6379 on the following DNS names from within your cluster:

redis-1562510931-master.default.svc.cluster.local for read/write operations
redis-1562510931-slave.default.svc.cluster.local for read-only operations


To get your password run:

    export REDIS_PASSWORD=$(kubectl get secret --namespace default redis-1562510931 -o jsonpath="{.data.redis-password}" | base64 --decode)

To connect to your Redis server:

1. Run a Redis pod that you can use as a client:

   kubectl run --namespace default redis-1562510931-client --rm --tty -i --restart='Never' \
    --env REDIS_PASSWORD=$REDIS_PASSWORD \
   --image docker.io/bitnami/redis:5.0.5-debian-9-r36 -- bash

2. Connect using the Redis CLI:
   redis-cli -h redis-1562510931-master -a $REDIS_PASSWORD
   redis-cli -h redis-1562510931-slave -a $REDIS_PASSWORD

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/redis-1562510931 6379:6379 &
    redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
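To inspect and remove the release afterwards, note that Helm 3 renames delete to uninstall (a sketch using the generated name above; very early alphas may still use helm delete):

helm list
helm uninstall redis-1562510931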

ubuntu history

1384  helm reset 
 1385  cat >> rbac-config.yaml <<EOF
 1386  apiVersion: v1
 1387  kind: ServiceAccount
 1388  metadata:
 1389    name: tiller
 1390    namespace: kube-system
 1391  ---
 1392  apiVersion: rbac.authorization.k8s.io/v1
 1393  kind: ClusterRoleBinding
 1394  metadata:
 1395    name: tiller
 1396  roleRef:
 1397    apiGroup: rbac.authorization.k8s.io
 1398    kind: ClusterRole
 1399    name: cluster-admin
 1400  subjects:
 1401    - kind: ServiceAccount
 1402      name: tiller
 1403      namespace: kube-system
 1404  EOF
 1405  kubectl create -f rbac-config.yaml
 1406  kubectl get pods -n kube-system  -l app=helm
 1407  kubectl get deploy --namespace kube-system   tiller-deploy  --output yaml|grep  serviceAccount

 mv   .helm .helmbak
 1410  helm init --service-account tiller
 1411  helm version 
 1412  kubectl get pods -n kube-system  -l app=helm

 1416  kubectl  describe pod tiller-deploy-598f58dd45-57cmf  --namespace=kube-system
 1417  kubectl get pods -n kube-system  -l app=helm
 1418  helm reset 

 1420  rm  -rf .helm
 1421  kubectl apply -f rbac-config.yaml

 1424  helm init --service-account tiller
 1425  kubectl get pods -n kube-system  -l app=helm

 1427  kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
 1428  kubectl get pods -n kube-system 
 1429  helm  search redis 
 1430  helm install stable/redis


see also

https://github.com/helm/helm/blob/master/docs/install.md


Helm installation and project usage (DIY chart)

Helm: a powerful tool for Kubernetes service orchestration


A usage guide to Helm, the Kubernetes application management tool


Helm installation and configuration

Helm quick start

Introduction to Helm: installation and usage

10. Installing Helm on k8s

Kubernetes 1.13 complete introduction (14): the Helm package manager, part 1

First experience with Kubernetes Helm


Helm getting-started guide

Helm deployment and usage

Deploying applications on Kubernetes with Helm

Setting up Helm on TKE

Helm container application package manager installation notes

Error: no available release name found

Getting started with Kubernetes Helm


Using Helm to deploy to Kubernetes