Kubernetes Quick Start 17 - Helm

The official Helm site is https://helm.sh/.

Helm helps us manage complex Kubernetes applications; an application to be deployed is defined and managed as a Chart.

Helm's core terms:

Chart: a Helm package containing the resource manifests needed to run a Kubernetes application
Repository: a chart repository, an HTTP/HTTPS server that stores charts
Release: an instance of a Chart deployed to a Kubernetes cluster
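Assuming a working Helm v2 setup, the three terms map onto a typical workflow roughly like this (the chart and release names are illustrative):

```shell
# Add a chart Repository (the v2 stable repo)
helm repo add stable https://kubernetes-charts.storage.googleapis.com
# Find a Chart in it
helm search memcached
# Install the Chart; the resulting cluster instance is a Release named mc1
helm install stable/memcached --name mc1
```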

Helm's architecture:

helm: the client command; it manages local chart repositories and charts, talks to the Tiller server, sends it the chart, and drives installing, querying, and uninstalling instances
Tiller: the server side, normally running inside the Kubernetes cluster; it receives the chart and config sent by helm, merges them, and talks to the API Server to deploy the application, producing a release

Installing Helm

Reference: https://v2.helm.sh/docs/using_helm/#installing-helm

Download the release tarball from https://github.com/helm/helm/releases; the stable version at the time of writing is v2.16.9.

k8s@node01:~/install_k8s/helm$ wget https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz
k8s@node01:~/install_k8s/helm$ tar xf helm-v2.16.9-linux-amd64.tar.gz
k8s@node01:~/install_k8s/helm$ sudo mv linux-amd64/helm /usr/local/bin/
k8s@node01:~/install_k8s/helm$ helm --help # show command help

Installing Tiller

Tiller is Helm's server side. It can be installed locally and configured to talk to a remote Kubernetes cluster, but it is usually hosted inside the cluster itself. When you run helm init, helm contacts the API Server and has it deploy the Tiller pod, so the helm client needs credentials for the cluster and the corresponding RBAC authorization. Helm decides where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config), so it is typically run on the cluster's master node: the current-context entry in that file determines which cluster receives the Tiller pod. To see which cluster Tiller will be installed into, run kubectl config current-context or kubectl cluster-info.

Because helm reuses kubectl's credentials to talk to the API Server, the next step is RBAC authorization: binding the Tiller ServiceAccount to the cluster-admin ClusterRole gives it most of the cluster's permissions. Reference: https://v2.helm.sh/docs/using_helm/#role-based-access-control

k8s@node01:~/install_k8s/helm$ cat rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
k8s@node01:~/install_k8s/helm$ kubectl apply -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
k8s@node01:~/install_k8s/helm$ kubectl get sa -n kube-system | grep tiller  # the ServiceAccount was created
tiller                               1         15s

# Initialize (this deploys Tiller)
k8s@node01:~/install_k8s/helm$ helm init --service-account tiller --history-max 200
Creating /home/k8s/.helm
Creating /home/k8s/.helm/repository
Creating /home/k8s/.helm/repository/cache
Creating /home/k8s/.helm/repository/local
Creating /home/k8s/.helm/plugins
Creating /home/k8s/.helm/starters
Creating /home/k8s/.helm/cache/archive
Creating /home/k8s/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/k8s/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/

# Check that the Tiller pod is running; its image lives on gcr.io, so pulling it may require a proxy
k8s@node01:~/install_k8s/helm$ kubectl get pods -n kube-system
...
tiller-deploy-66495b8df5-8thn2             1/1     Running   0          32s

# Check the helm and tiller versions
k8s@node01:~/install_k8s/helm$ helm version
Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}

Because the Tiller image is hosted on gcr.io, you may need your own proxy so that helm init can pull the image through it while other local traffic bypasses the proxy. Two environment variables handle this:

$ export HTTPS_PROXY='http://x.x.x.x:PORT'  # proxy address
$ export NO_PROXY='127.0.0.0/8,192.168.101.0/24' # networks that bypass the proxy

Using Helm

Official chart repositories can be browsed at https://hub.kubeapps.com/ and https://hub.helm.sh/.

# Update the repositories
k8s@node01:~/install_k8s/helm$ helm repo update  
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
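After an update, the configured repositories can be listed to confirm that the stable and local repos registered by helm init are present (a sketch; output abbreviated):

```shell
k8s@node01:~/install_k8s/helm$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts
```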

Common helm commands:
  release management:
    install  install a chart as a release
    delete --purge  delete a release and free its name for reuse
    upgrade/rollback  upgrade or roll back a release
    list     list installed releases
    history  show a release's revision history
    status   show a release's status

  chart management:
    create   scaffold a new chart
    fetch    download a chart from a repository to the local machine (add --untar to unpack it)
    get      download a named release's manifest and information
    inspect  show detailed information about a chart
    verify   verify a signed chart's integrity and provenance
    package  package a chart directory into an archive
    lint     check a custom chart for syntax and best-practice problems
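The release-management commands above chain into a typical lifecycle; the release name mc1 and the values file myvalues.yaml are illustrative:

```shell
helm install stable/memcached --name mc1            # create the release
helm list                                           # confirm it is deployed
helm upgrade mc1 stable/memcached -f myvalues.yaml  # apply changed values
helm history mc1                                    # list revisions
helm rollback mc1 1                                 # return to revision 1
helm delete mc1 --purge                             # remove it and free the name
```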

Chart

When a chart is installed, it is downloaded and cached under the $HOME/.helm/cache/archive directory.

k8s@node01:~$ ls .helm/cache/archive/
jenkins-2.5.0.tgz  memcached-3.2.3.tgz
k8s@node01:~$ cd .helm/cache/archive/
k8s@node01:~/.helm/cache/archive$ tar xf memcached-3.2.3.tgz
k8s@node01:~/.helm/cache/archive$ tree memcached
memcached   # chart name
├── Chart.yaml  # chart metadata describing the chart itself
├── README.md  # the chart's self-description
├── templates   # templates defining the chart's resource manifests, written in Go's template language
│   ├── _helpers.tpl
│   ├── NOTES.txt  # usage notes, shown by helm status
│   ├── pdb.yaml
│   ├── servicemonitor.yaml
│   ├── statefulset.yaml
│   └── svc.yaml
└── values.yaml  # values file supplying the template variables; this is the default one
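As a minimal, hypothetical illustration of how values.yaml feeds the templates (simplified, not copied from the memcached chart), a template and the values backing it might look like:

```yaml
# templates/svc.yaml (simplified, hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-memcached
spec:
  ports:
    - port: {{ .Values.service.port }}  # filled in from the values file

# values.yaml (the default values file)
service:
  port: 11211
```

At install time, helm renders the templates against the values file (plus any -f or --set overrides) before the manifests are handed to Tiller.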

Custom Charts

# Scaffold a custom chart
k8s@node01:~/helm$ pwd
/home/k8s/helm
k8s@node01:~/helm$ helm create mychart
k8s@node01:~/helm$ tree mychart/ # writing charts fluently requires knowing Go's template language
mychart/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 9 files

# Package the chart
k8s@node01:~/helm$ helm package mychart
Successfully packaged chart and saved it to: /home/k8s/helm/mychart-0.1.0.tgz
k8s@node01:~/helm$ ls
mychart  mychart-0.1.0.tgz
# Start the local chart repository server
k8s@node01:~/helm$ helm serve
Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
# Search for the custom chart
k8s@node01:~$ helm search mychart
NAME            CHART VERSION   APP VERSION DESCRIPTION
local/mychart   0.1.0           1.0         A Helm chart for Kubernetes
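The packaged chart can now be installed from the local repository like any other chart (the release name mych1 is illustrative):

```shell
k8s@node01:~$ helm install local/mychart --name mych1
k8s@node01:~$ helm status mych1   # prints the chart's NOTES.txt among other details
```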

Installing EFK with Helm

In EFK here, the F stands not for Filebeat but for Fluentd, which is also a log-collection tool. Search for the charts used below at https://hub.kubeapps.com/charts.

Because the test environment's hardware is too limited to run EFK, the deployment below was not completed and is only described briefly.

Deploying elasticsearch

Reference: https://hub.kubeapps.com/charts/elastic/elasticsearch

k8s@node01:~/helm/efk$ kubectl create ns efk  # create a dedicated namespace

k8s@node01:~/helm/efk/elasticsearch$ helm repo add elastic https://helm.elastic.co
k8s@node01:~/helm/efk/elasticsearch$ helm fetch elastic/elasticsearch --version 7.8.1
k8s@node01:~/helm/efk/elasticsearch$ helm install elastic/elasticsearch --version 7.8.1 --name es1 -f values.yaml --namespace efk

Deploying fluentd

fluentd runs as a DaemonSet on every node, collecting all log files under /var/log/ and writing them into elasticsearch.

https://hub.kubeapps.com/charts/gitlab/fluentd-elasticsearch

helm repo add gitlab https://charts.gitlab.io/
helm fetch gitlab/fluentd-elasticsearch --version 6.2.4
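An install step analogous to the elasticsearch one would look roughly like this (the release name flu1 is illustrative; chart-specific values tuning is omitted):

```shell
helm install gitlab/fluentd-elasticsearch --version 6.2.4 --name flu1 --namespace efk
```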

Deploying kibana

The kibana version must match the elasticsearch version.

https://hub.kubeapps.com/charts/elastic/kibana

helm repo add elastic https://helm.elastic.co
helm fetch elastic/kibana --version 7.8.1
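An install sketch: the elasticsearchHosts value must point at the service created by the elasticsearch chart, and the hostname below assumes that chart's default service name:

```shell
helm install elastic/kibana --version 7.8.1 --name kb1 --namespace efk \
  --set elasticsearchHosts=http://elasticsearch-master:9200
```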

Test pod: the cirros image works well as a test image, since it bundles common command-line tools:

$ kubectl run cirror-$RANDOM --rm -it --image=cirros -- /bin/sh
