Zero to JupyterHub with Kubernetes @aliyun

Preface

JupyterHub is a Jupyter management platform that lets multiple users work online at the same time.

Components of a JupyterHub deployment

• A cloud provider such as Google Cloud, Microsoft Azure, Amazon EC2, IBM Cloud, Alibaba Cloud…
• Kubernetes to manage resources on the cloud
• Helm to configure and control the packaged JupyterHub installation
• JupyterHub to give users access to a Jupyter computing environment
• A terminal interface on some operating system

Background

The official JupyterHub documentation does not cover Alibaba Cloud, so this article fills that gap with step-by-step instructions for deploying JupyterHub from scratch on Alibaba Cloud Container Service.
In addition, the GPU sharing solution, CGPU, is used to improve GPU utilization.

Procedure

Create a Kubernetes cluster

  1. Create a Container Service (ACK) cluster
  2. Add GPU nodes
  3. Set the GPU nodes to sharing mode; see 《尝鲜阿里云容器服务Kubernetes 1.16,拥抱GPU新姿势》. A quick check that sharing mode is active is sketched below.
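A hedged way to confirm that a node is in GPU sharing mode, assuming the CGPU components from the article above are installed (<gpu-node-name> is a placeholder): the node's allocatable resources should include aliyun.com/gpu-mem rather than only nvidia.com/gpu.

# Inspect allocatable resources on a GPU node; aliyun.com/gpu-mem should be listed
# once GPU sharing (CGPU) is enabled
kubectl get node <gpu-node-name> -o jsonpath='{.status.allocatable}'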

安装JupyterHub

Install Helm

Alibaba Cloud Container Service currently supports Helm v3 by default.
Find the latest release on https://github.com/helm/helm/releases/, download the helm binary, and add it to a directory on your PATH.
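For example, a minimal sketch assuming a Linux amd64 machine and Helm v3.2.1 (substitute whichever release you downloaded):

# Download a Helm v3 release, unpack it, and place the binary on the PATH
curl -LO https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
tar -zxvf helm-v3.2.1-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version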

Install JupyterHub

  1. Generate a random hex string representing 32 bytes to use as a security token. Run this command in a terminal and copy the output:
openssl rand -hex 32
  2. Create the PVC and StorageClass.
    Create a NAS file system and a mount target; see https://help.aliyun.com/document_detail/144398.html?spm=a2c4g.11186623.6.757.71aa2b8djPh6fE

Save the following as storage.yaml and apply it with `kubectl apply -f storage.yaml`:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-subpath
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  volumeAs: subpath
  server: "0994fd65-66f5.cn-zhangjiakou.extreme.nas.aliyuncs.com:/share"   #需要放自己的nas的挂载点,操作参考 https://help.aliyun.com/document_detail/144398.html?spm=a2c4g.11186623.6.757.71aa2b8djPh6fE
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
#---
#kind: PersistentVolumeClaim
#apiVersion: v1
#metadata:
#  name: hub-db-dir
#  namespace: jhub
#spec:
#  accessModes:
#    - ReadWriteMany
#  storageClassName: alicloud-nas-subpath
#  resources:
#    requests:
#      storage: 1Gi
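After applying the manifest, a quick sanity check (a hedged example) that the StorageClass is available before installing the chart:

# The StorageClass should be listed with provisioner nasplugin.csi.alibabacloud.com
kubectl get storageclass alicloud-nas-subpath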
  3. Modify the original config.yaml file:
    • Replace the image repositories in config.yaml so that images are not pulled from Google's registry, which fails from within Alibaba Cloud.
    • Because chart version 0.8.2 does not work correctly with Kubernetes 1.16, a corresponding patch is applied in the install script; see https://github.com/jupyterhub/kubespawner/issues/354
    • Change the pods' GPU resource requests to CGPU mode, e.g. aliyun.com/gpu-mem: 4; see https://zero-to-jupyterhub.readthedocs.io/en/latest/customizing/user-resources.html

     All of the above modifications are already included in the config.yaml below. Note that proxy.secretToken must be replaced with the value generated in step 1.
    
proxy:
  secretToken: "d8d198d787f22869e67df3ad3ac5f4d99a843c20243f9ed785f77822fb4ce517" ## 该token选用自己在步骤1中生成的即可
prePuller:
  continuous:
    enabled: false
  extraImages: {}
  hook:
    enabled: true
    image:
      name: jupyterhub/k8s-image-awaiter
      tag: 0.8.2
  pause:
    image:
      # replace the default Google-hosted image
      name: registry.cn-zhangjiakou.aliyuncs.com/kubernetesmirror/pause 
      tag: "3.1"
hub:
  #image:
  #  name: jupyterhub/k8s-hub
  #  tag: 0.9.0-beta.3
  #  0.8.2  # try bumping this version for compatibility with Kubernetes 1.16; not yet verified, see https://github.com/jupyterhub/kubespawner/issues/354
  db:
    password: null
    pvc:
      accessModes:
      - ReadWriteOnce
      annotations: {}
      selector: {}
      storage: 1Gi
      storageClassName: alicloud-nas-subpath
singleuser:
  storage:
    capacity: 2Gi
    dynamic:
      pvcNameTemplate: claim-{username}{servername}
      storageAccessModes:
      - ReadWriteOnce
      storageClass: alicloud-nas-subpath
      volumeNameTemplate: volume-{username}{servername}
  profileList:
    - display_name: "CGPU Server" ## 共享GPU使用模式
      description: "Spawns a notebook server with access to a CGPU"
      kubespawner_override:
        extra_resource_limits:
          aliyun.com/gpu-mem: 2
    - display_name: "GPU Server"  ## 普通GPU使用模式
      description: "Spawns a notebook server with access to a GPU"
      kubespawner_override:
        extra_resource_limits:
          nvidia.com/gpu: 1
  image:
    #name: jupyterhub/k8s-singleuser-sample
    #name: tensorflow/tensorflow
    pullPolicy: IfNotPresent
    # replace with whatever image you need; this example uses a TensorFlow-enabled image, and suitable images can be found on the official sites
    name: registry.cn-hangzhou.aliyuncs.com/kubeflow-images-public/tensorflow-notebook
    tag: 1.15.2
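If you prefer not to edit the file by hand, the token substitution can also be scripted; this is just a sketch, assuming GNU sed and that config.yaml still contains the sample secretToken line shown above:

# Replace the sample proxy.secretToken in config.yaml with a freshly generated token
sed -i "s/secretToken: \".*\"/secretToken: \"$(openssl rand -hex 32)\"/" config.yaml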
  4. Run the install script.
    Save the following content as install.sh, put it in the same directory as the config.yaml from step 3, and run the install command:

bash ./install.sh

# Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE  \
  --version=0.8.2 \
  --values config.yaml -v 6
sleep 30
export NAMESPACE=jhub
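# Hotfix for https://github.com/jupyterhub/kubespawner/issues/354
# ('<' not supported between instances of 'NoneType'): patch kubespawner's
# spawner.py inside the hub container and restart the hub with the patched copy.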
kubectl patch deploy -n $NAMESPACE hub --type json --patch '[{"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["bash", "-c", "\nmkdir -p ~/hotfix\ncp -r /usr/local/lib/python3.6/dist-packages/kubespawner ~/hotfix\nls -R ~/hotfix\npatch ~/hotfix/kubespawner/spawner.py << EOT\n72c72\n<             key=lambda x: x.last_timestamp,\n---\n>             key=lambda x: x.last_timestamp and x.last_timestamp.timestamp() or 0.,\nEOT\n\nPYTHONPATH=$HOME/hotfix jupyterhub --config /srv/jupyterhub_config.py --upgrade-db\n"]}]'
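Note that the script assumes the jupyterhub Helm chart repository has already been added and, with Helm 3, that the jhub namespace already exists. If that is not the case, a minimal preparation step looks like this (a sketch, not part of the original script):

# Add the JupyterHub chart repository and create the target namespace
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
kubectl create namespace jhub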

Note: the `kubectl patch deploy ...` command in the script runs 30 seconds after the helm install; be patient and do not terminate the script early.
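To wait until the patched hub deployment has finished rolling out (a hedged example):

# Block until the hub deployment's new pod is ready
kubectl rollout status deploy/hub -n jhub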

Verification and usage

  1. Verify that the pods and services are all in a normal state:
jumper(⎈ |zjk-gpu:jhub)➜  ~ k get pod -n jhub
NAME                     READY   STATUS    RESTARTS   AGE
hub-86d7754c55-jnsd8     1/1     Running   0          10h
proxy-657b654c85-htl62   1/1     Running   0          10h
jumper(⎈ |zjk-gpu:jhub)➜  ~ k get svc -n jhub
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
hub            ClusterIP      172.21.10.208   <none>        8081/TCP                     10h
proxy-api      ClusterIP      172.21.13.171   <none>        8001/TCP                     10h
proxy-public   LoadBalancer   172.21.15.40    47.92.24.78   80:30889/TCP,443:31823/TCP   10h
  2. Access the site via the public IP of the proxy-public service shown above. Since no authentication backend has been configured, any username and password can be used to log in.
  3. The spawn page shows the profiles configured in config.yaml: one requesting ordinary (dedicated) GPU resources and one requesting shared (CGPU) GPU resources.
  4. After a profile is selected, the notebook server pod for the user is created and started.
  5. Check the pods in the jhub namespace; a pod named jupyter-${username} has been created:
jumper(⎈ |zjk-gpu:jhub)➜  54_cgpu_demo git:(master) ✗ k get pod
NAME                     READY   STATUS    RESTARTS   AGE
hub-5ff8cff85f-nmhfl     1/1     Running   0          46h
jupyter-lilong           1/1     Running   0          17s
proxy-657b654c85-8mn6t   1/1     Running   0          46h
  6. At this point, the environment has been set up successfully.

Issues

Problems encountered and notes from the setup process

Issue 1: NoneType

Error message:
[E 2020-05-30 00:24:14.373 JupyterHub base:1011] Preventing implicit spawn for a because last spawn failed: '<' not supported between instances of 'NoneType' and 'NoneType'
Explanation: this is a known issue; see https://github.com/jupyterhub/kubespawner/issues/354
The `kubectl patch deploy ...` command run after installing the helm chart above is the fix for this problem.

Issue 2: Storage

Both the hub and each user's notebook pod need storage. If a pod is stuck in Pending and `kubectl describe pod <pod-name>` reports insufficient storage resources, check the NAS storage configuration: https://help.aliyun.com/document_detail/144398.html?spm=a2c4g.11186623.6.757.71aa2b8djPh6fE
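A hedged example of the commands typically used to diagnose this (jupyter-<username> is a placeholder for the affected pod):

# Check whether the user's PVC is bound, then look at the pod's events
kubectl get pvc -n jhub
kubectl describe pod jupyter-<username> -n jhub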

References

  1. NAS dynamic volumes: https://help.aliyun.com/document_detail/144398.html
  2. Zero to JupyterHub with Kubernetes: https://zero-to-jupyterhub.readthedocs.io/en/latest/
  3. Customizing the user environment: https://zero-to-jupyterhub.readthedocs.io/en/latest/customizing/user-environment.html