Example: Creating a Project with kubebuilder

Original post: https://blog.csdn.net/u012986012/article/details/119710511

kubebuilder is an official toolkit for building Operators quickly: it generates the scaffolding for Kubernetes CRDs, Controllers, and Webhooks, so users only need to implement the business logic.

A similar tool is operator-sdk, which is currently being merged with kubebuilder.

kubebuilder wraps controller-runtime and controller-tools, and generates code via controller-gen, simplifying the steps needed to create an Operator.

The general workflow for creating an Operator is:

  1. Create a working directory and initialize the project
  2. Create the API and fill in its fields
  3. Create the Controller and write the core reconciliation logic (Reconcile)
  4. Create the Webhook and implement its interface (optional)
  5. Verify and test
  6. Deploy to the cluster

Example

We are going to build a 2048 game that can be exposed as a service and scaled up and down easily.

Environment Setup

First, you need working Kubernetes, Docker, and Golang environments.
Install kubebuilder on Linux:

curl -L -o kubebuilder https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/
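
Initialize the project and scaffold the API. The flags below are reconstructed from the names used later in this post (module mygame, group myapp, domain qingwave.github.io, kind Game), so treat them as assumptions:

    kubebuilder init --domain qingwave.github.io --repo github.com/qingwave/mygame
    kubebuilder create api --group myapp --version v1 --kind Game

kubebuilder create api asks whether to generate the resource and the controller: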

    Create Resource [y/n]
    y # generate the CR
    Create Controller [y/n]
    y # generate the Controller

The generated directory structure looks like this:

    ├── api
    │   └── v1 # CRD (API type) definitions
    ├── bin
    │   └── controller-gen
    ├── config
    │   ├── crd # CRD manifests
    │   ├── default
    │   ├── manager # operator deployment manifests
    │   ├── prometheus
    │   ├── rbac
    │   └── samples # sample CRs
    ├── controllers
    │   ├── game_controller.go # controller logic
    │   └── suite_test.go
    ├── Dockerfile
    ├── go.mod
    ├── go.sum
    ├── hack
    │   └── boilerplate.go.txt # license header template
    ├── main.go # project entry point
    ├── Makefile
    └── PROJECT # project metadata
    

Writing the API

Define the fields we need in mygame/api/v1/game_types.go.

The Spec is defined as follows:

    type GameSpec struct {
    	// Number of desired pods. This is a pointer to distinguish between explicit
    	// zero and not specified. Defaults to 1.
    	// +optional
    	//+kubebuilder:default:=1 
    	//+kubebuilder:validation:Minimum:=1
    	Replicas *int32 `json:"replicas,omitempty" protobuf:"varint,1,opt,name=replicas"`
    	// Docker image name
    	// +optional
    	Image string `json:"image,omitempty"`
    	// Ingress Host name
    	Host string `json:"host,omitempty"`
    }
    

The kubebuilder:default marker sets a default value.

The Status is defined as follows:

    const (
    	Running  = "Running"
    	Pending  = "Pending"
    	NotReady = "NotReady"
    	Failed   = "Failed"
    )
    type GameStatus struct {
    	// Phase is the phase of the game
    	Phase string `json:"phase,omitempty"`
    	// Replicas is the number of Pods created by the controller.
    	Replicas int32 `json:"replicas"`
    	// ReadyReplicas is the number of Pods created by the controller that have a Ready Condition.
    	ReadyReplicas int32 `json:"readyReplicas"`
    	// LabelSelector is the label selector for pods matching the replica count; it is used by the HPA.
    	LabelSelector string `json:"labelSelector,omitempty"`
    }
    

We also need to add the scale subresource via a marker on the Game type:

    //+kubebuilder:subresource:scale:specpath=.spec.replicas,statuspath=.status.replicas,selectorpath=.status.labelSelector
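
The PHASE/HOST/DESIRED/CURRENT/READY columns shown by kubectl later in this post come from printcolumn markers; a sketch of what they might look like on the Game type:

    //+kubebuilder:printcolumn:name="Phase",type="string",JSONPath=".status.phase"
    //+kubebuilder:printcolumn:name="Host",type="string",JSONPath=".spec.host"
    //+kubebuilder:printcolumn:name="Desired",type="integer",JSONPath=".spec.replicas"
    //+kubebuilder:printcolumn:name="Current",type="integer",JSONPath=".status.replicas"
    //+kubebuilder:printcolumn:name="Ready",type="integer",JSONPath=".status.readyReplicas"

After editing the types or markers, regenerate the deepcopy code and manifests with the standard scaffold targets:

    make generate
    make manifests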
    

Writing the Controller Logic

The core logic of a controller lives in Reconcile; we only need to fill in our business logic:

    func (r *GameReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    	logger := log.FromContext(ctx)
    	logger.Info("receive reconcile event", "name", req.String())
    	// fetch the Game object
    	game := &myappv1.Game{}
    	if err := r.Get(ctx, req.NamespacedName, game); err != nil {
    		return ctrl.Result{}, client.IgnoreNotFound(err)
    	}
    	// skip objects that are being deleted
    	if game.DeletionTimestamp != nil {
    		logger.Info("game in deleting", "name", req.String())
    		return ctrl.Result{}, nil
    	}
    	// sync resources: create the deployment, ingress and service if they
    	// do not exist, and update the status
    	if err := r.syncGame(ctx, game); err != nil {
    		logger.Error(err, "failed to sync game", "name", req.String())
    		// return the error so the request is requeued and retried
    		return ctrl.Result{}, err
    	}
    	return ctrl.Result{}, nil
    }
      

Add the RBAC markers:

    //+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
    //+kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get;update;patch
    //+kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
    //+kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete
      

The syncGame logic is as follows:

    func (r *GameReconciler) syncGame(ctx context.Context, obj *myappv1.Game) error {
    	logger := log.FromContext(ctx)
    	game := obj.DeepCopy()
    	name := types.NamespacedName{
    		Namespace: game.Namespace,
    		Name:      game.Name,
    	}
    	// build the owner reference pointing back to the Game
    	owner := []metav1.OwnerReference{
    		{
    			APIVersion:         game.APIVersion,
    			Kind:               game.Kind,
    			Name:               game.Name,
    			Controller:         pointer.BoolPtr(true),
    			BlockOwnerDeletion: pointer.BoolPtr(true),
    			UID:                game.UID,
    		},
    	}
    	labels := game.Labels
    	if labels == nil { // guard against a nil map before writing to it
    		labels = map[string]string{}
    	}
    	labels[gameLabelName] = game.Name
    	meta := metav1.ObjectMeta{
    		Name:            game.Name,
    		Namespace:       game.Namespace,
    		Labels:          labels,
    		OwnerReferences: owner,
    	}
    	// fetch the corresponding deployment; create it if it does not exist
    	deploy := &appsv1.Deployment{}
    	if err := r.Get(ctx, name, deploy); err != nil {
    		if !errors.IsNotFound(err) {
    			return err
    		}
    		deploy = &appsv1.Deployment{
    			ObjectMeta: meta,
    			Spec:       getDeploymentSpec(game, labels),
    		}
    		if err := r.Create(ctx, deploy); err != nil {
    			return err
    		}
    		logger.Info("create deployment success", "name", name.String())
    	} else {
    		// if it exists, compare the desired spec generated from the game
    		// with the current one and update on mismatch
    		want := getDeploymentSpec(game, labels)
    		get := getSpecFromDeployment(deploy)
    		if !reflect.DeepEqual(want, get) {
    			deploy = &appsv1.Deployment{
    				ObjectMeta: meta,
    				Spec:       want,
    			}
    			if err := r.Update(ctx, deploy); err != nil {
    				return err
    			}
    			logger.Info("update deployment success", "name", name.String())
    		}
    	}
    	// create the service
    	svc := &corev1.Service{}
    	if err := r.Get(ctx, name, svc); err != nil {
    		...
    	}
    	// create the ingress
    	ing := &networkingv1.Ingress{}
    	if err := r.Get(ctx, name, ing); err != nil {
    		...
    	}
    	newStatus := myappv1.GameStatus{
    		Replicas:      *game.Spec.Replicas,
    		ReadyReplicas: deploy.Status.ReadyReplicas,
    	}
    	if newStatus.Replicas == newStatus.ReadyReplicas {
    		newStatus.Phase = myappv1.Running
    	} else {
    		newStatus.Phase = myappv1.NotReady
    	}
    	// update the status if it changed
    	if !reflect.DeepEqual(game.Status, newStatus) {
    		game.Status = newStatus
    		logger.Info("update game status", "name", name.String())
    		return r.Client.Status().Update(ctx, game)
    	}
    	return nil
    }
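
The helpers getDeploymentSpec and getSpecFromDeployment are referenced above but not shown in the original post. A minimal sketch, assuming the game serves on container port 80 and is selected by the labels built above:

    func getDeploymentSpec(game *myappv1.Game, labels map[string]string) appsv1.DeploymentSpec {
    	return appsv1.DeploymentSpec{
    		Replicas: game.Spec.Replicas,
    		Selector: &metav1.LabelSelector{MatchLabels: labels},
    		Template: corev1.PodTemplateSpec{
    			ObjectMeta: metav1.ObjectMeta{Labels: labels},
    			Spec: corev1.PodSpec{
    				Containers: []corev1.Container{{
    					Name:  "game",
    					Image: game.Spec.Image,
    					Ports: []corev1.ContainerPort{{ContainerPort: 80}},
    				}},
    			},
    		},
    	}
    }

    // getSpecFromDeployment mirrors getDeploymentSpec: it extracts only the
    // fields we manage from the live deployment so the two specs can be
    // compared with reflect.DeepEqual.
    func getSpecFromDeployment(deploy *appsv1.Deployment) appsv1.DeploymentSpec {
    	c := deploy.Spec.Template.Spec.Containers[0]
    	return appsv1.DeploymentSpec{
    		Replicas: deploy.Spec.Replicas,
    		Selector: deploy.Spec.Selector,
    		Template: corev1.PodTemplateSpec{
    			ObjectMeta: metav1.ObjectMeta{Labels: deploy.Spec.Template.Labels},
    			Spec: corev1.PodSpec{
    				Containers: []corev1.Container{{
    					Name:  c.Name,
    					Image: c.Image,
    					Ports: c.Ports,
    				}},
    			},
    		},
    	}
    }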
      

By default, the generated controller only watches the custom resource. In this example we also need to watch the Game's child resources, e.g. the deployment, so we can detect whether it still matches the desired state:

    // SetupWithManager sets up the controller with the Manager.
    func (r *GameReconciler) SetupWithManager(mgr ctrl.Manager) error {
    	// create the controller
    	c, err := controller.New("game-controller", mgr, controller.Options{
    		Reconciler:              r,
    		MaxConcurrentReconciles: 3, // number of concurrent workers
    	})
    	if err != nil {
    		return err
    	}
    	// watch the custom resource
    	if err := c.Watch(&source.Kind{Type: &myappv1.Game{}}, &handler.EnqueueRequestForObject{}); err != nil {
    		return err
    	}
    	// watch deployments and enqueue the owner, i.e. the game's namespace/name
    	if err := c.Watch(&source.Kind{Type: &appsv1.Deployment{}}, &handler.EnqueueRequestForOwner{
    		OwnerType:    &myappv1.Game{},
    		IsController: true,
    	}); err != nil {
    		return err
    	}
    	return nil
    }
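
The default scaffold expresses the same setup with controller-runtime's builder API; an equivalent sketch for the controller-runtime version of that era (v0.9.x assumed):

    func (r *GameReconciler) SetupWithManager(mgr ctrl.Manager) error {
    	return ctrl.NewControllerManagedBy(mgr).
    		For(&myappv1.Game{}).       // watch the custom resource
    		Owns(&appsv1.Deployment{}). // watch deployments owned by a Game
    		WithOptions(controller.Options{MaxConcurrentReconciles: 3}).
    		Complete(r)
    }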
      

Deploy and Verify

Install the CRDs:

    make install
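
Run the controller (locally, against your current kubeconfig) and create a sample CR. The sample file name follows the scaffold's group_version_kind convention and is an assumption here:

    make run    # in one terminal
    kubectl apply -f config/samples/myapp_v1_game.yaml    # in another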
      
    # check the deployment
    kubectl get deploy game-sample
    NAME          READY   UP-TO-DATE   AVAILABLE   AGE
    game-sample   1/1     1            1           6m

    # check the ingress
    kubectl get ing game-sample
    NAME          CLASS    HOSTS       ADDRESS        PORTS   AGE
    game-sample   <none>   mygame.io   192.168.49.2   80      7m

To verify the application, add "<Ingress ADDRESS IP> mygame.io" to /etc/hosts, then open mygame.io in a browser; you should see the 2048 game.
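
You can also check from the command line without editing /etc/hosts by sending the Host header directly (assuming the ingress serves plain HTTP on the address above):

    curl -H 'Host: mygame.io' http://192.168.49.2/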

Verify scaling:

    kubectl scale games.myapp.qingwave.github.io game-sample --replicas 2
    game.myapp.qingwave.github.io/game-sample scaled

    # after scaling
    kubectl get games.myapp.qingwave.github.io
    NAME          PHASE     HOST        DESIRED   CURRENT   READY   AGE
    game-sample   Running   mygame.io   2         2         2       7m
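
Thanks to the scale subresource marker, the API server also exposes a /scale endpoint for Game objects, which is what kubectl scale and the HPA talk to. You can inspect it directly (namespace default assumed):

    kubectl get --raw /apis/myapp.qingwave.github.io/v1/namespaces/default/games/game-sample/scale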

Writing the Webhook

Validating webhooks are implemented by filling in the ValidateCreate and ValidateUpdate methods on the Game type.
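
The webhook scaffolding itself is generated by the kubebuilder CLI; the flags below are assumptions chosen to match this project:

    kubebuilder create webhook --group myapp --version v1 --kind Game --programmatic-validation

ValidateCreate, for example, can reject hosts containing wildcards: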

    func (r *Game) ValidateCreate() error {
    	gamelog.Info("validate create", "name", r.Name)
    	// the host must not contain wildcard characters
    	if strings.Contains(r.Spec.Host, "*") {
    		return errors.New("host should not contain *")
    	}
    	return nil
    }
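
ValidateUpdate has the same shape; a minimal sketch that reuses the check (old is the previous object, a runtime.Object from k8s.io/apimachinery):

    func (r *Game) ValidateUpdate(old runtime.Object) error {
    	gamelog.Info("validate update", "name", r.Name)
    	// apply the same rule on updates
    	if strings.Contains(r.Spec.Host, "*") {
    		return errors.New("host should not contain *")
    	}
    	return nil
    }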
          

Verifying webhooks locally requires certificates to be configured; it is easier to test inside a cluster, see the official documentation [1].

Summary

At this point we have implemented a fully functional game-operator: it manages the lifecycle of Game resources, automatically creates the deployment, service, and ingress when a Game is created or updated, automatically restores the deployment to the desired state when it is deleted or modified by mistake, and implements the scale subresource.

kubebuilder greatly reduces the cost of developing an Operator: we only need to care about the business logic and no longer have to create clients and controllers by hand. At the same time, the generated code hides many details, such as the controller's maximum worker count, the resync period, and the work-queue type; only by understanding how Operators work under the hood can you apply them well in production.
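
For reference, those knobs live in controller-runtime's option structs; a sketch with illustrative values (controller-runtime v0.9.x assumed; newGameController is a hypothetical helper):

    import (
    	"time"

    	"k8s.io/client-go/util/workqueue"
    	ctrl "sigs.k8s.io/controller-runtime"
    	"sigs.k8s.io/controller-runtime/pkg/controller"
    )

    func newGameController(r *GameReconciler) error {
    	// full resync interval of the shared informer cache
    	syncPeriod := 10 * time.Hour
    	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    		SyncPeriod: &syncPeriod,
    	})
    	if err != nil {
    		return err
    	}
    	_, err = controller.New("game-controller", mgr, controller.Options{
    		Reconciler:              r,
    		MaxConcurrentReconciles: 3,                                        // worker count
    		RateLimiter:             workqueue.DefaultControllerRateLimiter(), // work-queue rate limiting
    	})
    	return err
    }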

References

[1] https://book.kubebuilder.io/
