
2 posts tagged with "KubeVela"


4 min read

What is KCL

KCL is an open-source, constraint-based record and functional language that enhances the writing of complex configurations, including those for cloud-native scenarios. It is hosted by the Cloud Native Computing Foundation (CNCF) as a Sandbox Project. With advanced programming language technology and practices, KCL is dedicated to promoting better modularity, scalability, and stability for configurations. It enables simpler logic writing and offers easy-to-use automation APIs and integration with homegrown systems.

Several Ways to Deploy KCL Configurations to a Cluster

(Figure: cloud-native tool integration)

Since KCL can output YAML/JSON files, any method that supports deploying YAML/JSON configurations to a cluster can, in principle, be used to deploy KCL configurations. KCL files are usually stored in Git or a module registry so that different roles and teams can share them easily. However, KCL can do much more than that, and the main ways to deploy KCL configurations to a cluster are as follows.

  • Using kubectl: The most basic way to access a Kubernetes cluster is using kubectl. We can deploy the Kubernetes YAML configuration files generated by KCL directly to the cluster with the kubectl apply command. This method is simple and suitable for deploying a small number of resources.
  • Using CI/CD tools: CI/CD tools (such as Jenkins, GitLab CI, CircleCI, ArgoCD, FluxCD, etc.) can be used to automate GitOps-style deployment of Kubernetes YAML configuration files to the cluster. By defining CI/CD pipelines and configuration files, automated building and deployment to the cluster can be achieved.
  • Using tools that support KRM Function specification: Kubernetes Resource Model (KRM) Function allows users to use other languages, including KCL, to enhance YAML template and logic writing capabilities, such as writing conditions, loops, etc. These tools mainly include Kustomize, KPT, Crossplane, etc. Although Helm does not natively support KRM Function, we can combine Helm and Kustomize to achieve it.
  • Using client-side or runtime custom abstract configuration tools for deployment, such as KusionStack and KubeVela. Of course, KCL also allows you to customize your preferred application configuration model.
  • Using the KCL Operator, which supports Kubernetes mutating and validating webhooks, for runtime configuration or policy writing.
  • Using configuration management tools: Combine configuration management tools (such as Puppet, Chef, Ansible, etc.) to automate the deployment of Kubernetes YAML configurations to the cluster. These tools can achieve dynamic configuration deployment by defining KCL templates and variables.

The reasons why KCL supports multiple deployment methods and cloud-native tool integrations are as follows:

  • Flexibility: Different deployment methods are suitable for different scenarios and needs, so providing multiple choices allows users to choose the most suitable way to deploy applications or configurations according to their specific situations.
  • Cloud-native tool ecosystem: Kubernetes is a widely used platform with a large ecosystem of tools and technologies. Supporting multiple deployment methods can provide users with more choices to meet their usage habits and technological preferences.
  • Specifications and standards: The Kubernetes community is working to promote standards and specifications such as OAM, the KRM Function specification, and Helm Charts. By supporting these methods through a unified KRM KCL specification and KCL modules, different specification and standard requirements can be met.
  • Automation and integration: Some deployment methods can be integrated through automation tools and CI/CD pipelines to achieve automated deployment processes. Therefore, providing multiple ways can meet different automation and integration needs.

In conclusion, supporting multiple deployment methods gives users greater flexibility and choice, allowing them to deploy applications or configurations according to their needs and preferences. Detailed guides for each deployment method are linked below:

Using Kubectl

https://kcl-lang.io/blog/2023-11-20-search-k8s-module-on-artifacthub
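
As a minimal sketch of this approach (assuming the current directory contains a KCL module whose entry file renders plain Kubernetes manifests), the YAML produced by kcl run can be piped straight into kubectl, mirroring the pattern used with vela up later in this post:

# render the module to YAML and apply it to the current cluster context
kcl run | kubectl apply -f -
# or preview the result first with a client-side dry run
kcl run | kubectl apply --dry-run=client -f -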

Using CI/CD Tools

https://kcl-lang.io/blog/2023-07-31-kcl-github-argocd-gitops

Using KRM Function

https://kcl-lang.io/blog/2023-10-23-cloud-native-supply-chain-krm-kcl-spec

Using Custom Abstract Configuration Tools

https://kcl-lang.io/blog/2023-12-15-kubevela-integration

Using KCL Operator

https://kcl-lang.io/docs/user_docs/guides/working-with-k8s/mutate-manifests/kcl-operator

Using Configuration Management Tools

https://github.com/kcl-lang/kcl/issues/952

5 min read


Introduction

KubeVela is a modern application delivery system hosted by the CNCF. It is built on the Open Application Model (OAM) specification and aims to abstract away the complexity of Kubernetes, providing a set of simple and easy-to-use command-line tools and APIs so that developers can deploy and operate cloud-native applications without worrying about the underlying details.

KCL is a configuration and policy language for cloud-native scenarios, hosted by the CNCF. It aims to improve the writing of complex configurations, such as cloud-native Kubernetes configurations, using mature programming language techniques and practices. KCL focuses on building better modularity, scalability, and stability around configuration, as well as easier logic writing, automation, and integration with the toolchain.

KCL exists in a completely open cloud-native world and is not tied to any orchestration engine, tool, or Kubernetes controller. It can provide API abstraction, composition, and validation capabilities both on the Kubernetes client side and at runtime.

Users can choose suitable cloud-native tools such as Kubectl, Helm, Kustomize, KPT, KusionStack, KubeVela, Helmfile, Crossplane, or ArgoCD to combine with KCL and apply configurations to the cluster based on their specific scenarios.

(Figure: KCL integration with cloud-native tools)

This blog is the first in a series that explores the efficient deployment and operation of cloud-native applications using KCL and KubeVela together. We will share more advanced usage in future articles, so stay tuned.

Using KCL with KubeVela

Using KCL with KubeVela has the following benefits:

  • Simpler configuration: KCL provides stronger templating capabilities, such as conditions and loops, for KubeVela OAM configurations at the client level, reducing the need for repetitive YAML writing. At the same time, the reuse of KCL model libraries and toolchains enhances the experience and management efficiency of configuration and policy writing.
  • Better maintainability: KCL provides a configuration file structure that is more conducive to version control and team collaboration, instead of relying solely on YAML. When combined with OAM application models written in KCL, application configurations become easier to maintain and iterate.
  • Simplified operations: By combining the simplicity of KCL configurations with the ease of use of KubeVela, daily operational tasks such as deploying, updating, scaling, or rolling back applications can be simplified. Developers can focus more on the applications themselves rather than the tedious details of the deployment process.
  • Improved cross-team collaboration: By using KCL's configuration chunk writing and package management capabilities in conjunction with KubeVela, clearer boundaries can be defined, allowing different teams (such as development, testing, and operations teams) to collaborate systematically. Each team can focus on tasks within their scope of responsibility, delivering, sharing, and reusing their own configurations without worrying about other aspects.

Workflow

(Figure: deployment workflow)

In this example, we deploy the KCL Playground application (written in Go and HTML5) and use KCL to define the OAM configuration to be deployed. The overall workflow is as follows:

  • Application code development produces a Docker image.
  • Write OAM configurations using KCL.
  • Deploy configurations using KubeVela.
  • Verify the running status of the application.

Specific Steps

0. Prerequisites

  • Familiarize yourself with basic Unix/Linux commands.
  • Familiarize yourself with using Git.
  • Understand the basics of Kubernetes.
  • Understand KubeVela.
  • Understand the basics of KCL.

1. Configure the Kubernetes Cluster

Install K3d and create a cluster.

k3d cluster create

Note: In this scenario, you can also use other tools, such as kind or minikube, to create your own Kubernetes cluster.
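
Optionally, you can confirm that the cluster is up and reachable before continuing (a quick sanity check; the exact node names will differ depending on the tool you used):

# verify that kubectl can talk to the new cluster
kubectl cluster-info
kubectl get nodes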

2. Install KubeVela

  • Install the KubeVela CLI.
curl -fsSl https://kubevela.net/script/install.sh | bash
  • Install KubeVela Core.
vela install
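
Optionally, verify the installation (a minimal check; this assumes the default installation into the vela-system namespace):

# check the CLI version
vela version
# check that the KubeVela core controller is running
kubectl get pods -n vela-system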

3. Write OAM Configurations

  • Install KCL.
curl -fsSL https://kcl-lang.io/script/install-cli.sh | /bin/bash
  • Create a new project and add OAM dependencies.
kcl mod init kcl-play-svc && cd kcl-play-svc && kcl mod add oam
  • Write the following code in main.k.
import oam

oam.Application {
    metadata.name = "kcl-play-svc"
    spec.components = [{
        name = metadata.name
        type = "webservice"
        properties = {
            image = "kcllang/kcl:v0.9.0"
            ports = [{port = 80, expose = True}]
            cmd = ["kcl", "play"]
        }
    }]
}

Note: You can find the documentation here: https://artifacthub.io/packages/kcl/kcl-module/oam or view it in the IDE extension.

(Figure: OAM definition hover documentation in the IDE extension)
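
Before deploying, you can preview the OAM Application manifest that KCL renders from main.k (a minimal sketch, run from inside the kcl-play-svc module directory):

# compile the module and print the rendered YAML to stdout
kcl run
# or save it to a file for review
kcl run > app.yaml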

4. Deploy the Application and Verify

  • Apply the configuration.
kcl run | vela up -f -
  • Port forward the service.
vela port-forward kcl-play-svc
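
You can also check the delivery status from the KubeVela side (a small sketch; the application name matches the one defined in main.k):

# list applications and show the detailed status of kcl-play-svc
vela ls
vela status kcl-play-svc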

Then we can see the KCL Playground application running successfully in the browser.

(Figure: the KCL Playground application running in the browser)

Conclusion

Through this guide, we have learned how to deploy cloud-native applications using KubeVela and KCL. In future blogs, we will explain how to further extend the capabilities of KubeVela by using KCL on the client side, for example:

  • Using the inheritance, composition, and validation capabilities of KCL to extend the OAM model and define application abstractions that are better suited to your infrastructure or organization.
  • Using the modular configuration capabilities of KCL to organize OAM multi-environment configurations with conditions, logic, and loops. For example, splitting longer application definitions across different files to reduce boilerplate configuration.
  • Further integration with projects like KusionStack and ArgoCD to achieve better GitOps.
  • Incorporate more cloud-native capabilities or Kubernetes Operators such as KubeBlocks and Crossplane to improve database management and provide programmable access to unified cloud APIs and Kubernetes APIs.
  • And many other use cases...