
· 15 min read

Introduction

The KCL team is pleased to announce that KCL v0.6.0 is now available! This release brings three key updates: language, tooling, and integrations.

  • Use the KCL language, tools, and IDE extensions, now with more complete features and fewer errors, to write code more efficiently and comfortably.
  • Use kpm, the OCI registry, and other tools to consume and share your cloud-native domain models directly, reducing learning and hands-on costs.
  • Use cloud-native integrations such as the Helmfile KCL plugin and the KCL Operator to mutate and validate Kubernetes resources in place on both the client side and at runtime, avoiding hardcoded configurations.

You can visit the KCL release page or the KCL website to get the KCL binary download links and more detailed release notes.

KCL is an open-source, constraint-based record and functional language. KCL improves the writing of numerous complex configurations, such as cloud-native scenarios, through its mature programming language technology and practice. It is dedicated to building better modularity, scalability, and stability around configurations, simpler logic writing, faster automation, and great built-in or API-driven integrations.

This blog will introduce the content of KCL v0.6.0 and recent developments in the KCL community to readers.

Language

🔧 Type system enhancement

Support automatic type inference for KCL configuration block attributes. Prior to KCL v0.6.0, the key1 and key2 attributes in the code snippet below would both be inferred as str | int. The updated version infers more precise types for configuration attributes, so key1 and key2 now get their specific types.

config = {
    key1 = "value1"
    key2 = 2
}
key1 = config.key1  # The type of key1 is str
key2 = config.key2  # The type of key2 is int

In addition, we have optimized error messages for schema semantic checking, union type checking, and type checking errors in system library functions.

For more information, please refer to here

🏄 API Update

KCL Schema model parsing: The GetSchemaType API is used to retrieve KCL package-related information and default values for schema attributes.

🐞 Bug Fixes

Fix schema required/optional attribute check in KCL

In previous versions, KCL's required/optional attribute check missed attributes of schemas nested inside list and dict types. KCL v0.6.0 fixes this class of issues.

schema S:
    a: int
    b: str

schema L:
    # In previous versions, the required attribute check for attributes 'a' and 'b'
    # of S in [S] and {str:S} would be missed.
    # This issue has been fixed in KCL v0.6.0 and later versions.
    ss?: [S]
    sss?: {str:S}

l = L {
    ss = [S {b = "b"}]
}

For more information, please see here

IDE & Toolchain Updates

IDE Updates

Features

  • Performance improvements for all IDE features
  • Support variable and schema attribute completion for KCL packages
  • Support hover for KCL schema attribute documentation
  • Support quick fixes for unused import statements

ide-quick-fix

  • Support right-click formatting of files and code fragments in VS Code.

ide-format

  • Support hover for built-in functions and function information in system libraries

ide-func-hover

IDE Extension Updates

We have integrated the KCL language server into NeoVim and IntelliJ IDEA, bringing the completion, navigation, and hover features already available in the VS Code extension to both IDEs.

  • NeoVim KCL Extension

kcl.nvim

  • IntelliJ Extension

intellij

For more information on downloading, installation, and features of the IDE plugins, please refer to:

KCL Formatting Tool Updates

Support formatting of configuration blocks with incorrect indentation

  • Before formatting
config = {
a ={
x = 1
y =2
}
b = {
x = 1
y = 2
}
}
  • After formatting
config = {
    a = {
        x = 1
        y = 2
    }
    b = {
        x = 1
        y = 2
    }
}
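A minimal way to apply the formatter, assuming the unified kcl CLI; older distributions ship it as a standalone kcl-fmt binary instead.

# Format a single KCL file in place; the subcommand name may differ between versions.
kcl fmt config.k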

KCL Documentation Tool Updates

  • Support for exporting Markdown documents
  • Support for exporting document index pages
  • Support for exporting documents with custom style templates
  • Support for HTML escaping in exported documents
  • Enhanced document generation to parse and render example code snippets in document comments
  • By tracking model updates in a GitHub workflow and regenerating the documentation, the docs can be kept in sync automatically. Please refer to here for more details.

Generate model documentation from a kpm package

  1. Create a kpm package and add a documentation comment (docstring) to the Service model. The documentation can include explanations, example code, and usage instructions to help other developers get started quickly and use it correctly.

➜ kpm init demo

➜ cat > demo/server.k << EOF
schema Service:
    """
    Service is a kind of workload profile that describes how to run your application code. This
    is typically used for long-running web applications that should "never" go down, and handle
    short-lived latency-sensitive web requests, or events.

    Attributes
    ----------
    workloadType : str = "Deployment" | "StatefulSet", default is Deployment, required.
        workloadType represents the type of workload used by this Service. Currently, it supports several
        types, including Deployment and StatefulSet.
    image : str, default is Undefined, required.
        Image refers to the Docker image name to run for this container.
        More info: https://kubernetes.io/docs/concepts/containers/images
    replica : int, default is 2, required.
        Number of container replicas based on this configuration that should be run.

    Examples
    --------
    # Instantiate a long-running service and its image is "nginx:v1"

    svc = Service {
        workloadType: "Deployment"
        image: "nginx:v1"
        replica: 2
    }
    """
    workloadType: "Deployment" | "StatefulSet"
    image: str
    replica: int
EOF

  2. Generate the package documentation in Markdown format

The following command will output the demo package documentation to the doc/ directory in the current working directory:

kcl-go doc generate --file-path demo

docgen

For more usage details, run kcl-go doc generate -h to see the help information.

Automatic synchronization of documents through CI pipelines

By tracking model updates in a GitHub workflow and regenerating the documentation, automatic synchronization of the documentation can be achieved. You can refer to the approach in the KusionStack/catalog repo, which generates the documentation and automatically submits change PRs to the documentation repository.

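As a rough illustration only, a GitHub Actions workflow along the following lines could regenerate the documentation whenever the models change and open a PR with the result; the repository layout, trigger paths, and the doc-generation step are assumptions to adapt to your setup.

name: sync-model-docs
on:
  push:
    paths:
      - "models/**"   # hypothetical location of the KCL models
jobs:
  regenerate-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Regenerate the Markdown docs from the KCL package; how the doc tool is
      # installed and invoked depends on your KCL tooling version.
      - name: Generate docs
        run: kcl-go doc generate --file-path models
      # Open a PR with the regenerated documentation.
      - uses: peter-evans/create-pull-request@v5
        with:
          title: "docs: regenerate KCL model documentation"
          branch: chore/sync-model-docs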

KCL Import Tool Updates

The KCL import tool already supports converting Protobuf, JSON Schema, OpenAPI models, and Go structs to KCL schemas; it now also supports converting Terraform provider schemas. Take, for example, the following Terraform provider JSON (obtained with the command terraform providers schema -json > provider.json; for more details, see https://developer.hashicorp.com/terraform/cli/commands/providers/schema):

{
  "format_version": "0.2",
  "provider_schemas": {
    "registry.terraform.io/aliyun/alicloud": {
      "provider": {
        "version": 0,
        "block": {
          "attributes": {},
          "block_types": {},
          "description_kind": "plain"
        }
      },
      "resource_schemas": {
        "alicloud_db_instance": {
          "version": 0,
          "block": {
            "attributes": {
              "db_instance_type": {
                "type": "string",
                "description_kind": "plain",
                "computed": true
              },
              "engine": {
                "type": "string",
                "description_kind": "plain",
                "required": true
              },
              "security_group_ids": {
                "type": ["set", "string"],
                "description_kind": "plain",
                "optional": true,
                "computed": true
              },
              "security_ips": {
                "type": ["set", "string"],
                "description_kind": "plain",
                "optional": true,
                "computed": true
              },
              "tags": {
                "type": ["map", "string"],
                "description_kind": "plain",
                "optional": true
              }
            },
            "block_types": {},
            "description_kind": "plain"
          }
        },
        "alicloud_config_rule": {
          "version": 0,
          "block": {
            "attributes": {
              "compliance": {
                "type": [
                  "list",
                  [
                    "object",
                    {
                      "compliance_type": "string",
                      "count": "number"
                    }
                  ]
                ],
                "description_kind": "plain",
                "computed": true
              },
              "resource_types_scope": {
                "type": ["list", "string"],
                "description_kind": "plain",
                "optional": true,
                "computed": true
              }
            }
          }
        }
      },
      "data_source_schemas": {}
    }
  }
}
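For illustration, converting this file could then look roughly like the following; the exact subcommand and flags are an assumption here and depend on your KCL tooling version.

# Hypothetical invocation of the import tool in Terraform mode.
kcl import -m terraform provider.json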

Then the tool can output the following KCL code

"""
This file was generated by the KCL auto-gen tool. DO NOT EDIT.
Editing this file might prove futile when you re-run the KCL auto-gen generate command.
"""

schema AlicloudConfigRule:
"""
AlicloudConfigRule

Attributes
----------
compliance: [ComplianceObject], optional
resource_types_scope: [str], optional
"""

compliance?: [ComplianceObject]
resource_types_scope?: [str]

schema ComplianceObject:
"""
ComplianceObject

Attributes
----------
compliance_type: str, optional
count: int, optional
"""

compliance_type?: str
count?: int

schema AlicloudDbInstance:
"""
AlicloudDbInstance

Attributes
----------
db_instance_type: str, optional
engine: str, required
security_group_ids: [str], optional
security_ips: [str], optional
tags: {str:str}, optional
"""

db_instance_type?: str
engine: str
security_group_ids?: [str]
security_ips?: [str]
tags?: {str:str}

check:
isunique(security_group_ids)
isunique(security_ips)
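As a quick usage sketch with hypothetical values, the generated schema can then be instantiated like any other KCL schema, with the check block enforcing that the ID lists contain no duplicates:

# Hypothetical instantiation of the generated AlicloudDbInstance schema.
instance = AlicloudDbInstance {
    engine = "MySQL"
    security_group_ids = ["sg-1", "sg-2"]
    security_ips = ["10.0.0.1", "10.0.0.2"]
    tags = {"env" = "dev"}
}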

Package Management Tool Updates

kpm pull supports pulling packages by package name

kpm supports pulling the corresponding package by using the kpm pull <package_name>:<package_version> command.

Taking the k8s package as an example, you can directly download the package to your local machine using the following commands:

kpm pull k8s

or

kpm pull k8s:1.27

The package downloaded with kpm pull will be saved in the directory <execution_directory>/<oci_registry>/<package_name>. For example, if you use the default kpm registry and run the kpm pull k8s command, you can find the downloaded content in the directory <execution_directory>/ghcr.io/kcl-lang/k8s.

$ tree ghcr.io/kcl-lang/k8s -L 1

ghcr.io/kcl-lang/k8s
├── api
├── apiextensions_apiserver
├── apimachinery
├── kcl.mod
├── kcl.mod.lock
├── kube_aggregator
└── vendor

6 directories, 2 files

kpm supports adding local paths as dependencies

"Different projects have different KCL packages, and there are dependencies between them. However, they are stored in different directories. I hope that these packages stored in different directories can be managed together, rather than having to put them together for them to compile." If you also have this need, you can try this feature. The kpm add command currently supports adding local paths as dependencies to a KCL package. You just need to run the command kpm add <local_package_path>, and your local package will be added as a third-party library dependency to the current package.

kpm pull k8s

After completion, you can find the downloaded k8s package in the directory "the_directory_where_you_executed_the_command/ghcr.io/kcl-lang/k8s". Create a new KCL package using the kpm init mynginx command.

kpm init mynginx

Then, navigate into this package.

cd mynginx

Inside this package, you can use the kpm add command to add the k8s package you downloaded locally as a third-party library dependency to mynginx.

kpm add ../ghcr.io/kcl-lang/k8s/

Next, add the following content to main.k.

import k8s.api.core.v1 as k8core

k8core.Pod {
    metadata.name = "web-app"
    spec.containers = [{
        name = "main-container"
        image = "nginx"
        ports = [{containerPort: 80}]
    }]
}

You can then compile the package as usual with the kpm run command.

kpm run

kpm adds a check for existing package tags

To avoid packages with the same tag having different content, kpm push now checks for duplicate tags. If the version of the KCL package you are pushing already exists in the registry, the push is rejected and you will see the following message:

kpm: package 'my_package' will be pushed.
kpm: package version '0.1.0' already exists

Modifying the content of a package that has already been pushed to the registry without changing the tag carries a high risk, as the package may already be in use by others. Therefore, if you need to push your package, we recommend:

  • Change your tag and follow semantic versioning conventions (see the kcl.mod sketch below).
  • If you must modify the content of a package without changing the tag, you will need to delete the existing tag from the registry.
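For the first option, bumping the version field in kcl.mod before pushing again avoids the conflict; the field values below are illustrative.

[package]
name = "my_package"
edition = "0.0.1"
version = "0.1.1"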

Integrations

Helmfile KCL Plugin

Helmfile is a declarative specification and tool for deploying Helm Charts. With the Helmfile KCL plugin, you can:

  • Edit or verify Helm Chart through non-invasive hook methods, separating the data and logic parts of Kubernetes configuration management
    • Modify resource labels/annotations, inject sidecar container configuration
    • Use KCL schema to validate resources
    • Define your own abstract application models
  • Maintain multiple environment and tenant configurations elegantly, rather than simply copying and pasting.

Here is a detailed explanation using a simple example. With the Helmfile KCL plugin, you do not need to install any components related to KCL. You only need the latest version of the Helmfile tool on your local device.

We can write a helmfile.yaml file as follows:

repositories:
- name: prometheus-community
  url: https://prometheus-community.github.io/helm-charts

releases:
- name: prom-norbac-ubuntu
  namespace: prometheus
  chart: prometheus-community/prometheus
  set:
  - name: rbac.create
    value: false
  transformers:
  # Use KCL Plugin to mutate or validate Kubernetes manifests.
  - apiVersion: krm.kcl.dev/v1alpha1
    kind: KCLRun
    metadata:
      name: "set-annotation"
      annotations:
        config.kubernetes.io/function: |
          container:
            image: docker.io/kcllang/kustomize-kcl:v0.2.0
    spec:
      source: |
        [resource | {if resource.kind == "Deployment": metadata.annotations: {"managed-by" = "helmfile-kcl"}} for resource in option("resource_list").items]

In the above file, we referenced the Prometheus Helm Chart and injected the managed-by=helmfile-kcl annotation into all Deployment resources of Prometheus with just one line of KCL code. The following command deploys the above configuration to the Kubernetes cluster:

helmfile apply

For more use cases, please refer to https://github.com/kcl-lang/krm-kcl

KCL Operator

KCL Operator provides cluster integration, allowing you to use an access webhook to generate, mutate, or validate resources based on KCL configurations when applying resources to the cluster. The webhook captures create, apply, and edit operations and executes the KCLRun configuration associated with each operation. With the KCL programming language you can:

  • Add labels or annotations based on a condition.
  • Inject a sidecar container into all KRM resources that contain a PodTemplate.
  • Validate all KRM resources using KCL schemas, such as constraints on starting containers in root mode.
  • Generate KRM resources from an abstract model, or combine and use different KRM APIs.

Here is a simple resource annotation mutation example to introduce the usage of the KCL operator.

0. Prerequisites

Prepare a Kubernetes cluster (such as k3d) and the kubectl tool.

1. Install KCL Operator

kubectl apply -f https://raw.githubusercontent.com/kcl-lang/kcl-operator/main/config/all.yaml

Use the following command to observe and wait for the pod status to be Running.

kubectl get po

2. Deploy KCL Annotation Setting Model

kubectl apply -f- << EOF
apiVersion: krm.kcl.dev/v1alpha1
kind: KCLRun
metadata:
  name: set-annotation
spec:
  # Set the dynamic parameters required by the annotation modification model; here we add the annotations we want to modify/add
  params:
    annotations:
      managed-by: kcl-operator
  # Reference the annotation modification model on OCI
  source: oci://ghcr.io/kcl-lang/set-annotation
EOF

3. Deploy a Pod to Verify the Model Result

Execute the following command to deploy a Pod resource:

kubectl apply -f- << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
EOF
kubectl get po nginx -o yaml | grep kcl-operator

We can see the following output:

    managed-by: kcl-operator

We can see that the annotation managed-by=kcl-operator was automatically added to the Nginx Pod.

Besides referencing an existing model in the source field of the KCLRun resource, we can also write KCL code directly in the source field to achieve the same effect. For example:

apiVersion: krm.kcl.dev/v1alpha1
kind: KCLRun
metadata:
  name: set-annotation
spec:
  params:
    annotations:
      managed-by: kcl-operator
  # Resource modification can be achieved with just one line of KCL code
  source: |
    items = [item | {metadata.annotations: option("params").annotations} for item in option("items")]

registry

We have provided more than 30 built-in models, and you can find more code examples in the following link: https://github.com/kcl-lang/krm-kcl/tree/main/examples

For example:

  • Use the web-service model to directly instantiate the Kubernetes resources required for a web application
  • Add annotations to existing Kubernetes resources using the set-annotation model
  • Use the https-only model to verify that your Ingress configuration only allows HTTPS, otherwise an error is reported (see the sketch below)
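For instance, a client-side KCLRun resource along these lines could reference the https-only model for validation; the OCI path is assumed to follow the same convention as the other built-in models.

apiVersion: krm.kcl.dev/v1alpha1
kind: KCLRun
metadata:
  name: https-only
spec:
  # Validate that Ingress resources only allow HTTPS; the model path is an assumption.
  source: oci://ghcr.io/kcl-lang/https-only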

Vault Integration

In just three steps, we can use Vault to store and manage sensitive information and use it in KCL.

First, install Vault and use it to store the foo and bar secrets:

vault kv put secret/foo foo=foo
vault kv put secret/bar bar=bar

Then write the following KCL code (main.k):

apiVersion = "apps/v1"
kind = "Deployment"
metadata = {
name = "nginx"
labels.app = "nginx"
annotations: {
"secret-store": "vault"
# Valid format:
# "ref+vault://PATH/TO/KV_BACKEND#/KEY"
"foo": "ref+vault://secret/foo#/foo"
"bar": "ref+vault://secret/bar#/bar"
}
}
spec = {
replicas = 3
selector.matchLabels = metadata.labels
template.metadata.labels = metadata.labels
template.spec.containers = [
{
name = metadata.name
image = "${metadata.name}:1.14.2"
ports = [{ containerPort = 80 }]
}
]
}

Finally, the decrypted configuration can be obtained through the vals command-line tool

kcl main.k | vals eval -f -

For more details and use cases, please refer to here

GitLab CI Integration

With KCL, we can not only use GitHub Actions as the CI for publishing applications through GitOps; this version also provides a GitLab CI integration. Please refer to: https://kcl-lang.io/docs/user_docs/guides/ci-integration/gitlab-ci
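As a minimal sketch of such a pipeline (the image name and job layout are assumptions; follow the linked guide for the official setup), a .gitlab-ci.yml could render the KCL configuration into plain Kubernetes YAML as a build artifact:

stages:
  - render

render-config:
  stage: render
  image: kcllang/kcl        # assumed KCL container image
  script:
    # Compile the KCL configuration into Kubernetes YAML, as in the Vault example above.
    - kcl main.k > manifests.yaml
  artifacts:
    paths:
      - manifests.yaml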

Other Updates and Bug Fixes

See here for more updates and bug fixes.

Documents

Versioned documentation has been added to the KCL website; v0.4.x, v0.5.x, and v0.6.0 are currently supported.

Community

  • Thanks to @jakezhu9 for contributing to the KCL import tool, which converts Terraform provider schemas, JSON Schema, JSON, YAML, and other configuration formats/data to KCL schemas/configurations 🙌
  • Thanks to @xxmao123 for her contribution to connecting the KCL language server to the IntelliJ IDEA extension 🙌
  • Thanks to @starkers for his contribution to the KCL NeoVim extension 🙌
  • Thanks to @starkers for adding KCL installation support to the mason.nvim registry 🙌
  • Thanks to @Ekko for his contribution to the KCL cloud-native tool integrations and the KCL Operator 🙌
  • Thanks to @prahalaramji for upgrading and updating the KCL Homebrew installation scripts 🙌
  • Thanks to @yyxhero for providing assistance and support for the Helmfile KCL plugin 🙌
  • Thanks to @nkabir, @mihaigalos, @prahalaramji, @yamin-oanda, @dhhopen, @magick93, @MirKml, @kolloch, @steeling, and others for their valuable feedback and discussion during the past two months of using KCL 🙌

Additional Resources

For more information, see KCL FAQ.

Resources

Thanks to all KCL users for their valuable feedback and suggestions during this release. For more resources, please refer to:

See the community for ways to join us. 👏👏👏

· 14 min read

Introduction

With the development of cloud-native technologies, we are increasingly shifting towards cloud infrastructure. Tools such as Kubernetes and Terraform have become increasingly popular for managing and deploying applications based on cloud APIs. However, along with this popularity, the complexity and cognitive burden associated with these tools have reached a peak in recent years.

cognitive-loading

The increasing complexity of Kubernetes and cloud APIs can be attributed to several factors:

  • Growing functionality and capabilities: Kubernetes and cloud APIs continuously evolve to meet the growing demands of applications and the development of cloud computing. To fulfill user requirements, new functionalities and capabilities such as auto-scaling, service discovery, load balancing, and permission management are constantly introduced. The introduction of these new features adds to the complexity of the system. Despite the availability of various automation methods, over time, the exponential growth in the number of different resource types, potential settings for these resource types, and the complex relationships between them pose challenges.
  • Complex configuration and management requirements: As applications scale, configuring and managing Kubernetes and cloud APIs becomes increasingly complex. This includes managing a large number of container instances and resources, configuring intricate networking and storage setups, ensuring high availability and load balancing, and repetitive configurations for different environments and topologies. These complex configuration and management requirements contribute to the overall complexity of Kubernetes and cloud APIs, sometimes leading to the emergence of "YAML engineers" as a joking reference.

Despite the increasing complexity of Kubernetes and cloud APIs, they provide powerful functionalities and flexibility to help organizations effectively manage and scale their applications. By utilizing appropriate tools, engineering practices, and methods, this complexity can be alleviated, enabling efficient utilization of these technologies to meet business needs. Dynamic configuration management techniques are one of the approaches that can help address the complexity of Kubernetes and cloud APIs to a certain extent.

What is Dynamic Configuration Management

We follow the original definition of Dynamic Configuration Management (DCM) by its creator Chris Stephenson and extend it to a certain extent.

Dynamic Configuration Management (DCM) is an approach to constructing the configuration of computational workloads. Developers create workload specifications that describe everything necessary for the workload to run successfully. These specifications are used to dynamically generate configurations for deploying the workload in specific environments. With DCM, developers do not need to define or maintain environment-specific configurations for their workloads.

Dynamic Configuration Management means that developers describe the relationship between their workloads and resources in an abstract and environment-agnostic manner. They describe this relationship in a format called a workload specification or application-centric specification. The specification is generic and works across environments, which means it doesn't provide enough information to configure the workloads and resources themselves. To obtain executable configurations, we need to apply the specification to configuration baselines (for application and infrastructure configurations) and generate them based on the deployment context. For developers, Dynamic Configuration Management requires providing an application-centric configuration interface that they can understand. For platform engineers, Dynamic Configuration Management can help define how to handle configuration resources and workload specifications, leading to improved consistency and compliance within the organization. It also makes it easier to provide a self-service Internal Developer Platform (IDP).

In contrast to dynamic configuration management, static configuration management brings about a series of issues:

  • Configuration Bloat: Most static configurations require separate configurations for each environment, and in the worst case, it can introduce difficult-to-debug errors involving cross-linking between environments, leading to poor stability and scalability.
  • Configuration Drift: Managing application and infrastructure configurations for different environments with static management approaches often lacks standardized methods for handling dynamic variations in configurations across environments. Employing non-standardized approaches can result in exponential complexity and lead to configuration drift.

Why Do We Need a New Paradigm

Just as music is recorded on musical sheets and time-series data is stored in time-series databases, a set of configuration and policy languages is used for writing and managing scalable and complex configurations and policies in the specific problem domain of platform engineering. Unlike high-level general-purpose languages with mixed paradigms and mixed engineering capabilities, these domain-specific languages are built upon a converged, finite syntax and semantic set to tackle the near-infinite variations and complexities of the domain's problems, consolidating the mindset and approach to writing complex configurations and policies into the language's features.

For cloud APIs, we can leverage Infrastructure as Code (IaC) tools like Terraform to obtain a vast library of pre-written module configurations. However, when it comes to Kubernetes, there is still a lack of lightweight client-side configuration composition and abstraction solutions. Existing solutions or specifications struggle to strike a balance between abstraction capabilities and scalability. In extreme scenarios, developers often resort to writing glue code and scripts to handle configuration processing logic, which hinders stability and efficiency. For instance, although tools like Helm Chart provide template programming, the programming experience and efficiency are subpar. Additionally, when users need to dynamically adjust dependencies on values.yaml parameters in Helm Charts or encounter upstream Helm Charts that don't support the required parameter settings, they often resort to forking and maintaining the upstream Helm Charts in an intrusive manner or supplementing with tools like Kustomize, incurring additional maintenance costs and complexities.

Therefore, we contemplate the need for a unified specification that can simultaneously support configuration language capabilities and, with minimal side effects, fulfill the requirements of stability, scalability, and efficiency in the context of scalable configuration scenarios. This specification should also address the problems inherent in static configuration management.

  • Abstraction and Composition Capabilities: By providing a programmable schema, the platform personnel can shield the underlying infrastructure details and platform, while offering developers a better and more stable API abstraction.
  • Stability: By providing stable features out-of-the-box, such as rule writing and static typing, the specification enables risk mitigation and early error detection in version control systems (VCS), allowing for auditing, traceability, and rollback, and facilitating automation.
  • Scalability: Similar to the application software supply chain, infrastructure configuration should be treated as part of the software supply chain. Users can distribute and reuse configurations in a standardized manner, easily accessing commonly used configurations in the open-source world. For internal platforms, we can easily write and extend configuration code.
  • High performance: Since dynamic configuration management advocates generating configurations for different environments and topologies based on a baseline configuration and deployment context, there is a high demand for the rendering performance of the configuration code itself. Developers typically do not want to spend several minutes waiting for the actual configuration output, as it would hinder software iteration and upgrade efficiency.

KRM KCL Specification

The KRM KCL specification is a configuration specification based on the Kubernetes Resource Model (KRM). KRM is a universal configuration model used to describe and manage various cloud-native resources such as containers, pods, and services. It provides a unified way to define and manage these resources, enabling portability and reuse across different environments. It exists within the open Kubernetes ecosystem, decoupled from any specific orchestration/engine tools or Kubernetes controllers. It allows platform personnel to extend their abstractions, configuration editing, and validation logic, while also enabling direct reuse of existing model abstractions from the community, such as the Open Application Model (OAM), which also complies with the KRM specification.

The KCL programming language is a core component of the KRM KCL specification. KCL is a declarative configuration language that allows users to describe the configuration requirements of their applications and associate them with underlying infrastructure. KCL has a rich syntax and semantics that can flexibly describe various configuration needs, such as environment variables, resource limits, dependencies, and more. KCL aims to hide the details of infrastructure and platforms by defining API abstractions, reducing the burden, and managing large-scale configurations across teams without side effects. It provides capabilities to write, combine, and abstract configuration blocks, such as structural definitions, constraints, and logic. In platform engineering practices, KCL is not just a key-value language but a domain-specific language tailored for the platform engineering field.
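As a small illustration (the names are invented for this example), a platform team might expose an application-facing abstraction as a KCL schema with built-in constraints and defaults:

# Illustrative application abstraction with a default value and a constraint.
schema WebApp:
    name: str
    image: str
    replicas: int = 2

    check:
        replicas > 0, "replicas must be positive"

app = WebApp {
    name = "demo"
    image = "nginx:1.14.2"
}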

Another important feature of the KRM KCL specification is its support for dynamic configuration management. Traditional configuration management tools often rely on static configuration files that require manual modifications and deployments. The KRM KCL specification naturally provides a way to automatically modify configurations, which can be used standalone on the client-side or integrated with Kubernetes through the KCL Operator to achieve runtime configuration modifications without the need for developing Kubernetes webhooks or writing extensive configuration processing logic.

In addition to dynamic configuration management, the KRM KCL specification offers several other advantages. Firstly, it is based on Kubernetes and seamlessly integrates with the existing Kubernetes ecosystem. Secondly, the KRM KCL specification provides a rich set of tools and libraries that make it easy for developers to create, test, and maintain configurations. Lastly, the KRM KCL specification adopts open standards, allowing for interoperability with other configuration management tools such as Kubectl, Helm, Kustomize, and more. It possesses the following characteristics:

  • Declarative: Configurations are abstracted and organized in code form, allowing users to view and edit them using editors or IDEs. The code clearly describes various aspects of resources, services, networks, and more.
  • Final-state oriented: It is abstracted and implementation-agnostic, with highly abstracted and domain-specific underlying capabilities. The concrete business logic is written by users in a declarative manner. Users can avoid manual operations and non-uniform patterns introduced by private scripts through a unified description code and GitOps workflow, reducing security risks.
  • Stability: Any modification to configuration code can potentially result in unexpected outcomes, exceptions, or failures. By combining version control and the stability features of the language itself, different versions of configuration code can be switched as needed through Git, with auditability, to meet the needs of development, testing, and production stages, for example rolling back to a verified version in case of anomalies. Code-based version control effectively prevents configuration drift.
  • Reusable and extensible: Configuration code often exhibits the characteristic of "write once, reuse multiple times". When combined with dynamically parameterized configuration code, the reuse of configurations in different environments and for different users becomes simple. Through integration with standard software supply chains such as OCI, configuration code is treated equally with business code, better achieving Infrastructure as Code (IaC).

In summary, the KRM KCL specification presents a new paradigm for dynamic configuration management. Built upon KRM and KCL, it offers a more efficient and reliable configuration solution for modern software development. With its dynamic configuration management capabilities, flexible syntax and semantics, and integration with Kubernetes, the KRM KCL specification provides developers with an enhanced configuration management experience in cloud-native application development and microservices architectures.

How to

krm-kcl-form

In the KRM KCL specification, we categorize the behaviors of the KCL configuration model into three main types:

  • Mutation: Takes input KCL parameters params and a KRM list, and outputs a modified KRM list (see the sketch after this list).
  • Validation: Takes input KCL parameters params and a KRM list, and outputs a KRM list and resource validation results.
  • Abstraction: Takes input KCL parameters params and outputs a KRM list.
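For example, a mutation written in this style reads the incoming KRM list and parameters from option(...) and returns the modified list, mirroring the set-annotation example earlier in this post:

# Mutation sketch: merge the annotations passed in params into every incoming resource.
items = [item | {metadata.annotations: option("params").annotations} for item in option("items")]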

Using KCL, we can programmatically achieve the following capabilities:

  • Modify resources using KCL, such as adding/modifying label tags or annotation comments based on certain conditions, or injecting sidecar container configurations into all Kubernetes Resource Model (KRM) resources that contain PodTemplates.
  • Validate all KRM resources using KCL schemas, such as constraints for launching containers only in a root manner.
  • Generate KRM resources using an abstraction model or combine and use different KRM APIs.

Additionally, the configuration model source can reference OCI, Git, filesystem, and the original KCL code. With the help of KCL IDE and KPM package management tool, we can write models and upload them to OCI Registry for model reuse. These models can be used separately on the client-side or at runtime, depending on the specific scenario requirements.

Client-side

We will use a classic web service workload as an example to demonstrate the usage of the KRM KCL specification on the client-side.

In addition, we provide plugin support for Kubernetes community configuration management tools such as Kubectl, Helm, Kustomize, KPT, etc., in a unified programming interface. With just a few lines of configuration code, you can non-invasively edit and validate existing Kustomize YAML files, Helm Charts, define your own abstraction models, and share and reuse them.

Here, we will provide a detailed explanation using the integration of KCL with the Kubectl tool as an example. You can find the installation instructions for the Kubectl KCL plugin here.

First, execute the following command to get a sample configuration:

git clone https://github.com/kcl-lang/kubectl-kcl.git && cd ./kubectl-kcl/examples/

Then, execute the following command to show the configuration:

$ cat krm-kcl-abstraction.yaml

apiVersion: krm.kcl.dev/v1alpha1
kind: KCLRun
metadata:
  name: web-service
spec:
  params:
    name: app
    containers:
      nginx:
        image: nginx
        ports:
        - containerPort: 80
    service:
      ports:
      - port: 80
  source: oci://ghcr.io/kcl-lang/web-service

In the above configuration, we use a pre-defined Kubernetes web service application abstraction model oci://ghcr.io/kcl-lang/web-service on the OCI registry, and configure the required fields for that model through the params field. By executing the following command, you can obtain the raw Kubernetes YAML output and deploy it to the cluster:

$ kubectl kcl apply -f krm-kcl-abstraction.yaml

deployment.apps/app created
service/app created

In addition to using YAML as the user input interface, KCL can be used as a DSL for writing configurations and policies. It allows developers and platform personnel to write and maintain configurations on a large scale using a unified KCL writing style on the client-side.

standalone-kcl-form

Runtime

At runtime, we provide Kubernetes cluster integration through the KCL Operator, allowing you to generate, mutate, or validate resources based on KCL configurations using an access webhook when applying resources to the cluster. The webhook captures create, apply, and edit operations and executes the KCLRun configuration associated with each operation. With the KCL Operator, you can automate resource configuration management and security validation in a lightweight manner within a Kubernetes cluster using KCL code, without needing to develop a webhook server to dynamically modify and validate configurations at runtime.

Additionally, leveraging the modeling and abstraction capabilities of KCL, we can define functional abstractions/compositions for different resource APIs and expose them in the form of KCL schemas. Moreover, KCL schemas can be further automatically generated into OpenAPI schema definitions for other clients in the cluster to call, without the need for manually maintaining complex OpenAPI schema definitions for API abstractions/compositions.

Here is an example of using the KCL Operator to modify resource annotations.

Install the KCL Operator:

kubectl apply -f https://raw.githubusercontent.com/kcl-lang/kcl-operator/main/config/all.yaml

Use the following command to observe and wait for the pod status to be Running:

kubectl get po

Deploy the annotation mutation model:

kubectl apply -f- << EOF
apiVersion: krm.kcl.dev/v1alpha1
kind: KCLRun
metadata:
  name: set-annotation
spec:
  params:
    annotations:
      managed-by: kcl-operator
  source: oci://ghcr.io/kcl-lang/set-annotation
EOF

Deploy a Pod resource to validate the model result:

kubectl apply -f- << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
EOF
kubectl get po nginx -o yaml | grep kcl-operator

We can see the following output:

    managed-by: kcl-operator

We can observe that the annotation managed-by=kcl-operator was automatically added to the Nginx Pod.

Summary

Dynamic configuration management can reduce the complexity of modern cloud-native configurations. With the KRM KCL specification and standard OCI models, we can achieve dynamic configuration management, allowing platform personnel and application developers to easily utilize these settings and reduce cognitive load.

Resources

Reference