
· 9 min read

Introduction

Helm is a tool for generating deployable manifests for Kubernetes objects, and it approaches that task in two distinct forms. On one hand, Helm is an imperative templating tool for managing Kubernetes packages called charts; a chart is a templated version of your YAML manifests with a subset of Go templating mixed throughout. On the other hand, it is a package manager for Kubernetes that can package, configure, and deploy/apply Helm charts onto Kubernetes clusters.

In KCL, the user can directly write the configuration that needs to be modified in the corresponding code in the corresponding place, instead of maintaining template files, and with more tooling and IDE plugin support, which eliminates the cost of reading basic YAML. At the same time, the user can reuse configuration fragments through code, avoiding massive copying and pasting of YAML configuration. The information density is higher, and it is harder to make mistakes with KCL.

A classic example of Helm chart configuration management is used below to explain the differences between Helm and KCL in Kubernetes resource configuration management.

Helm

Helm has the concepts of values.yaml and templates. In general, a Helm chart project is a directory that includes a Chart.yaml:

We can execute the following command line to obtain a typical Helm Chart project.

  • Create a directory named workload-helm to hold the chart project
# Create a directory to hold the chart project
mkdir workload-helm
# Create a workload-helm/Chart.yaml
cat <<EOF > workload-helm/Chart.yaml
apiVersion: v2
appVersion: 0.3.0
description: A helm chart to provision standard workloads.
name: workload
type: application
version: 0.3.0
EOF
# Create a workload-helm/values.yaml
cat <<EOF > workload-helm/values.yaml
service:
  type: ClusterIP
  ports:
    - name: www
      protocol: TCP
      port: 80
      targetPort: 80

containers:
  my-container:
    image:
      name: busybox:latest
    command: ["/bin/echo"]
    args:
      - "-c"
      - "Hello World!"
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
EOF
  • Create a directory to hold templates
# Create a directory to hold templates
mkdir workload-helm/templates
# Create a workload-helm/templates/_helpers.tpl
cat <<EOF > workload-helm/templates/_helpers.tpl
{{/*
Expand the name of the chart.
*/}}
{{- define "workload.name" -}}
{{- default .Release.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "workload.fullname" -}}
{{- \$name := default .Chart.Name .Values.nameOverride }}
{{- if contains \$name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name \$name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "workload.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "workload.labels" -}}
helm.sh/chart: {{ include "workload.chart" . }}
{{ include "workload.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "workload.selectorLabels" -}}
app.kubernetes.io/name: {{ include "workload.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
EOF
cat <<EOF > workload-helm/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "workload.name" . }}
  labels:
    {{- include "workload.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "workload.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "workload.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        {{- range \$name, \$container := .Values.containers }}
        - name: {{ \$name }}
          image: "{{ \$container.image.name }}"
          {{- with \$container.command }}
          command:
            {{- toYaml \$container.command | nindent 12 }}
          {{- end }}
          {{- with \$container.args }}
          args:
            {{- toYaml \$container.args | nindent 12 }}
          {{- end }}
          {{- with \$container.env }}
          env:
            {{- toYaml \$container.env | nindent 12 }}
          {{- end }}
          {{- with \$container.volumeMounts }}
          volumeMounts:
            {{- toYaml \$container.volumeMounts | nindent 12 }}
          {{- end }}
          {{- with \$container.livenessProbe }}
          livenessProbe:
            {{- toYaml \$container.livenessProbe | nindent 12 }}
          {{- end }}
          {{- with \$container.readinessProbe }}
          readinessProbe:
            {{- toYaml \$container.readinessProbe | nindent 12 }}
          {{- end }}
          {{- with \$container.resources }}
          resources:
            {{- toYaml \$container.resources | nindent 12 }}
          {{- end }}
        {{- end }}
EOF
cat <<EOF > workload-helm/templates/service.yaml
{{ if .Values.service }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "workload.name" . }}
  labels:
    {{- include "workload.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  selector:
    {{- include "workload.selectorLabels" . | nindent 4 }}
  {{- with .Values.service.ports }}
  ports:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
EOF

Thus, we can get a basic Helm chart directory

.
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   └── service.yaml
└── values.yaml

We can render the final deployment configuration through the following command.

helm template workload-helm

The output YAML is

---
# Source: workload-helm/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name
  labels:
    helm.sh/chart: workload-0.3.0
    app.kubernetes.io/name: release-name
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: release-name
    app.kubernetes.io/instance: release-name
  ports:
    - name: www
      port: 80
      protocol: TCP
      targetPort: 80
---
# Source: workload-helm/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name
  labels:
    helm.sh/chart: workload-0.3.0
    app.kubernetes.io/name: release-name
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: release-name
        app.kubernetes.io/instance: release-name
    spec:
      containers:
        - name: my-container
          image: "busybox:latest"
          command:
            - /bin/echo
          args:
            - -c
            - Hello World!
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi

KCL

KCL provides a capability similar to Helm's values.yaml: dynamic parameters can be configured through a kcl.yaml configuration file.

We can execute the following command line to obtain a typical KCL project with the kcl.yaml.

  • Create a directory named workload-kcl to hold the KCL project
# Create a directory to hold the KCL project
mkdir workload-kcl
# Create a workload-kcl/kcl.yaml
cat <<EOF > workload-kcl/kcl.yaml
kcl_options:
  - key: containers
    value:
      my-container:
        image:
          name: busybox:latest
        command: ["/bin/echo"]
        args:
          - "-c"
          - "Hello World!"
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi

  - key: service
    value:
      type: ClusterIP
      ports:
        - name: www
          protocol: TCP
          port: 80
          targetPort: 80
EOF
  • Create KCL files to hold the Kubernetes resources.
# Create a workload-kcl/deployment.k
cat <<EOF > workload-kcl/deployment.k
apiVersion = "apps/v1"
kind = "Deployment"
metadata = {
    name = "release-name"
    labels = {
        "app.kubernetes.io/name" = "release-name"
        "app.kubernetes.io/instance" = "release-name"
    }
}
spec = {
    selector.matchLabels = metadata.labels
    template.metadata.labels = metadata.labels
    template.spec.containers = [
        {
            name = n
            image = container.image.name
            command = container.command
            args = container.args
            env = container.env
            resources = container.resources
        } for n, container in option("containers") or {}
    ]
}
EOF
cat <<EOF > workload-kcl/service.k
apiVersion = "v1"
kind = "Service"
metadata = {
    name = "release-name"
    labels = {
        "app.kubernetes.io/name" = "release-name"
        "app.kubernetes.io/instance" = "release-name"
    }
}
spec = {
    selector = metadata.labels
    type = option("service", default={})?.type
    ports = option("service", default={})?.ports
}
EOF

In the above KCL code, we declare the apiVersion, kind, metadata, spec and other attributes of the Kubernetes Deployment and Service resources and assign the corresponding contents. In particular, we reuse metadata.labels for the selector and for spec.template.metadata.labels instead of repeating them. It can be seen that the data structure defined in KCL is more compact than a Helm template or raw YAML, and configuration reuse is achieved simply by referencing local variables.

In KCL, we can dynamically receive external parameters through conditional statements and the option built-in function, and set different configuration values to generate resources.
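
For instance, when a key is not provided, option returns None, which is why the ?. operator and the or {} fallback appear in the code above. A minimal sketch (the variable names here are illustrative only, not part of the project above):

# Illustrative sketch: option() returns None for missing keys,
# so guard attribute access with `?.` or provide a fallback with `or`.
svc = option("service") or {}
svcType = option("service")?.type           # None when `service` was not passed
containers = option("containers") or {}     # empty dict fallback for the comprehension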

We can get the Deployment and Service resources through the following commands:

  • Deployment
kcl workload-kcl/deployment.k -Y workload-kcl/kcl.yaml

The output is

apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name
  labels:
    app.kubernetes.io/name: release-name
    app.kubernetes.io/instance: release-name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: release-name
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: release-name
        app.kubernetes.io/instance: release-name
    spec:
      containers:
        - name: my-container
          image: busybox:latest
          command:
            - /bin/echo
          args:
            - -c
            - Hello World!
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
  • Service
kcl workload-kcl/service.k -Y workload-kcl/kcl.yaml

The output is

apiVersion: v1
kind: Service
metadata:
  name: release-name
  labels:
    app.kubernetes.io/name: release-name
    app.kubernetes.io/instance: release-name
spec:
  selector:
    app.kubernetes.io/name: release-name
    app.kubernetes.io/instance: release-name
  type: ClusterIP
  ports:
    - name: www
      protocol: TCP
      port: 80
      targetPort: 80

In addition, we can override the values in the kcl.yaml file with the -D parameter, for example by executing the following command.

kcl workload-kcl/service.k -Y workload-kcl/kcl.yaml -D service=None

The output is

apiVersion: v1
kind: Service
metadata:
  name: release-name
  labels:
    app.kubernetes.io/name: release-name
    app.kubernetes.io/instance: release-name
spec:
  selector:
    app.kubernetes.io/name: release-name
    app.kubernetes.io/instance: release-name
  type: null
  ports: null

Summary

It can be seen that, compared with Helm, KCL reduces the number of configuration files and lines of code through configuration reuse, overrides, and code generation. Like Helm, it is a pure client-side solution, which shifts configuration and policy validation as far left as possible, without adding extra dependencies or burden to the cluster, and even without a real Kubernetes cluster.

Helm can define reusable templates in .tpl files and lets other templates reference them, but only explicitly defined templates can be reused, so a complex Helm chart project needs many additional helper templates. Compared with this cumbersome style, everything in KCL is a variable: no extra syntax is required to declare templates, and any variable can reference any other.
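
As a rough illustration (the variable names below are made up for this example, not taken from the chart above), a plain KCL variable plays the role of a named template and can be referenced directly:

# Illustrative sketch: plain variables are reusable without define/include pairs.
appName = "release-name"
commonLabels = {
    "app.kubernetes.io/name" = appName
    "app.kubernetes.io/instance" = appName
}
metadata = {
    name = appName
    labels = commonLabels
}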

In addition, Helm templates are full of {{- include }}, nindent and toYaml directives that have nothing to do with the actual logic, and the spaces and indentation have to be calculated at every reference. KCL has far less of this boilerplate: there is no need for {{ ... }} markers around code blocks, the information density is higher, and indentation and whitespace no longer need to be managed by hand.

In fact, KCL and Helm are not antagonistic. We can even use KCL to write HelmRelease templates, or provide programmable extension capabilities for existing Helm charts, for example to write YAML validators.
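
For example, a KCL schema with a check block can act as a validator for workload-style values. This is a hypothetical sketch; the schema name and fields are not part of the chart above:

# Hypothetical sketch: a schema whose check block validates configuration values.
schema WorkloadService:
    type: str = "ClusterIP"
    ports: [{str:any}]

    check:
        type in ["ClusterIP", "NodePort", "LoadBalancer"], "invalid service type"
        len(ports) > 0, "at least one port is required"

svc = WorkloadService {
    type = "ClusterIP"
    ports = [{name = "www", port = 80}]
}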

Future Plan

We also expect that KCL models and constraints can be managed as a package (a package containing only KCL files). For example, Kubernetes models and constraints could be used out of the box: users could generate configurations or validate existing configurations, and could extend the models and constraints they need simply through KCL inheritance.
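
A rough sketch of what extending a base model through inheritance might look like (the schema names here are hypothetical):

# Hypothetical sketch: extend a base model and tighten its constraints.
schema Workload:
    replicas: int = 1

    check:
        replicas >= 1, "replicas must be at least 1"

schema ProdWorkload(Workload):
    # Production workloads additionally require multiple replicas.
    check:
        replicas >= 3, "production workloads need at least 3 replicas"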

At this stage, you can use tools such as Git or OCI Registry As Storage (ORAS) to manage KCL configuration versions.
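
For instance, a KCL configuration directory could be versioned with a Git tag or pushed to an OCI registry with ORAS, roughly as follows (the registry address and tag are placeholders):

# Version the KCL files with a Git tag
git tag v0.3.0 && git push origin v0.3.0
# Or archive and push them to an OCI registry with ORAS (placeholder registry/tag)
tar -czf workload-kcl.tar.gz workload-kcl/
oras push registry.example.com/kcl/workload:0.3.0 workload-kcl.tar.gz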

More Documents

· 6 min read

Introduction

Kustomize provides a solution for customizing base and differentiated configurations of Kubernetes resources without templates. Configurations can be merged or overwritten through file-level YAML with multiple strategies. In Kustomize, users need to know exactly what content needs to change and where it lives, and for deeply nested base YAML it may not be easy to match the target through Kustomize selectors.

In KCL, the user can directly write the configuration that needs to be modified in the corresponding code in the corresponding place, eliminating the cost of reading basic YAML. At the same time, the user can reuse configuration fragments through code, avoiding massive copying and pasting of YAML configuration. The information density is higher, and it is harder to make mistakes with KCL.

A classic example of Kustomize multi-environment configuration management is used to explain the differences between Kustomize and KCL in Kubernetes resource configuration management.

Kustomize

Kustomize has the concepts of base and overlay. In general, both base and overlay are directories containing a kustomization.yaml file, and one base directory can be used by multiple overlay directories.

We can execute the following command line to obtain a typical Kustomize project.

  • Create a base directory and create a deployment resource
# Create a directory to hold the base
mkdir base
# Create a base/deployment.yaml
cat <<EOF > base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ldap
  labels:
    app: ldap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ldap
  template:
    metadata:
      labels:
        app: ldap
    spec:
      containers:
        - name: ldap
          image: osixia/openldap:1.1.11
          args: ["--copy-service"]
          volumeMounts:
            - name: ldap-data
              mountPath: /var/lib/ldap
          ports:
            - containerPort: 389
              name: openldap
      volumes:
        - name: ldap-data
          emptyDir: {}
EOF
# Create a base/kustomization.yaml
cat <<EOF > base/kustomization.yaml
resources:
- deployment.yaml
EOF
  • Create a directory to hold the prod overlay configuration.
# Create a directory to hold the prod overlay
mkdir prod
# Create a prod/deployment.yaml
cat <<EOF > prod/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ldap
spec:
  replicas: 6
  template:
    spec:
      volumes:
        - name: ldap-data
          emptyDir: null
          gcePersistentDisk:
            readOnly: true
            pdName: ldap-persistent-storage
EOF
cat <<EOF > prod/kustomization.yaml
resources:
- ../base
patchesStrategicMerge:
- deployment.yaml
EOF

Thus, we can get a basic Kustomize directory

.
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── prod
    ├── deployment.yaml
    └── kustomization.yaml

The base directory stores the base deployment configuration, and the prod directory stores only the fields that need to be overwritten for the production environment. metadata.name and identifying attributes such as spec.template.spec.volumes[0].name indicate which resource, and which entries inside it, are to be patched.

We can display the real deployment configuration of the prod environment through the following command.

kubectl kustomize ./prod

The output is

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ldap
  name: ldap
spec:
  replicas: 6
  selector:
    matchLabels:
      app: ldap
  template:
    metadata:
      labels:
        app: ldap
    spec:
      containers:
      - args:
        - --copy-service
        image: osixia/openldap:1.1.11
        name: ldap
        ports:
        - containerPort: 389
          name: openldap
        volumeMounts:
        - mountPath: /var/lib/ldap
          name: ldap-data
      volumes:
      - gcePersistentDisk:
          pdName: ldap-persistent-storage
          readOnly: true
        name: ldap-data

We can also directly apply the configuration to the cluster through the following command.

kubectl apply -k ./prod

The output is

deployment.apps/ldap created

KCL

We can write the following KCL code and name it main.k.

apiVersion = "apps/v1"
kind = "Deployment"
metadata = {
name = "ldap"
labels.app = "ldap"
}
spec = {
replicas = 1
# When env is prod, override the `replicas` attribute with `6`
if option("env") == "prod": replicas = 6
# Assign `metadata.labels` to `selector.matchLabels`
selector.matchLabels = metadata.labels
template.metadata.labels = metadata.labels
template.spec.containers = [
{
name = metadata.name
image = "osixia/openldap:1.1.11"
args = ["--copy-service"]
volumeMounts = [{ name = "ldap-data", mountPath = "/var/lib/ldap" }]
ports = [{ containerPort = 80, name = "openldap" }]
}
]
template.spec.volumes = [
{
name = "ldap-data"
emptyDir = {}
# When env is prod
# override the `emptyDir` attribute with `None`
# patch a `gcePersistentDisk` attribute with the value `{readOnly = True, pdName = "ldap-persistent-storage"}`
if option("env") == "prod":
emptyDir = None
gcePersistentDisk = {
readOnly = True
pdName = "ldap-persistent-storage"
}
}
]
}

In the above KCL code, we declare the apiVersion, kind, metadata, spec and other attributes of a Kubernetes Deployment resource and assign the corresponding contents. In particular, we assign metadata.labels to spec.selector.matchLabels and spec.template.metadata.labels. It can be seen that the data structure defined in KCL is more compact than Kustomize files or raw YAML, and configuration reuse is achieved simply by referencing local variables.

In KCL, we can dynamically receive external parameters through conditional statements and the option built-in function, and set different configuration values for different environments to generate resources. In the above code, we write a conditional statement that reads a dynamic parameter named env. When env is prod, we overwrite the replicas attribute from 1 to 6 and adjust the volume named ldap-data: its emptyDir attribute is set to None and a gcePersistentDisk configuration value is added.

We can use the following command to view the diff between the configurations of the different environments.

diff \
<(kcl main.k) \
<(kcl main.k -D env=prod) |\
more

The output is

8c8
< replicas: 1
---
> replicas: 6
30c30,33
< emptyDir: {}
---
> emptyDir: null
> gcePersistentDisk:
> readOnly: true
> pdName: ldap-persistent-storage

It can be seen that the diff between the production environment configuration and the base configuration mainly lies in the attributes of replicas, emptyDir and gcePersistentDisk, which is consistent with the expectation.

In addition, we can use the -o parameter of the KCL command line tool to output the compiled YAML to a file and view the diff between files

# Generate base deployment
kcl main.k -o deployment.yaml
# Generate prod deployment
kcl main.k -o prod-deployment.yaml -D env=prod
# Diff prod deployment and base deployment
diff prod-deployment.yaml deployment.yaml

Of course, we can also use KCL tools together with kubectl and other tools to apply the configuration of the production environment to the cluster

kcl main.k -D env=prod | kubectl apply -f -

The output is

deployment.apps/ldap created

Finally, check the deployment status through kubectl

kubectl get deploy

The output is

NAME   READY   UP-TO-DATE   AVAILABLE   AGE
ldap   0/6     6            0           15s

As can be seen from the command results, the experience is exactly the same as deploying with a Kustomize configuration and kubectl apply directly, with no additional side effects.

Summary

This article briefly introduces how to write complex multi-environment Kubernetes configurations with KCL and compares it with the Kustomize tool for Kubernetes multi-environment configuration management.

It can be seen that, compared with Kustomize, KCL reduces the number of configuration files and lines of code through configuration reuse, overrides, and code generation. Like Kustomize, it is a pure client-side solution, which shifts configuration and policy validation as far left as possible, without adding extra dependencies or burden to the cluster, and even without a real Kubernetes cluster.