README
Resource Interpreter Webhook
This document uses a resource interpreter webhook example to show how the feature is used. In the example, we process a CustomResourceDefinition (CRD) resource named Workload. Users can implement their own resource interpreter webhook component based on their own business, taking karmada-interpreter-webhook-example as a reference.
Document introduction
examples/customresourceinterpreter/
│
├── apis/                                  # API Definition
│   ├── workload/                          # `Workload` API Definition
│   │   └── v1alpha1/                      # `Workload` v1alpha1 version API Definition
│   │       ├── doc.go                     # API Package Introduction
│   │       ├── workload_types.go          # example `Workload` API Definition
│   │       ├── zz_generated.deepcopy.go   # generated by `deepcopy-gen`
│   │       └── zz_generated.register.go   # generated by `register-gen`
│   └── workload.example.io_workloads.yaml # `Workload` CustomResourceDefinition, generated by `controller-gen crd`
│
├── webhook/ # demo for `karmada-interpreter-webhook-example` component
│
├── karmada-interpreter-webhook-example.yaml # component deployment configuration file
├── README.md # README file
├── webhook-configuration.yaml # ResourceInterpreterWebhookConfiguration configuration file
├── workload-sample.yaml # `Workload` resource example
└── workload-propagationpolicy.yaml # `PropagationPolicy` resource example to propagate `Workload` resource
Install
For a Karmada instance, the cluster where the Karmada components are deployed is called the karmada-host cluster.
This document uses a Karmada instance installed by hack/local-up-karmada.sh as an example, which provides two contexts, karmada-host and karmada-apiserver, and three member clusters named member1, member2 and member3.
Note: If you use another installation method, please adapt the following steps accordingly.
Prerequisites
Considering that there may be Pull mode clusters among the member clusters, it is necessary to set up a LoadBalancer type Service for karmada-interpreter-webhook-example so that all clusters can access the resource interpreter webhook service. In this document, we deploy MetalLB to expose the webhook service.
If all your clusters are Push mode clusters, you can access the webhook service in the karmada-host cluster through a Service without deploying MetalLB.
Please run the following script to deploy MetalLB.
kubectl --context="karmada-host" get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl --context="karmada-host" apply -n kube-system -f -
curl https://raw.githubusercontent.com/metallb/metallb/v0.13.5/config/manifests/metallb-native.yaml -k | \
sed '0,/args:/s//args:\n - --webhook-mode=disabled/' | \
sed '/apiVersion: admissionregistration/,$d' | \
kubectl --context="karmada-host" apply -f -
export interpreter_webhook_example_service_external_ip_address=$(kubectl config view --template='{{range $_, $value := .clusters }}{{if eq $value.name "karmada-apiserver"}}{{$value.cluster.server}}{{end}}{{end}}' | \
awk -F/ '{print $3}' | \
sed 's/:.*//' | \
awk -F. '{printf "%s.%s.%s.8",$1,$2,$3}')
cat <<EOF | kubectl --context="karmada-host" apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: metallb-config
namespace: metallb-system
spec:
addresses:
- ${interpreter_webhook_example_service_external_ip_address}-${interpreter_webhook_example_service_external_ip_address}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: metallb-advertisement
namespace: metallb-system
EOF
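The export line above derives the MetalLB address pool from the karmada-apiserver address by keeping the first three octets of the host IP and fixing the last octet to 8. A minimal sketch of that transform, assuming a hypothetical kind node address of 172.18.0.2:

```shell
#!/usr/bin/env sh
# Hypothetical server URL; in the script above it is read from the kubeconfig.
server="https://172.18.0.2:6443"

# Keep the host part (field 3 when split on '/'), then strip the port.
host=$(printf '%s' "$server" | awk -F/ '{print $3}' | sed 's/:.*//')

# Replace the last octet with 8 to pick a nearby address for MetalLB.
webhook_ip=$(printf '%s' "$host" | awk -F. '{printf "%s.%s.%s.8",$1,$2,$3}')

echo "$webhook_ip"   # 172.18.0.8
```

This assumes the derived address is unused on the node network; if .8 collides in your environment, pick a different free last octet.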
Deploy karmada-interpreter-webhook-example
Step 1: Install Workload CRD
Install Workload CRD in karmada-apiserver by running the following command:
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply --server-side -f examples/customresourceinterpreter/apis/workload.example.io_workloads.yaml
Then create a ClusterPropagationPolicy resource to propagate the Workload CRD to all member clusters:
workload-crd-cpp.yaml
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
name: workload-crd-cpp
spec:
resourceSelectors:
- apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
name: workloads.workload.example.io
placement:
clusterAffinity:
clusterNames:
- member1
- member2
- member3
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f workload-crd-cpp.yaml
Step 2: Deploy webhook configuration in karmada-apiserver
We can tell Karmada how to access the resource interpreter webhook service by configuring ResourceInterpreterWebhookConfiguration. The configuration template is as follows:
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterWebhookConfiguration
metadata:
name: examples
webhooks:
- name: workloads.example.com
rules:
- operations: [ "InterpretReplica","ReviseReplica","Retain","AggregateStatus", "InterpretHealth", "InterpretStatus", "InterpretDependency" ]
apiGroups: [ "workload.example.io" ]
apiVersions: [ "v1alpha1" ]
kinds: [ "Workload" ]
clientConfig:
url: https://{{karmada-interpreter-webhook-example-svc-address}}:443/interpreter-workload
caBundle: {{caBundle}}
interpreterContextVersions: [ "v1alpha1" ]
timeoutSeconds: 3
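The {{caBundle}} placeholder expects the CA certificate base64-encoded as a single line. A minimal sketch of producing that value, assuming the ca.crt path created by hack/local-up-karmada.sh:

```shell
#!/usr/bin/env sh
# Assumed CA location from hack/local-up-karmada.sh; adjust for your installation.
CA_FILE="${HOME}/.karmada/ca.crt"

# base64 may wrap long output; strip newlines so the value fits on one YAML line.
CA_B64=$(base64 < "$CA_FILE" | tr -d '\n')
echo "$CA_B64"
```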
If you only need to access the resource interpreter webhook service from within the karmada-host cluster, you can configure clientConfig directly with the in-cluster Service domain name:
clientConfig:
url: https://karmada-interpreter-webhook-example.karmada-system.svc:443/interpreter-workload
caBundle: {{caBundle}}
Alternatively, you can declare a service reference in clientConfig:
clientConfig:
caBundle: {{caBundle}}
service:
namespace: karmada-system
name: karmada-interpreter-webhook-example
port: 443
path: /interpreter-workload
You can deploy an ExternalName type Service in karmada-apiserver:
apiVersion: v1
kind: Service
metadata:
name: karmada-interpreter-webhook-example
namespace: karmada-system
spec:
type: ExternalName
externalName: karmada-interpreter-webhook-example.karmada-system.svc.cluster.local
Alternatively, you can skip deploying any Service in karmada-apiserver; the address then falls back to the standard Kubernetes Service DNS name format: https://karmada-interpreter-webhook-example.karmada-system.svc:443/interpreter-workload.
For the example in this document, you can run the following script directly to deploy the ResourceInterpreterWebhookConfiguration:
webhook-configuration.sh
#!/usr/bin/env bash
set -euo pipefail
CA_FILE="${HOME}/.karmada/ca.crt"
KUBECONFIG="${HOME}/.kube/karmada.config"
TEMPLATE="examples/customresourceinterpreter/webhook-configuration.yaml"
# basic checks
if [[ ! -f "$CA_FILE" ]]; then
echo "ERROR: CA file not found: $CA_FILE" >&2
exit 2
fi
if [[ ! -f "$TEMPLATE" ]]; then
echo "ERROR: Template not found: $TEMPLATE" >&2
exit 2
fi
if ! command -v kubectl >/dev/null 2>&1; then
echo "ERROR: kubectl not found in PATH" >&2
exit 2
fi
TMPFILE="$(mktemp /tmp/interpreter-webhook-config-sample-XXX.yaml)"
trap 'rm -f "$TMPFILE" "${TMPFILE}.bak"' EXIT
# single-line base64 (portable)
CA_B64=$(base64 < "$CA_FILE" | tr -d '\n')
# read karmada-apiserver server from kubeconfig (remove port)
CLUSTER_SERVER=$(kubectl --kubeconfig "$KUBECONFIG" config view --template='{{range $_, $value := .clusters }}{{if eq $value.name "karmada-apiserver"}}{{$value.cluster.server}}{{end}}{{end}}' 2>/dev/null || true)
if [[ -z "${CLUSTER_SERVER:-}" ]]; then
echo "ERROR: cannot find karmada-apiserver.cluster.server in kubeconfig" >&2
exit 3
fi
HOST=$(printf "%s" "$CLUSTER_SERVER" | awk -F/ '{print $3}' | sed 's/:.*$//')
# if HOST looks like IPv4, change last octet to 8; otherwise keep host as-is
if [[ "$HOST" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
WEBHOOK_IP=$(printf "%s" "$HOST" | awk -F. '{printf "%s.%s.%s.8",$1,$2,$3}')
else
WEBHOOK_IP="$HOST"
fi
# prepare and replace. use '|' delimiter to avoid '/' conflict with base64.
cp "$TEMPLATE" "$TMPFILE"
sed -i.bak \
-e "s|{{caBundle}}|${CA_B64}|g" \
-e "s|{{karmada-interpreter-webhook-example-svc-address}}|${WEBHOOK_IP}|g" \
"$TMPFILE"
rm -f "${TMPFILE}.bak"
echo "----- YAML content begin -----"
cat "$TMPFILE"
echo "----- YAML content end -----"
# apply
kubectl --kubeconfig "$KUBECONFIG" --context karmada-apiserver apply -f "$TMPFILE"
echo "Applied $TMPFILE"
chmod +x webhook-configuration.sh
./webhook-configuration.sh
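The substitution step in the script uses '|' as the sed delimiter because base64 output contains '/', which would break the usual 's/.../.../' form. A minimal sketch with a hypothetical template line and value:

```shell
#!/usr/bin/env sh
# Hypothetical template line and a base64 value containing '/'.
template='caBundle: {{caBundle}}'
ca_b64='YWJj/ZGVm+Zw=='

# '|' as the delimiter avoids clashing with '/' inside the base64 string.
printf '%s\n' "$template" | sed "s|{{caBundle}}|${ca_b64}|g"
# → caBundle: YWJj/ZGVm+Zw==
```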
Step 3: Deploy karmada-interpreter-webhook-example in karmada-host
Run the following command:
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-host apply -f examples/customresourceinterpreter/karmada-interpreter-webhook-example.yaml
Note:
karmada-interpreter-webhook-example is just a demo for testing and reference. If you plan to use the interpreter webhook, please implement specific components based on your business needs.
In the current example, the interpreter webhook is deployed in the karmada-system namespace. If you deploy the interpreter webhook in a namespace other than the default karmada-system, use the Service's domain address in the URL, for example (taking the test namespace as an example):
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterWebhookConfiguration
metadata:
name: examples
webhooks:
- name: workloads.example.com
rules:
- operations: [ "InterpretReplica","ReviseReplica","Retain","AggregateStatus", "InterpretHealth", "InterpretStatus", "InterpretDependency" ]
apiGroups: [ "workload.example.io" ]
apiVersions: [ "v1alpha1" ]
kinds: [ "Workload" ]
clientConfig:
url: https://karmada-interpreter-webhook-example.test.svc.cluster.local:443/interpreter-workload # domain address here
caBundle: {{caBundle}}
interpreterContextVersions: [ "v1alpha1" ]
timeoutSeconds: 3
Please set up the certificate correctly and add the domain address to the certificate's CN (or SAN) field.
In Karmada's testing environment, this is handled by the script hack/deploy-karmada.sh.
We recommend that you deploy the interpreter webhook component and Karmada control plane components in the same namespace. If you need to deploy them in different namespaces, please plan ahead when generating certificates.
The relevant problem description has been recorded in #4478, please refer to it.
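To check whether a certificate actually covers the webhook's domain address, you can inspect its SAN entries. A minimal sketch that generates a throwaway self-signed certificate for the hypothetical test-namespace address and inspects it (requires OpenSSL 1.1.1+ for -addext):

```shell
#!/usr/bin/env sh
# Throwaway self-signed certificate whose SAN covers the webhook address in
# the hypothetical `test` namespace; not the certificate Karmada generates.
DOMAIN="karmada-interpreter-webhook-example.test.svc.cluster.local"

openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/webhook-key.pem -out /tmp/webhook-cert.pem \
  -subj "/CN=${DOMAIN}" \
  -addext "subjectAltName=DNS:${DOMAIN}"

# Confirm the SAN field contains the domain used in the webhook URL.
openssl x509 -in /tmp/webhook-cert.pem -noout -text | grep "DNS:"
```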
At this point, you have successfully installed the karmada-interpreter-webhook-example service and can start using it.
Usage
Propagate the Workload resource to the member clusters and verify the interpretation:
Create the Workload CR:
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f examples/customresourceinterpreter/workload-sample.yaml
Create the PropagationPolicy to propagate the workload:
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f examples/customresourceinterpreter/workload-propagationpolicy.yaml
InterpretReplica
You can get ResourceBinding to check if the replicas field is interpreted successfully.
kubectl get rb nginx-workload -o yaml
ReviseReplica
You can check if the replicas field of Workload object is revised to 1 in all member clusters.
kubectl --kubeconfig $HOME/.kube/members.config --context member1 get workload nginx --template='{{.spec.replicas}}'
Retain
Update spec.paused of Workload object in member1 cluster to true.
kubectl --kubeconfig $HOME/.kube/members.config --context member1 patch workload nginx --type='json' -p='[{"op": "replace", "path": "/spec/paused", "value":true}]'
Check if it is retained successfully.
kubectl --kubeconfig $HOME/.kube/members.config --context member1 get workload nginx --template='{{.spec.paused}}'
InterpretStatus
There is no Workload controller deployed on member clusters, so in order to simulate the Workload CR handling,
we will manually update status.readyReplicas of Workload object in member1 cluster to 1.
kubectl proxy --kubeconfig $HOME/.kube/members.config --context member1 --port=8001 &
curl http://127.0.0.1:8001/apis/workload.example.io/v1alpha1/namespaces/default/workloads/nginx/status -XPATCH -d '{"status":{"readyReplicas": 1}}' -H "Content-Type: application/merge-patch+json"
Then you can get ResourceBinding to check if the status.aggregatedStatus[x].status field is interpreted successfully.
kubectl get rb nginx-workload --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver -o yaml
You can also check the status.manifestStatuses[x].status field of Karmada Work object in namespace karmada-es-member1.
InterpretHealth
You can get ResourceBinding to check if the status.aggregatedStatus[x].health field is interpreted successfully.
kubectl get rb nginx-workload --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver -o yaml
You can also check the status.manifestStatuses[x].health field of Karmada Work object in namespace karmada-es-member1.
AggregateStatus
You can check if the status field of Workload object is aggregated correctly.
kubectl get workload nginx --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver -o yaml
Note: If you want to use the Retain/InterpretStatus/InterpretHealth functions in a Pull mode cluster, you need to deploy karmada-interpreter-webhook-example in that Pull mode cluster.
InterpretComponent
To test the InterpretComponent operation, first ensure the MultiplePodTemplatesScheduling feature gate is enabled for the karmada-controller-manager.
Next, update the ResourceInterpreterWebhookConfiguration to include the InterpretComponent operation. You can do this by editing the resource directly:
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver edit resourceinterpreterwebhookconfiguration examples
Ensure the operations array in the webhook rule includes InterpretComponent:
- operations: [ "InterpretReplica", "InterpretComponent", "ReviseReplica", "Retain", "AggregateStatus", "InterpretHealth", "InterpretStatus", "InterpretDependency" ]
After updating the webhook configuration, you need to trigger a reconciliation for the Workload resource to ensure the components field is populated. You can do this by modifying a field in the Workload specification, such as spec.replicas:
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver patch workload nginx --type='json' -p='[{"op": "replace", "path": "/spec/replicas", "value":5}]'
Once the resource is reconciled, Karmada will call the webhook for the InterpretComponent operation. You can verify that the components field in the ResourceBinding is interpreted correctly by inspecting the resource:
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get rb nginx-workload -o yaml
Note: When InterpretComponent is defined for a resource, it takes precedence over InterpretReplica. As a result, the replicas and replicaRequirements fields will not be interpreted by the InterpretReplica operation.
Directories

| Path | Synopsis |
|---|---|
| apis | |
| workload/v1alpha1 | Package v1alpha1 is the v1alpha1 version of the API. |