Karpenter Provider Linode
PLEASE NOTE: This project is considered ALPHA quality and should NOT be used for production, as it is currently in active development. Use at your own risk. APIs, configuration file formats, and functionality are all subject to change frequently. That said, please try it out in your development and test environments and let us know if it works. Contributions welcome! Thanks!
Features Overview
The LKE Karpenter Provider enables node autoprovisioning using Karpenter on your LKE cluster.
Karpenter improves the efficiency and cost of running workloads on Kubernetes clusters by:
- Watching for pods that the Kubernetes scheduler has marked as unschedulable
- Evaluating scheduling constraints (resource requests, node selectors, affinities, tolerations, and topology spread constraints) requested by the pods
- Provisioning nodes that meet the requirements of the pods
- Removing the nodes when the nodes are no longer needed
Provider Modes
This provider supports two operating modes:
- LKE Mode (Default): Creates LKE Node Pools for each provisioned node. This is the simplest method and recommended for most users.
- Instance Mode: Creates standard Linode Instances. This offers granular control over instance settings (SSH keys, placement groups, etc.) but requires more manual configuration. This is currently in development and not yet fully functional.
See Configuration Documentation for full details on modes and available settings.
Installation
Install these tools before proceeding:
Create a cluster
- Create a new LKE cluster with any number of nodes in any region.
This can be easily done in Linode Cloud Manager or via the Linode CLI.
- Download the cluster's kubeconfig when ready.
The Karpenter Helm chart requires specific configuration values to work with an LKE cluster.
- Create a Linode Personal Access Token (PAT) if you don't already have a LINODE_TOKEN env var set. Karpenter will use this token to manage nodes in the LKE cluster.
- Set the variables:
export CLUSTER_NAME=<cluster name>
export KUBECONFIG=<path to your LKE kubeconfig>
export KARPENTER_NAMESPACE=kube-system
export LINODE_TOKEN=<your api token>
# Optional: specify region explicitly (auto-discovered in LKE mode if not set)
# export LINODE_REGION=<region>
# Optional: Set mode directly (default is lke)
# export KARPENTER_MODE=lke
Note: In LKE mode (default), Karpenter automatically discovers the cluster region from the Linode API using the cluster name. You can optionally set LINODE_REGION to override this behavior.
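As an illustration of that lookup, the snippet below maps a cluster label to its region from JSON in the shape returned by linode-cli lke clusters-list; the sample payload and jq filter here are illustrative only, not the provider's actual implementation.

```shell
# Illustrative only: resolving a region from a cluster label, mirroring the
# auto-discovery described above. Requires jq.
# Sample payload in the shape returned by `linode-cli lke clusters-list --json`:
sample='[{"label":"my-cluster","region":"us-ord"},{"label":"other","region":"eu-west"}]'

# Select the cluster whose label matches and read its region field:
LINODE_REGION=$(printf '%s' "$sample" \
  | jq -r --arg name "my-cluster" '.[] | select(.label == $name) | .region')
echo "$LINODE_REGION"
```

Against a real account, you would pipe `linode-cli lke clusters-list --json` into the same jq filter with `--arg name "$CLUSTER_NAME"`.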
Install Karpenter
Use the configured environment variables to install Karpenter using Helm:
helm upgrade --install --namespace "${KARPENTER_NAMESPACE}" --create-namespace karpenter-crd charts/karpenter-crd
helm upgrade --install --namespace "${KARPENTER_NAMESPACE}" --create-namespace karpenter charts/karpenter \
--set settings.clusterName=${CLUSTER_NAME} \
--set apiToken=${LINODE_TOKEN} \
--wait
Optional Configuration:
- Region: Specify the region explicitly (only required for instance mode):
  --set region=${LINODE_REGION}
- Mode: Choose the operating mode (default is lke):
  - lke: Provisions nodes using LKE NodePools (recommended for LKE clusters)
  - instance: Provisions nodes as direct Linode instances
  --set settings.mode=lke
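Putting the optional flags together, an install in instance mode might look like the following; the flag names come from the commands above, and the values are placeholders:

```shell
# Example: install in instance mode with an explicit region.
# Flag names come from the sections above; values are placeholders.
helm upgrade --install --namespace "${KARPENTER_NAMESPACE}" --create-namespace karpenter charts/karpenter \
  --set settings.clusterName=${CLUSTER_NAME} \
  --set apiToken=${LINODE_TOKEN} \
  --set region=${LINODE_REGION} \
  --set settings.mode=instance \
  --wait
```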
Check that Karpenter deployed successfully:
kubectl get pods --namespace "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter
Check its logs:
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller
Using Karpenter
Create NodePool
A single Karpenter NodePool is capable of handling many different pod shapes. Karpenter makes scheduling and provisioning decisions based on pod attributes such as labels and affinity. In other words, Karpenter eliminates the need to manage many different node groups.
Create a default NodePool using the command below. (Additional examples available in the repository under examples/v1.) The consolidationPolicy set to WhenUnderutilized in the disruption block configures Karpenter to reduce cluster cost by removing and replacing nodes. As a result, consolidation will terminate any empty nodes on the cluster. This behavior can be disabled by setting consolidateAfter to Never, telling Karpenter that it should never consolidate nodes.
Note: This NodePool will create capacity as long as the sum of all created capacity is less than the specified limit.
cat <<EOF | kubectl apply -f -
---
apiVersion: karpenter.k8s.linode/v1alpha1
kind: LinodeNodeClass
metadata:
  name: default
spec:
  image: "linode/ubuntu22.04"
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
      nodeClassRef:
        group: karpenter.k8s.linode
        kind: LinodeNodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
EOF
Karpenter is now active and ready to begin provisioning nodes.
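To confirm the resources were accepted, you can list them back (the lowercase plural resource names below assume the defaults for the CRDs installed by the charts above):

```shell
# List the NodePool and LinodeNodeClass just applied.
kubectl get nodepools
kubectl get linodenodeclasses
```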
Scale up deployment
This deployment uses the pause image and starts with zero replicas.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
          securityContext:
            allowPrivilegeEscalation: false
EOF
kubectl scale deployment inflate --replicas 5
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller
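Besides the controller logs, the provisioning flow can also be watched through the Karpenter resources themselves:

```shell
# Watch NodeClaims being created and nodes joining as the deployment scales up.
kubectl get nodeclaims --watch
```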
Scale down deployment
Now, delete the deployment. After a short amount of time, Karpenter should terminate the empty nodes due to consolidation.
kubectl delete deployment inflate
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller
Delete Karpenter nodes manually
If you delete a node with kubectl, Karpenter will gracefully cordon, drain, and shut down the corresponding instance. Under the hood, Karpenter adds a finalizer to the node object, which blocks deletion until all pods are drained and the instance is terminated. Keep in mind, this only works for nodes provisioned by Karpenter.
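Before deleting, you can inspect the finalizer described above (the karpenter.sh/termination finalizer name comes from upstream Karpenter; confirm what your cluster shows):

```shell
# Show finalizers on a node; Karpenter-managed nodes carry a termination
# finalizer that blocks deletion until draining completes.
kubectl get node $NODE_NAME -o jsonpath='{.metadata.finalizers}'
```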
kubectl delete node $NODE_NAME
Cleanup
Delete the cluster
To avoid additional charges, remove the demo infrastructure from your Linode account.
helm uninstall karpenter --namespace "${KARPENTER_NAMESPACE}"
linode-cli lke cluster-delete --label "${CLUSTER_NAME}"
Known issues
A duplicate NodeClaim (Linode instance) MAY be temporarily provisioned on Linode until Karpenter detects that the original registered successfully. This is because:
- Time from instance creation to that instance actually joining the cluster is SLOW (it can take over 3 minutes, even for non-GPU instances)
- LKE standard does not yet support adding start-up taints to the kubelet (karpenter.sh/unregistered in particular is needed) to tell Karpenter not to create an extra NodeClaim while registration of the original is still in progress.
To address this gap in the meantime, we've configured the default BATCH_IDLE_DURATION and BATCH_MAX_DURATION for Karpenter to be quite long to avoid impatiently creating new NodeClaims (see https://karpenter.sh/docs/reference/settings/ to read about these settings).
The trade-off of this approach is that while duplicate NodeClaims are less likely to be created, Pods will be stuck in Pending for an extra minute before a NodeClaim is created and the subsequent instance creation request is kicked off (BATCH_IDLE_DURATION=1m).
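If your workloads favor faster scale-up over duplicate protection, these batch windows can be tuned at install time. The value names below assume this chart mirrors the upstream Karpenter chart's batch settings; verify against charts/karpenter/values.yaml before relying on them:

```shell
# Assumed value names (mirroring the upstream Karpenter chart); check
# charts/karpenter/values.yaml to confirm they exist in this fork.
helm upgrade --install --namespace "${KARPENTER_NAMESPACE}" karpenter charts/karpenter \
  --set settings.clusterName=${CLUSTER_NAME} \
  --set apiToken=${LINODE_TOKEN} \
  --set settings.batchIdleDuration=30s \
  --set settings.batchMaxDuration=2m
```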
If a duplicate is still created (less likely, but possible when instances take exceptionally long to join the cluster), the duplicate NodeClaim is cleaned up after about a minute, once Karpenter realizes it is not needed. You will see something like this in the Karpenter controller logs:
{"level":"INFO","time":"2026-01-26T19:42:57.156Z","logger":"controller","message":"launched nodeclaim","commit":"237f3a9","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"default-v2blg"},"namespace":"","name":"default-v2blg","reconcileID":"e85e6c72-8da1-4fea-af26-1fb0e676d502","provider-id":"linode://90601036","instance-type":"g6-standard-6","zone":"","capacity-type":"on-demand","allocatable":{"cpu":"5915m","memory":"13590Mi","pods":"110"}}
{"level":"ERROR","time":"2026-01-26T19:44:22.686Z","logger":"controller","message":"node claim registration error","commit":"237f3a9","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"default-v2blg"},"namespace":"","name":"default-v2blg","reconcileID":"e3ab7681-c8fb-489a-9f6b-63036fa52090","provider-id":"linode://90601036","taint":"karpenter.sh/unregistered","error":"missing taint prevents registration-related race conditions on Karpenter-managed nodes"}
{"level":"INFO","time":"2026-01-26T19:44:22.705Z","logger":"controller","message":"registered nodeclaim","commit":"237f3a9","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","NodeClaim":{"name":"default-v2blg"},"namespace":"","name":"default-v2blg","reconcileID":"e3ab7681-c8fb-489a-9f6b-63036fa52090","provider-id":"linode://90601036","Node":{"name":"lke561146-819072-4a8e5fd50000"}}
Source Attribution
Notice: Files in this source code originated from a fork of https://github.com/aws/karpenter-provider-aws
which is under an Apache 2.0 license. Those files have been modified to reflect environmental requirements in LKE and Linode.
Community, discussion, contribution, and support
This project follows the Linode Community Code of Conduct.
Come discuss Karpenter in the #karpenter channel in the Kubernetes Slack!
Check out the Docs to learn more.