Documentation ¶
Index ¶
- Constants
- Variables
- func NewDynamicRateLimiter[T comparable](opts EvictionQueueOptions) workqueue.TypedRateLimiter[T]
- func NewEvictionWorker(opts EvictionWorkerOptions) util.AsyncWorker
- func NewGracefulEvictionRateLimiter[T comparable](evictionOpts EvictionQueueOptions, rateLimiterOpts ratelimiterflag.Options) workqueue.TypedRateLimiter[T]
- type Controller
- func (c *Controller) ExecutionSpaceExistForCluster(ctx context.Context, clusterName string) (bool, error)
- func (c *Controller) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error)
- func (c *Controller) SetupWithManager(mgr controllerruntime.Manager) error
- func (c *Controller) Start(ctx context.Context) error
- type DynamicRateLimiter
- type EvictionQueueOptions
- type EvictionWorkerOptions
- type NoExecuteTaintManager
Constants ¶
const (
	// ControllerName is the controller name that will be used when reporting events and metrics.
	ControllerName = "cluster-controller"
	// MonitorRetrySleepTime is the amount of time the cluster controller should
	// sleep between retries of cluster health updates.
	MonitorRetrySleepTime = 20 * time.Millisecond
	// HealthUpdateRetry controls the number of retries of writing a cluster health update.
	HealthUpdateRetry = 5
)
const TaintManagerName = "taint-manager"
TaintManagerName is the controller name that will be used when reporting events and metrics.
Variables ¶
var (
	// UnreachableTaintTemplateForSched is the taint for when a cluster becomes unreachable.
	// Used for taint-based schedule.
	UnreachableTaintTemplateForSched = &corev1.Taint{
		Key:    clusterv1alpha1.TaintClusterUnreachable,
		Effect: corev1.TaintEffectNoSchedule,
	}
	// NotReadyTaintTemplateForSched is the taint for when a cluster is not ready for executing resources.
	// Used for taint-based schedule.
	NotReadyTaintTemplateForSched = &corev1.Taint{
		Key:    clusterv1alpha1.TaintClusterNotReady,
		Effect: corev1.TaintEffectNoSchedule,
	}
)
Functions ¶
func NewDynamicRateLimiter ¶ added in v1.16.0
func NewDynamicRateLimiter[T comparable](opts EvictionQueueOptions) workqueue.TypedRateLimiter[T]
NewDynamicRateLimiter creates a new DynamicRateLimiter with the given options.
func NewEvictionWorker ¶ added in v1.16.0
func NewEvictionWorker(opts EvictionWorkerOptions) util.AsyncWorker
NewEvictionWorker creates a new EvictionWorker with dynamic rate limiting.
func NewGracefulEvictionRateLimiter ¶ added in v1.16.0
func NewGracefulEvictionRateLimiter[T comparable](evictionOpts EvictionQueueOptions, rateLimiterOpts ratelimiterflag.Options) workqueue.TypedRateLimiter[T]
NewGracefulEvictionRateLimiter creates a combined rate limiter for eviction. It uses the maximum delay from both dynamic and default rate limiters to ensure both cluster health and retry backoff are considered.
Types ¶
type Controller ¶
type Controller struct {
client.Client // used to operate Cluster resources.
EventRecorder record.EventRecorder
// ClusterMonitorPeriod represents cluster-controller monitoring period, i.e. how often does
// cluster-controller check cluster health signal posted from cluster-status-controller.
// This value should be lower than ClusterMonitorGracePeriod.
ClusterMonitorPeriod time.Duration
// ClusterMonitorGracePeriod represents the grace period after last cluster health probe time.
// If it doesn't receive update for this amount of time, it will start posting
// "ClusterReady==ConditionUnknown".
ClusterMonitorGracePeriod time.Duration
	// When a cluster is just created, e.g. agent bootstrap or cluster join, we give a longer grace period.
ClusterStartupGracePeriod time.Duration
// CleanupCheckInterval defines the fixed interval for polling resource deletion status during cluster removal.
	// The fixed interval bypasses the exponential backoff mechanism to keep the check frequency
	// balanced: neither so frequent that it risks system overload, nor so sparse that it causes delays.
CleanupCheckInterval time.Duration
RateLimiterOptions ratelimiterflag.Options
// contains filtered or unexported fields
}
Controller is responsible for syncing Cluster objects.
func (*Controller) ExecutionSpaceExistForCluster ¶ added in v0.10.0
func (c *Controller) ExecutionSpaceExistForCluster(ctx context.Context, clusterName string) (bool, error)
ExecutionSpaceExistForCluster determines whether the execution space exists for the given cluster.
func (*Controller) Reconcile ¶
func (c *Controller) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error)
Reconcile performs a full reconciliation for the object referred to by the Request. The Controller will requeue the Request to be processed again if an error is non-nil or Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (*Controller) SetupWithManager ¶
func (c *Controller) SetupWithManager(mgr controllerruntime.Manager) error
SetupWithManager creates a controller and registers it to the controller manager.
type DynamicRateLimiter ¶ added in v1.16.0
type DynamicRateLimiter[T comparable] struct {
	// contains filtered or unexported fields
}
DynamicRateLimiter adjusts its rate based on the overall health of clusters. It implements the workqueue.RateLimiter interface with dynamic behavior.
func (*DynamicRateLimiter[T]) Forget ¶ added in v1.16.0
func (d *DynamicRateLimiter[T]) Forget(_ T)
Forget is a no-op as this rate limiter doesn't track individual items.
func (*DynamicRateLimiter[T]) NumRequeues ¶ added in v1.16.0
func (d *DynamicRateLimiter[T]) NumRequeues(_ T) int
NumRequeues always returns 0 as this rate limiter doesn't track retries.
func (*DynamicRateLimiter[T]) When ¶ added in v1.16.0
func (d *DynamicRateLimiter[T]) When(_ T) time.Duration
When determines how long to wait before processing an item. Returns a longer delay when the system is unhealthy.
type EvictionQueueOptions ¶ added in v1.16.0
type EvictionQueueOptions struct {
// ResourceEvictionRate is the number of resources to be evicted per second in a cluster failover scenario.
ResourceEvictionRate float32
}
EvictionQueueOptions holds the options that control the behavior of the graceful eviction queue based on the overall health of the clusters.
type EvictionWorkerOptions ¶ added in v1.16.0
type EvictionWorkerOptions struct {
// Name is the queue's name used for metrics and logging
Name string
// KeyFunc generates keys from objects for queue operations
KeyFunc util.KeyFunc
// ReconcileFunc processes keys from the queue
ReconcileFunc util.ReconcileFunc
// ResourceKindFunc returns resource metadata for metrics collection
ResourceKindFunc func(key any) (clusterName, resourceKind string)
// EvictionQueueOptions configures dynamic rate limiting behavior
EvictionQueueOptions EvictionQueueOptions
// RateLimiterOptions configures general rate limiter behavior
RateLimiterOptions ratelimiterflag.Options
}
EvictionWorkerOptions configures a new EvictionWorker instance.
type NoExecuteTaintManager ¶ added in v1.3.0
type NoExecuteTaintManager struct {
client.Client // used to operate Cluster resources.
EventRecorder record.EventRecorder
ClusterTaintEvictionRetryFrequency time.Duration
ConcurrentReconciles int
RateLimiterOptions ratelimiterflag.Options
EnableNoExecuteTaintEviction bool
NoExecuteTaintEvictionPurgeMode string
// EvictionQueueOptions contains options for dynamic rate limiting
EvictionQueueOptions EvictionQueueOptions
// contains filtered or unexported fields
}
NoExecuteTaintManager listens to Taint/Toleration changes and is responsible for removing objects from Clusters tainted with NoExecute Taints.
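The core decision — does a cluster carry a NoExecute taint that the workload does not tolerate? — can be sketched with simplified stand-in types. The real manager works with corev1.Taint/Toleration and richer matching rules (operators, toleration seconds), none of which are modeled here:

```go
package main

import "fmt"

// taint and toleration are minimal stand-ins for the corev1 shapes.
type taint struct {
	Key    string
	Effect string // e.g. "NoExecute"
}

type toleration struct {
	Key    string
	Effect string // empty means "matches any effect" in this sketch
}

// needsEviction reports whether any NoExecute taint is left untolerated,
// the condition under which the taint manager would queue the workload's
// objects for removal from the cluster. Matching rules are simplified.
func needsEviction(taints []taint, tolerations []toleration) bool {
	for _, t := range taints {
		if t.Effect != "NoExecute" {
			continue
		}
		tolerated := false
		for _, tol := range tolerations {
			if tol.Key == t.Key && (tol.Effect == "" || tol.Effect == t.Effect) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return true
		}
	}
	return false
}

func main() {
	taints := []taint{{Key: "example.io/not-ready", Effect: "NoExecute"}}
	fmt.Println(needsEviction(taints, nil))                                         // true
	fmt.Println(needsEviction(taints, []toleration{{Key: "example.io/not-ready"}})) // false
}
```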
func (*NoExecuteTaintManager) Reconcile ¶ added in v1.3.0
func (tc *NoExecuteTaintManager) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error)
Reconcile performs a full reconciliation for the object referred to by the Request. The Controller will requeue the Request to be processed again if an error is non-nil or Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (*NoExecuteTaintManager) SetupWithManager ¶ added in v1.3.0
func (tc *NoExecuteTaintManager) SetupWithManager(mgr controllerruntime.Manager) error
SetupWithManager creates a controller and registers it to the controller manager.