package optimized

v0.3.0 · Published: Sep 4, 2025 · License: Apache-2.0 · Imports: 22 · Imported by: 0

Note: this package version is not the latest version of its module.

README

Optimized Kubernetes Controllers

This package provides optimized versions of the Nephoran Intent Operator controllers, designed to deliver significant performance improvements through intelligent backoff strategies, batched operations, and enhanced monitoring.

Performance Improvements

Target Metrics
  • 50% reduction in controller CPU usage
  • 40% reduction in API server load
  • Sub-2-second P95 latency for intent processing
  • Support for 200+ concurrent intents
Key Optimizations
1. Intelligent Exponential Backoff (backoff_manager.go)
  • Error Classification: Automatically classifies errors into categories (transient, permanent, resource, throttling, validation)
  • Strategy-Based Backoff: Different backoff strategies (exponential, linear, constant) based on error type
  • Jitter Integration: Prevents thundering herd problems with configurable jitter
  • Maximum Limits: Configurable maximum backoff delays for different scenarios
  • Automatic Cleanup: Removes stale backoff entries to prevent memory leaks
// Example usage
backoffManager := NewBackoffManager()
errorType := backoffManager.ClassifyError(err)
delay := backoffManager.GetNextDelay(resourceKey, errorType, err)
return ctrl.Result{RequeueAfter: delay}, err
2. Batched Status Updates (status_batcher.go)
  • Queue-Based Processing: Groups status updates into batches to reduce API calls
  • Priority System: Critical updates bypass batching for immediate processing
  • Automatic Batching: Triggers based on batch size, timeout, or priority
  • Retry Logic: Failed updates are retried with exponential backoff
  • Resource-Specific Updates: Optimized update functions for different resource types
// Example usage
statusBatcher := NewStatusBatcher(client, DefaultBatchConfig)
statusBatcher.QueueNetworkIntentUpdate(namespacedName, conditions, phase, HighPriority)
3. Performance Metrics (performance_metrics.go)
  • Comprehensive Monitoring: Tracks reconcile duration, API call latency, backoff delays
  • Prometheus Integration: All metrics exposed via Prometheus for monitoring
  • Resource Tracking: Monitors active reconcilers, memory usage, goroutine counts
  • Error Categorization: Detailed error tracking by type and category
4. Optimized Controllers
  • Object Pooling: Reuses reconcile context objects to reduce allocations
  • Caching: In-memory caching for frequently accessed resources
  • Batch Operations: Groups API operations to reduce server round trips
  • Adaptive Intervals: Intelligent requeue intervals based on resource state

Components

BackoffManager

Manages exponential backoff with error classification:

type BackoffConfig struct {
    Strategy       BackoffStrategy
    BaseDelay      time.Duration
    MaxDelay       time.Duration
    Multiplier     float64
    JitterEnabled  bool
    MaxRetries     int
}

Error Types:

  • TransientError: Network timeouts, temporary unavailability
  • PermanentError: Invalid configuration, auth failures
  • ResourceError: Resource conflicts, quota exceeded
  • ThrottlingError: API rate limiting
  • ValidationError: Schema validation failures
StatusBatcher

Batches status updates to reduce API server load:

type BatchConfig struct {
    MaxBatchSize     int           // Maximum updates per batch
    BatchTimeout     time.Duration // Maximum wait time
    FlushInterval    time.Duration // Periodic flush interval
    MaxRetries       int           // Retry attempts
    EnablePriority   bool          // Priority-based processing
}

Priority Levels:

  • CriticalPriority: Immediate processing, bypasses batching
  • HighPriority: Processed first in batches
  • MediumPriority: Standard priority
  • LowPriority: Processed last
ControllerMetrics

Comprehensive performance monitoring:

Key Metrics:

  • controller_reconcile_duration_seconds: Reconcile loop timing
  • controller_backoff_delay_seconds: Backoff delay distribution
  • controller_status_batch_size: Status update batch sizes
  • controller_api_call_duration_seconds: API call latency
  • controller_active_reconcilers: Concurrent reconciler count

Usage

NetworkIntent Controller
reconciler := NewOptimizedNetworkIntentReconciler(
    client, scheme, recorder, config, deps)

// Setup with manager
reconciler.SetupWithManager(mgr)

// Graceful shutdown
defer reconciler.Shutdown()
E2NodeSet Controller
reconciler := NewOptimizedE2NodeSetReconciler(
    client, scheme, recorder)

// Setup with manager  
reconciler.SetupWithManager(mgr)

// Graceful shutdown
defer reconciler.Shutdown()

Configuration

BackoffManager Configuration
// Custom backoff configuration
config := BackoffConfig{
    Strategy:      ExponentialBackoff,
    BaseDelay:     1 * time.Second,
    MaxDelay:      5 * time.Minute,
    Multiplier:    2.0,
    JitterEnabled: true,
    MaxRetries:    5,
}

backoffManager := NewBackoffManager()
backoffManager.SetConfig(TransientError, config)
StatusBatcher Configuration
// Custom batch configuration
config := BatchConfig{
    MaxBatchSize:   10,
    BatchTimeout:   2 * time.Second,
    FlushInterval:  5 * time.Second,
    MaxRetries:     3,
    EnablePriority: true,
    MaxQueueSize:   1000,
}

statusBatcher := NewStatusBatcher(client, config)

Benchmarks

Run benchmarks to validate performance improvements:

# Run all benchmarks
go test -bench=. -benchmem ./pkg/controllers/optimized/

# Specific benchmarks
go test -bench=BenchmarkOptimizedNetworkIntentController -benchmem
go test -bench=BenchmarkBackoffManager -benchmem
go test -bench=BenchmarkStatusBatcher -benchmem
Expected Results
BenchmarkOptimizedNetworkIntentController/SingleReconcile-8         	    1000	   1234567 ns/op	   12345 B/op	     123 allocs/op
BenchmarkOptimizedNetworkIntentController/ConcurrentReconcile-5-8   	     500	   2345678 ns/op	   23456 B/op	     234 allocs/op
BenchmarkBackoffManager/GetNextDelay-8                             	 1000000	      1234 ns/op	     123 B/op	       1 allocs/op
BenchmarkStatusBatcher/QueueUpdate-8                               	  500000	      2345 ns/op	     234 B/op	       2 allocs/op

Monitoring

Grafana Dashboard Queries

Reconcile Performance:

histogram_quantile(0.95, 
  rate(controller_reconcile_duration_seconds_bucket[5m])
) by (controller)

Error Rates:

rate(controller_reconcile_errors_total[5m]) by (controller, error_type)

Backoff Frequency:

rate(controller_reconcile_requeue_total[5m]) by (controller, backoff_strategy)

API Call Efficiency:

rate(controller_api_call_total[5m]) by (controller, operation, result)
Alerting Rules
groups:
- name: controller_performance
  rules:
  - alert: HighReconcileLatency
    expr: |
      histogram_quantile(0.95, 
        rate(controller_reconcile_duration_seconds_bucket[5m])
      ) > 2
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Controller reconcile latency is high"
      
  - alert: HighErrorRate  
    expr: |
      rate(controller_reconcile_errors_total[5m]) / 
      rate(controller_reconcile_total[5m]) > 0.1
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: "Controller error rate is high"

Architecture

graph TB
    A[Controller Request] --> B[BackoffManager]
    B --> C{Error Classification}
    C -->|Transient| D[Exponential Backoff]
    C -->|Resource| E[Linear Backoff]
    C -->|Permanent| F[Constant Backoff]
    
    A --> G[Reconcile Logic]
    G --> H[StatusBatcher]
    H --> I{Priority Check}
    I -->|Critical| J[Immediate Update]
    I -->|Others| K[Batch Queue]
    K --> L[Periodic Flush]
    
    G --> M[ControllerMetrics]
    M --> N[Prometheus]
    
    O[Object Pool] --> G
    P[Cache Layer] --> G

Migration Guide

From Original Controllers
  1. Replace Controller Creation:
// Old
reconciler := &controllers.NetworkIntentReconciler{
    Client:   mgr.GetClient(),
    Scheme:   mgr.GetScheme(),
    Recorder: mgr.GetEventRecorderFor("networkintent-controller"),
}

// New
reconciler := NewOptimizedNetworkIntentReconciler(
    mgr.GetClient(),
    mgr.GetScheme(), 
    mgr.GetEventRecorderFor("networkintent-controller"),
    config,
    deps,
)
  2. Update Setup:
// Add graceful shutdown
defer reconciler.Shutdown()

// Setup remains the same
err = reconciler.SetupWithManager(mgr)
  3. Update Monitoring:
  • Import optimized metrics
  • Update Grafana dashboards
  • Configure new alerting rules

Best Practices

Resource Keys

Use consistent resource keys for backoff management:

resourceKey := fmt.Sprintf("%s/%s", req.Namespace, req.Name)
Error Handling

Let the BackoffManager classify errors automatically:

errorType := r.backoffManager.ClassifyError(err)
delay := r.backoffManager.GetNextDelay(resourceKey, errorType, err)
return ctrl.Result{RequeueAfter: delay}, err
Status Updates

Use appropriate priorities for status updates:

// Critical updates (errors, completion)
r.statusBatcher.QueueUpdate(..., CriticalPriority)

// Progress updates
r.statusBatcher.QueueUpdate(..., MediumPriority)

// Informational updates
r.statusBatcher.QueueUpdate(..., LowPriority)
Metrics

Record relevant metrics for monitoring:

timer := r.metrics.NewReconcileTimer(controller, namespace, name, phase)
defer timer.Finish()

r.metrics.RecordReconcileResult(controller, "success")
r.metrics.RecordBackoffDelay(controller, errorType, strategy, delay)

Testing

Unit Tests

Test individual components:

go test ./pkg/controllers/optimized/
Integration Tests

Test with real Kubernetes API:

go test -tags=integration ./pkg/controllers/optimized/
Performance Tests

Run benchmarks to validate improvements:

go test -bench=. -benchmem -cpuprofile=cpu.prof -memprofile=mem.prof
Load Testing

Test with high concurrency:

go test -bench=ConcurrentReconcile -benchtime=30s

Troubleshooting

High Memory Usage
  • Check object pool efficiency
  • Verify cache cleanup is working
  • Monitor goroutine counts
High Latency
  • Check backoff configuration
  • Verify batch settings
  • Monitor API call distribution
High Error Rates
  • Review error classification
  • Check backoff strategies
  • Validate resource availability

Future Enhancements

  • Leader Election Optimization: Reduce leader election overhead
  • Watch Caching: Intelligent watch event filtering
  • Predictive Scaling: ML-based reconcile frequency prediction
  • Cross-Controller Batching: Shared batching across controllers
  • Dynamic Configuration: Runtime configuration updates

Documentation

Index

Constants

const (
	OptimizedE2NodeSetController = "optimized-e2nodeset"
	E2NodeSetFinalizer           = "nephoran.com/e2nodeset-finalizer"
	E2NodeSetLabelKey            = "nephoran.com/e2-nodeset"
	E2NodeAppLabelKey            = "app"
	E2NodeAppLabelValue          = "e2-node-simulator"
	E2NodeIDLabelKey             = "nephoran.com/node-id"
	E2NodeIndexLabelKey          = "nephoran.com/node-index"
)
const (
	OptimizedNetworkIntentController = "optimized-networkintent"
	NetworkIntentFinalizer           = "networkintent.nephoran.com/finalizer"
)

Variables

var DefaultAPIBatchConfig = APIBatchConfig{
	MaxBatchSize:    5,
	BatchTimeout:    500 * time.Millisecond,
	FlushInterval:   1 * time.Second,
	MaxQueueSize:    500,
	EnablePriority:  true,
	ParallelBatches: 3,
}
var DefaultBackoffConfigs = map[ErrorType]BackoffConfig{
	TransientError: {
		Strategy:      ExponentialBackoff,
		BaseDelay:     2 * time.Second,
		MaxDelay:      5 * time.Minute,
		Multiplier:    2.0,
		JitterEnabled: true,
		MaxRetries:    5,
	},
	PermanentError: {
		Strategy:      ConstantBackoff,
		BaseDelay:     30 * time.Second,
		MaxDelay:      30 * time.Second,
		Multiplier:    1.0,
		JitterEnabled: false,
		MaxRetries:    2,
	},
	ResourceError: {
		Strategy:      LinearBackoff,
		BaseDelay:     5 * time.Second,
		MaxDelay:      2 * time.Minute,
		Multiplier:    1.5,
		JitterEnabled: true,
		MaxRetries:    4,
	},
	ThrottlingError: {
		Strategy:      ExponentialBackoff,
		BaseDelay:     10 * time.Second,
		MaxDelay:      10 * time.Minute,
		Multiplier:    2.5,
		JitterEnabled: true,
		MaxRetries:    3,
	},
	ValidationError: {
		Strategy:      ConstantBackoff,
		BaseDelay:     15 * time.Second,
		MaxDelay:      15 * time.Second,
		Multiplier:    1.0,
		JitterEnabled: false,
		MaxRetries:    1,
	},
}
var DefaultBatchConfig = BatchConfig{
	MaxBatchSize:   10,
	BatchTimeout:   2 * time.Second,
	FlushInterval:  5 * time.Second,
	MaxRetries:     3,
	RetryDelay:     1 * time.Second,
	EnablePriority: true,
	MaxQueueSize:   1000,
}

Functions

This section is empty.

Types

type APIBatchConfig

type APIBatchConfig struct {
	MaxBatchSize    int           // Maximum number of calls per batch
	BatchTimeout    time.Duration // Maximum time to wait for a batch
	FlushInterval   time.Duration // Periodic flush interval
	MaxQueueSize    int           // Maximum queue size
	EnablePriority  bool          // Whether to prioritize calls
	ParallelBatches int           // Number of parallel batch processors
}

type APIBatcherStats

type APIBatcherStats struct {
	QueueSize        int   `json:"queue_size"`
	BatchesProcessed int64 `json:"batches_processed"`
	CallsProcessed   int64 `json:"calls_processed"`
	CallsDropped     int64 `json:"calls_dropped"`
	CallsFailed      int64 `json:"calls_failed"`
}

type APICall

type APICall struct {
	Type          APICallType
	Object        client.Object
	Key           types.NamespacedName
	Options       []client.GetOption
	ListOptions   []client.ListOption
	PatchOptions  []client.PatchOption
	CreateOptions []client.CreateOption
	UpdateOptions []client.UpdateOption
	DeleteOptions []client.DeleteOption
	Patch         client.Patch
	ResultChannel chan APICallResult
	Timestamp     time.Time
	Priority      UpdatePriority
}

type APICallBatcher

type APICallBatcher struct {
	// contains filtered or unexported fields
}

func NewAPICallBatcher

func NewAPICallBatcher(client client.Client, config APIBatchConfig) *APICallBatcher

func (*APICallBatcher) Create

func (ab *APICallBatcher) Create(ctx context.Context, obj client.Object, opts ...client.CreateOption) error

func (*APICallBatcher) Delete

func (ab *APICallBatcher) Delete(ctx context.Context, obj client.Object, opts ...client.DeleteOption) error

func (*APICallBatcher) Flush

func (ab *APICallBatcher) Flush() error

func (*APICallBatcher) Get

func (*APICallBatcher) GetStats

func (ab *APICallBatcher) GetStats() APIBatcherStats

func (*APICallBatcher) Patch

func (ab *APICallBatcher) Patch(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error

func (*APICallBatcher) Stop

func (ab *APICallBatcher) Stop() error

func (*APICallBatcher) Update

func (ab *APICallBatcher) Update(ctx context.Context, obj client.Object, opts ...client.UpdateOption) error

type APICallResult

type APICallResult struct {
	Object client.Object
	Error  error
}

type APICallTimer

type APICallTimer struct {
	// contains filtered or unexported fields
}

func (*APICallTimer) FinishWithResult

func (act *APICallTimer) FinishWithResult(success bool, errorType string)

type APICallType

type APICallType string
const (
	GetCall    APICallType = "get"
	ListCall   APICallType = "list"
	CreateCall APICallType = "create"
	UpdateCall APICallType = "update"
	DeleteCall APICallType = "delete"
	PatchCall  APICallType = "patch"
)

type BackoffConfig

type BackoffConfig struct {
	Strategy      BackoffStrategy
	BaseDelay     time.Duration
	MaxDelay      time.Duration
	Multiplier    float64
	JitterEnabled bool
	MaxRetries    int
}

type BackoffEntry

type BackoffEntry struct {
	RetryCount       int
	LastAttempt      time.Time
	CurrentDelay     time.Duration
	ErrorType        ErrorType
	ConsecutiveFails int
}

type BackoffManager

type BackoffManager struct {
	// contains filtered or unexported fields
}

func NewBackoffManager

func NewBackoffManager() *BackoffManager

func (*BackoffManager) ClassifyError

func (bm *BackoffManager) ClassifyError(err error) ErrorType

func (*BackoffManager) CleanupStaleEntries

func (bm *BackoffManager) CleanupStaleEntries(ctx context.Context, maxAge time.Duration)

func (*BackoffManager) GetBackoffStats

func (bm *BackoffManager) GetBackoffStats() BackoffStats

func (*BackoffManager) GetNextDelay

func (bm *BackoffManager) GetNextDelay(resourceKey string, errorType ErrorType, err error) time.Duration

func (*BackoffManager) GetRetryCount

func (bm *BackoffManager) GetRetryCount(resourceKey string) int

func (*BackoffManager) RecordSuccess

func (bm *BackoffManager) RecordSuccess(resourceKey string)

func (*BackoffManager) SetConfig

func (bm *BackoffManager) SetConfig(errorType ErrorType, config BackoffConfig)

func (*BackoffManager) ShouldRetry

func (bm *BackoffManager) ShouldRetry(resourceKey string, errorType ErrorType) bool

type BackoffStats

type BackoffStats struct {
	TotalEntries int               `json:"total_entries"`
	ErrorTypes   map[ErrorType]int `json:"error_types"`
	RetryRanges  map[string]int    `json:"retry_ranges"`
}

type BackoffStrategy

type BackoffStrategy string
const (
	ExponentialBackoff BackoffStrategy = "exponential"
	LinearBackoff      BackoffStrategy = "linear"
	ConstantBackoff    BackoffStrategy = "constant"
)

func (BackoffStrategy) String

func (b BackoffStrategy) String() string

type BatchConfig

type BatchConfig struct {
	MaxBatchSize   int           // Maximum number of updates per batch
	BatchTimeout   time.Duration // Maximum time to wait for a batch
	FlushInterval  time.Duration // Periodic flush interval
	MaxRetries     int           // Maximum retries per update
	RetryDelay     time.Duration // Base delay between retries
	EnablePriority bool          // Whether to prioritize updates
	MaxQueueSize   int           // Maximum queue size before dropping updates
}

type ControllerMetrics

type ControllerMetrics struct {
	ReconcileDuration    *prometheus.HistogramVec
	ReconcileTotal       *prometheus.CounterVec
	ReconcileErrors      *prometheus.CounterVec
	ReconcileRequeue     *prometheus.CounterVec
	BackoffDelay         *prometheus.HistogramVec
	BackoffRetries       *prometheus.HistogramVec
	BackoffResets        *prometheus.CounterVec
	StatusBatchSize      *prometheus.HistogramVec
	StatusBatchDuration  *prometheus.HistogramVec
	StatusUpdatesQueued  *prometheus.CounterVec
	StatusUpdatesDropped *prometheus.CounterVec
	StatusUpdatesFailed  *prometheus.CounterVec
	StatusQueueSize      *prometheus.GaugeVec
	APICallDuration      *prometheus.HistogramVec
	APICallTotal         *prometheus.CounterVec
	APICallErrors        *prometheus.CounterVec
	ActiveReconcilers    *prometheus.GaugeVec
	MemoryUsage          *prometheus.GaugeVec
	GoroutineCount       *prometheus.GaugeVec
}

func NewControllerMetrics

func NewControllerMetrics() *ControllerMetrics

func (*ControllerMetrics) NewAPICallTimer

func (cm *ControllerMetrics) NewAPICallTimer(controller, operation, resource string) *APICallTimer

func (*ControllerMetrics) NewReconcileTimer

func (cm *ControllerMetrics) NewReconcileTimer(controller, namespace, name, phase string) *ReconcileTimer

func (*ControllerMetrics) RecordBackoffDelay

func (cm *ControllerMetrics) RecordBackoffDelay(controller string, errorType ErrorType, strategy BackoffStrategy, delay time.Duration)

func (*ControllerMetrics) RecordBackoffReset

func (cm *ControllerMetrics) RecordBackoffReset(controller, resourceType string)

func (*ControllerMetrics) RecordBackoffRetries

func (cm *ControllerMetrics) RecordBackoffRetries(controller string, errorType ErrorType, retries int, success bool)

func (*ControllerMetrics) RecordReconcileError

func (cm *ControllerMetrics) RecordReconcileError(controller string, errorType ErrorType, category string)

func (*ControllerMetrics) RecordReconcileResult

func (cm *ControllerMetrics) RecordReconcileResult(controller, result string)

func (*ControllerMetrics) RecordRequeue

func (cm *ControllerMetrics) RecordRequeue(controller, requeueType string, strategy BackoffStrategy)

func (*ControllerMetrics) RecordStatusBatch

func (cm *ControllerMetrics) RecordStatusBatch(controller string, size int, duration time.Duration, priority string)

func (*ControllerMetrics) RecordStatusUpdate

func (cm *ControllerMetrics) RecordStatusUpdate(controller, priority, resourceType, outcome string)

func (*ControllerMetrics) UpdateActiveReconcilers

func (cm *ControllerMetrics) UpdateActiveReconcilers(controller string, count int)

func (*ControllerMetrics) UpdateGoroutineCount

func (cm *ControllerMetrics) UpdateGoroutineCount(controller string, count int)

func (*ControllerMetrics) UpdateMemoryUsage

func (cm *ControllerMetrics) UpdateMemoryUsage(controller, memType string, bytes int64)

func (*ControllerMetrics) UpdateStatusQueueSize

func (cm *ControllerMetrics) UpdateStatusQueueSize(controller string, size int)

type E2NodeSetReconcileContext

type E2NodeSetReconcileContext struct {
	StartTime          time.Time
	E2NodeSet          *nephoranv1.E2NodeSet
	ExistingConfigMaps []*corev1.ConfigMap
	ProcessingMetrics  map[string]float64
	NodesCreated       int
	NodesUpdated       int
	NodesDeleted       int
	ErrorCount         int
}

func (*E2NodeSetReconcileContext) Reset

func (ctx *E2NodeSetReconcileContext) Reset()

type ErrorType

type ErrorType string
const (
	TransientError  ErrorType = "transient"  // Network timeouts, temporary unavailability
	PermanentError  ErrorType = "permanent"  // Invalid configuration, auth failures
	ResourceError   ErrorType = "resource"   // Resource conflicts, quota exceeded
	ThrottlingError ErrorType = "throttling" // API rate limiting
	ValidationError ErrorType = "validation" // Schema validation failures
)

func (ErrorType) String

func (e ErrorType) String() string

type OptimizedE2NodeSetReconciler

type OptimizedE2NodeSetReconciler struct {
	client.Client
	Scheme   *runtime.Scheme
	Recorder record.EventRecorder
	// contains filtered or unexported fields
}

func NewOptimizedE2NodeSetReconciler

func NewOptimizedE2NodeSetReconciler(
	client client.Client,
	scheme *runtime.Scheme,
	recorder record.EventRecorder,
) *OptimizedE2NodeSetReconciler

func (*OptimizedE2NodeSetReconciler) Reconcile

func (*OptimizedE2NodeSetReconciler) SetupWithManager

func (r *OptimizedE2NodeSetReconciler) SetupWithManager(mgr ctrl.Manager) error

func (*OptimizedE2NodeSetReconciler) Shutdown

func (r *OptimizedE2NodeSetReconciler) Shutdown() error

type OptimizedNetworkIntentReconciler

type OptimizedNetworkIntentReconciler struct {
	client.Client
	Scheme   *runtime.Scheme
	Recorder record.EventRecorder
	// contains filtered or unexported fields
}

func NewOptimizedNetworkIntentReconciler

func NewOptimizedNetworkIntentReconciler(
	client client.Client,
	scheme *runtime.Scheme,
	recorder record.EventRecorder,
	config controllers.Config,
	deps controllers.Dependencies,
) *OptimizedNetworkIntentReconciler

func (*OptimizedNetworkIntentReconciler) Reconcile

func (*OptimizedNetworkIntentReconciler) SetupWithManager

func (r *OptimizedNetworkIntentReconciler) SetupWithManager(mgr ctrl.Manager) error

func (*OptimizedNetworkIntentReconciler) Shutdown

type ReconcileContext

type ReconcileContext struct {
	StartTime         time.Time
	NetworkIntent     *nephoranv1.NetworkIntent
	ProcessingPhase   string
	ProcessingMetrics map[string]float64
	ResourcePlan      map[string]interface{}
	Manifests         map[string]string
	ErrorCount        int
	LastError         error
}

func (*ReconcileContext) Reset

func (rc *ReconcileContext) Reset()

type ReconcileTimer

type ReconcileTimer struct {
	// contains filtered or unexported fields
}

func (*ReconcileTimer) Finish

func (rt *ReconcileTimer) Finish()

type StatusBatcher

type StatusBatcher struct {
	// contains filtered or unexported fields
}

func NewStatusBatcher

func NewStatusBatcher(client client.Client, config BatchConfig) *StatusBatcher

func (*StatusBatcher) Flush

func (sb *StatusBatcher) Flush() error

func (*StatusBatcher) GetStats

func (sb *StatusBatcher) GetStats() StatusBatcherStats

func (*StatusBatcher) QueueE2NodeSetUpdate

func (sb *StatusBatcher) QueueE2NodeSetUpdate(namespacedName types.NamespacedName, readyReplicas, totalReplicas int32, conditions []metav1.Condition, priority UpdatePriority) error

func (*StatusBatcher) QueueNetworkIntentUpdate

func (sb *StatusBatcher) QueueNetworkIntentUpdate(namespacedName types.NamespacedName, conditionUpdates []metav1.Condition, phase string, priority UpdatePriority) error

func (*StatusBatcher) QueueUpdate

func (sb *StatusBatcher) QueueUpdate(namespacedName types.NamespacedName, updateFunc func(obj client.Object) error, priority UpdatePriority) error

func (*StatusBatcher) Stop

func (sb *StatusBatcher) Stop() error

type StatusBatcherStats

type StatusBatcherStats struct {
	QueueSize        int     `json:"queue_size"`
	BatchesProcessed int64   `json:"batches_processed"`
	UpdatesProcessed int64   `json:"updates_processed"`
	UpdatesDropped   int64   `json:"updates_dropped"`
	UpdatesFailed    int64   `json:"updates_failed"`
	AverageBatchSize float64 `json:"average_batch_size"`
}

type StatusUpdate

type StatusUpdate struct {
	NamespacedName types.NamespacedName
	UpdateFunc     func(obj client.Object) error
	Priority       UpdatePriority
	Timestamp      time.Time
	RetryCount     int
}

type UpdatePriority

type UpdatePriority int
const (
	LowPriority UpdatePriority = iota
	MediumPriority
	HighPriority
	CriticalPriority
)

func (UpdatePriority) String

func (p UpdatePriority) String() string
