k8s

package
v0.7.1
Published: Jan 15, 2026 License: Apache-2.0 Imports: 20 Imported by: 0

Documentation

Overview

Package k8s provides Kubernetes integration for Virtual MCP Server dynamic mode.

In dynamic mode (outgoingAuth.source: discovered), the vMCP server runs a controller-runtime manager with informers to watch K8s resources dynamically. This enables backends to be added/removed from the MCPGroup without restarting.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type BackendReconciler added in v0.7.0

type BackendReconciler struct {
	client.Client

	// Namespace is the namespace to watch for resources (matches BackendWatcher)
	Namespace string

	// GroupRef is the MCPGroup name to filter workloads (format: "group-name")
	GroupRef string

	// Registry is the DynamicRegistry to update when backends change
	Registry vmcp.DynamicRegistry

	// Discoverer converts K8s resources to vmcp.Backend (reuses existing code)
	Discoverer workloads.Discoverer
}

BackendReconciler watches MCPServers and MCPRemoteProxies, converting them to vmcp.Backend and updating the DynamicRegistry when backends change.

This reconciler is specifically designed for vMCP dynamic mode where backends can be added/removed without restarting the vMCP server. It filters backends by groupRef to only process workloads belonging to the configured MCPGroup.

Namespace Scoping:

  • Each BackendWatcher (and its reconciler) is scoped to a SINGLE namespace
  • The controller-runtime manager is configured with DefaultNamespaces (single namespace; see the sketch after this list)
  • Backend IDs use name-only format (no namespace prefix) because name collisions are impossible within a single namespace
  • This matches how the discoverer stores backends (ID = resource.Name)
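
A sketch of the single-namespace configuration referenced above, assuming controller-runtime v0.16+ (where DefaultNamespaces lives in cache.Options) and with cfg, scheme, and namespace in scope; the actual NewBackendWatcher wiring may differ:

mgr, err := ctrl.NewManager(cfg, ctrl.Options{
    Scheme: scheme, // assumed to have the MCP types registered
    Cache: cache.Options{
        // Restrict every watch to one namespace, so resource names
        // are unique and backend IDs can safely be name-only.
        DefaultNamespaces: map[string]cache.Config{
            namespace: {},
        },
    },
})
if err != nil {
    return nil, err
}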

Design Philosophy:

  • Reuses existing conversion logic from workloads.Discoverer.GetWorkloadAsVMCPBackend()
  • Filters workloads by groupRef before conversion (security + performance)
  • Handles both MCPServer and MCPRemoteProxy resources
  • Updates DynamicRegistry, which triggers version-based cache invalidation (illustrated after this list)
  • Watches ExternalAuthConfig for auth changes (critical security path)
  • Does NOT watch Secrets directly (performance optimization)
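
The version-based invalidation mentioned above can be pictured generically as follows. This illustrates the pattern only; the counter and snapshot types are invented here and are not the vmcp.DynamicRegistry API:

// Writers (the reconciler) bump a counter on every registry change;
// each reader keeps a private snapshot and rebuilds it only when the
// counter has moved since the snapshot was built.
var version atomic.Uint64

func onRegistryChange() { version.Add(1) }

type snapshot struct {
    builtAt  uint64
    backends []vmcp.Backend
}

func (s *snapshot) get(rebuild func() []vmcp.Backend) []vmcp.Backend {
    if v := version.Load(); v != s.builtAt {
        s.backends, s.builtAt = rebuild(), v
    }
    return s.backends
}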

Reconciliation Flow:

  1. Fetch resource (try MCPServer, then MCPRemoteProxy)
  2. If not found (deleted) → Remove from registry
  3. If groupRef doesn't match → Remove from registry (moved to different group)
  4. Convert to vmcp.Backend using discoverer
  5. If conversion fails or returns nil (auth failed) → Remove from registry
  6. Upsert backend to registry (triggers version increment + cache invalidation)
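
A minimal sketch of this flow, for orientation only: the fetchWorkload and groupRefOf helpers, the Registry's Remove/Upsert method names, and the exact GetWorkloadAsVMCPBackend signature are assumptions for illustration:

func (r *BackendReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Steps 1-2: fetch the workload (MCPServer first, then
    // MCPRemoteProxy); nil means neither kind exists, i.e. deleted.
    workload, err := r.fetchWorkload(ctx, req.NamespacedName) // hypothetical helper
    if err != nil {
        return ctrl.Result{}, err
    }
    if workload == nil {
        r.Registry.Remove(req.Name) // assumed DynamicRegistry method
        return ctrl.Result{}, nil
    }

    // Step 3: the workload moved to a different MCPGroup.
    if groupRefOf(workload) != r.GroupRef { // hypothetical accessor
        r.Registry.Remove(req.Name)
        return ctrl.Result{}, nil
    }

    // Steps 4-5: convert to vmcp.Backend; nil signals an auth failure.
    backend, err := r.Discoverer.GetWorkloadAsVMCPBackend(ctx, workload)
    if err != nil || backend == nil {
        r.Registry.Remove(req.Name)
        return ctrl.Result{}, err
    }

    // Step 6: upsert bumps the registry version, which lazily
    // invalidates downstream caches.
    r.Registry.Upsert(backend) // assumed DynamicRegistry method
    return ctrl.Result{}, nil
}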

func (*BackendReconciler) Reconcile added in v0.7.0

func (r *BackendReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error)

Reconcile handles MCPServer and MCPRemoteProxy events, updating the DynamicRegistry.

This method is called by controller-runtime whenever:

  • A watched resource (MCPServer, MCPRemoteProxy, ExternalAuthConfig) changes
  • An event handler maps a resource change to this reconcile request

The reconciler filters by groupRef to only process backends belonging to the configured MCPGroup, ensuring security isolation between vMCP servers.

Returns:

  • ctrl.Result{}, nil: Reconciliation succeeded, no requeue needed
  • ctrl.Result{}, err: Reconciliation failed, controller-runtime will requeue

func (*BackendReconciler) SetupWithManager added in v0.7.0

func (r *BackendReconciler) SetupWithManager(mgr ctrl.Manager) error

SetupWithManager registers the BackendReconciler with the controller manager.

This method configures the reconciler to watch:

  • MCPServers (secondary watch via Watches() with groupRef filtering)
  • MCPRemoteProxies (mapped via event handler with groupRef filter)
  • MCPExternalAuthConfigs (mapped to servers/proxies that reference them)

Note: We use Watches() instead of For() for MCPServer because MCPServerReconciler is already the primary controller. Using For() in multiple controllers causes reconciliation conflicts and race conditions.

The reconciler does NOT watch Secrets directly, for performance reasons: Secrets change frequently for unrelated reasons (TLS certs, app configs, etc.), so auth changes instead propagate via ExternalAuthConfig updates or pod restarts.

Watch Design:

  1. Watches(&MCPServer{}) - Secondary watch with groupRef filter
  2. Watches(&MCPRemoteProxy{}) - Secondary watch with groupRef filter
  3. Watches(&ExternalAuthConfig{}) - Maps to servers/proxies that reference it

All watches are scoped to the reconciler's namespace (configured in BackendWatcher).
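
A sketch of this wiring, assuming controller-runtime v0.15+ builder APIs; the mcpv1alpha1 type names and the two map functions are illustrative, not the actual implementation:

func (r *BackendReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        Named("vmcp-backend-reconciler"). // a name is required when no For() is used
        Watches(&mcpv1alpha1.MCPServer{},
            handler.EnqueueRequestsFromMapFunc(r.mapIfGroupMatches)). // hypothetical groupRef filter
        Watches(&mcpv1alpha1.MCPRemoteProxy{},
            handler.EnqueueRequestsFromMapFunc(r.mapIfGroupMatches)).
        Watches(&mcpv1alpha1.MCPExternalAuthConfig{},
            handler.EnqueueRequestsFromMapFunc(r.mapAuthConfigToWorkloads)). // hypothetical mapper
        Complete(r)
}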

type BackendWatcher

type BackendWatcher struct {
	// contains filtered or unexported fields
}

BackendWatcher wraps a controller-runtime manager for vMCP dynamic mode.

In K8s mode (outgoingAuth.source: discovered), this watcher runs informers that watch for backend changes in the referenced MCPGroup. When backends are added or removed, the watcher updates the DynamicRegistry which triggers cache invalidation via version-based lazy invalidation.

Design Philosophy:

  • Wraps controller-runtime manager for lifecycle management
  • Provides WaitForCacheSync for readiness probe gating
  • Graceful shutdown on context cancellation
  • Single responsibility: watch K8s resources and update registry

Static mode (CLI) skips this entirely: no controller-runtime, no informers.

func NewBackendWatcher

func NewBackendWatcher(
	cfg *rest.Config,
	namespace string,
	groupRef string,
	registry vmcp.DynamicRegistry,
) (*BackendWatcher, error)

NewBackendWatcher creates a new backend watcher for vMCP dynamic mode.

This initializes a controller-runtime manager configured to watch resources in the specified namespace. The watcher will monitor the referenced MCPGroup and update the DynamicRegistry when backends are added or removed.

Parameters:

  • cfg: Kubernetes REST config (typically from in-cluster config)
  • namespace: Namespace to watch for resources
  • groupRef: MCPGroup reference in "namespace/name" format
  • registry: DynamicRegistry to update when backends change

Returns:

  • *BackendWatcher: Configured watcher ready to Start()
  • error: Configuration or initialization errors

Example:

restConfig, err := rest.InClusterConfig()
if err != nil {
    return err
}
registry := vmcp.NewDynamicRegistry(initialBackends)
watcher, err := k8s.NewBackendWatcher(restConfig, "default", "default/my-group", registry)
if err != nil {
    return err
}
go func() {
    if err := watcher.Start(ctx); err != nil {
        logger.Errorf("BackendWatcher stopped with error: %v", err)
    }
}()
if !watcher.WaitForCacheSync(ctx) {
    return fmt.Errorf("cache sync failed")
}

func (*BackendWatcher) Start

func (w *BackendWatcher) Start(ctx context.Context) error

Start starts the controller-runtime manager and blocks until context is cancelled.

This method runs informers that watch for backend changes in the MCPGroup. It's designed to run in a background goroutine and will gracefully shutdown when the context is cancelled.

Design Notes:

  • Blocks until context cancellation (controller-runtime pattern)
  • Graceful shutdown on context cancel
  • Safe to call only once (subsequent calls will error)

Example:

go func() {
    if err := watcher.Start(ctx); err != nil {
        logger.Errorf("BackendWatcher stopped with error: %v", err)
    }
}()

func (*BackendWatcher) WaitForCacheSync

func (w *BackendWatcher) WaitForCacheSync(ctx context.Context) bool

WaitForCacheSync waits for the watcher's informer caches to sync.

This is used by the /readyz endpoint to gate readiness until the watcher has populated its caches. This ensures the vMCP server doesn't serve requests until it has an accurate view of backends.

Parameters:

  • ctx: Context with optional timeout for the wait operation

Returns:

  • bool: true if caches synced successfully, false on timeout or error

Design Notes:

  • Non-blocking if watcher not started (returns false)
  • Respects context timeout (e.g., 5-second readiness probe timeout)
  • Safe to call multiple times (idempotent)

Example (readiness probe):

func (s *Server) handleReadiness(w http.ResponseWriter, r *http.Request) {
    if s.backendWatcher != nil {
        ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
        defer cancel()
        if !s.backendWatcher.WaitForCacheSync(ctx) {
            w.WriteHeader(http.StatusServiceUnavailable)
            return
        }
    }
    w.WriteHeader(http.StatusOK)
}
