Documentation ¶
Overview ¶
Package k8s provides Kubernetes integration for Virtual MCP Server dynamic mode.
In dynamic mode (outgoingAuth.source: discovered), the vMCP server runs a controller-runtime manager with informers to watch K8s resources dynamically. This enables backends to be added to or removed from the MCPGroup without restarting the vMCP server.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BackendReconciler ¶ added in v0.7.0
type BackendReconciler struct {
    client.Client

    // Namespace is the namespace to watch for resources (matches BackendWatcher)
    Namespace string
    // GroupRef is the MCPGroup name to filter workloads (format: "group-name")
    GroupRef string
    // Registry is the DynamicRegistry to update when backends change
    Registry vmcp.DynamicRegistry
    // Discoverer converts K8s resources to vmcp.Backend (reuses existing code)
    Discoverer workloads.Discoverer
}
BackendReconciler watches MCPServers and MCPRemoteProxies, converting them to vmcp.Backend and updating the DynamicRegistry when backends change.
This reconciler is specifically designed for vMCP dynamic mode where backends can be added/removed without restarting the vMCP server. It filters backends by groupRef to only process workloads belonging to the configured MCPGroup.
Namespace Scoping:
- Each BackendWatcher (and its reconciler) is scoped to a SINGLE namespace
- The controller-runtime manager is configured with DefaultNamespaces (single namespace; see the sketch after this list)
- Backend IDs use name-only format (no namespace prefix) because namespace collisions are impossible
- This matches how the discoverer stores backends (ID = resource.Name)
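As a rough illustration of that scoping, a manager can restrict its informer cache to one namespace via controller-runtime's cache options. This is a minimal sketch assuming controller-runtime v0.16+ (where cache.Options exposes DefaultNamespaces); newSingleNamespaceManager is a hypothetical helper, not the package's actual construction code:

import (
    "k8s.io/client-go/rest"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/cache"
)

// newSingleNamespaceManager builds a controller-runtime manager whose informer
// cache only lists and watches objects in the given namespace.
func newSingleNamespaceManager(cfg *rest.Config, namespace string) (ctrl.Manager, error) {
    return ctrl.NewManager(cfg, ctrl.Options{
        Cache: cache.Options{
            // Restrict all informers to a single namespace.
            DefaultNamespaces: map[string]cache.Config{
                namespace: {},
            },
        },
    })
}

Because every informer in such a manager is restricted to one namespace, two watched resources can never share a name, which is why backend IDs can safely drop the namespace prefix.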
Design Philosophy:
- Reuses existing conversion logic from workloads.Discoverer.GetWorkloadAsVMCPBackend()
- Filters workloads by groupRef before conversion (security + performance)
- Handles both MCPServer and MCPRemoteProxy resources
- Updates DynamicRegistry which triggers version-based cache invalidation
- Watches ExternalAuthConfig for auth changes (critical security path)
- Does NOT watch Secrets directly (performance optimization)
Reconciliation Flow:
- Fetch resource (try MCPServer, then MCPRemoteProxy)
- If not found (deleted) → Remove from registry
- If groupRef doesn't match → Remove from registry (moved to different group)
- Convert to vmcp.Backend using discoverer
- If conversion fails or returns nil (auth failed) → Remove from registry
- Upsert backend to registry (triggers version increment + cache invalidation)
func (*BackendReconciler) Reconcile ¶ added in v0.7.0
func (r *BackendReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error)
Reconcile handles MCPServer and MCPRemoteProxy events, updating the DynamicRegistry.
This method is called by controller-runtime whenever:
- A watched resource (MCPServer, MCPRemoteProxy, ExternalAuthConfig) changes
- An event handler maps a resource change to this reconcile request
The reconciler filters by groupRef to only process backends belonging to the configured MCPGroup, ensuring security isolation between vMCP servers.
Returns:
- ctrl.Result{}, nil: Reconciliation succeeded, no requeue needed
- ctrl.Result{}, err: Reconciliation failed, controller-runtime will requeue
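The concrete implementation is not reproduced here. The following sketch only illustrates the reconciliation flow listed under the type documentation above; the CRD API package (shown as mcpv1alpha1), the Spec.GroupRef field, the discoverer call signature, and the registry methods (UpsertBackend, RemoveBackend) are illustrative assumptions rather than the package's actual identifiers:

// Sketch of the reconciliation flow; not the actual implementation.
// Assumed imports: "context", apierrors "k8s.io/apimachinery/pkg/api/errors",
// ctrl "sigs.k8s.io/controller-runtime", "sigs.k8s.io/controller-runtime/pkg/client",
// and the CRD types as mcpv1alpha1.
func (r *BackendReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var (
        obj      client.Object
        groupRef string
    )

    // 1. Fetch the resource: try MCPServer first, then MCPRemoteProxy.
    server := &mcpv1alpha1.MCPServer{}
    err := r.Get(ctx, req.NamespacedName, server)
    switch {
    case err == nil:
        obj, groupRef = server, server.Spec.GroupRef // GroupRef field name is an assumption
    case apierrors.IsNotFound(err):
        proxy := &mcpv1alpha1.MCPRemoteProxy{}
        if perr := r.Get(ctx, req.NamespacedName, proxy); perr == nil {
            obj, groupRef = proxy, proxy.Spec.GroupRef
        } else if apierrors.IsNotFound(perr) {
            // 2. Deleted: remove from the registry (backend IDs are name-only).
            r.Registry.RemoveBackend(req.Name) // method name is an assumption
            return ctrl.Result{}, nil
        } else {
            return ctrl.Result{}, perr
        }
    default:
        return ctrl.Result{}, err
    }

    // 3. Moved to a different MCPGroup: remove it from this vMCP's registry.
    if groupRef != r.GroupRef {
        r.Registry.RemoveBackend(req.Name)
        return ctrl.Result{}, nil
    }

    // 4. Convert to vmcp.Backend via the shared discoverer.
    backend, err := r.Discoverer.GetWorkloadAsVMCPBackend(ctx, obj.GetName()) // signature assumed
    if err != nil || backend == nil {
        // 5. Conversion or auth failure: drop the backend rather than serve it broken.
        r.Registry.RemoveBackend(req.Name)
        return ctrl.Result{}, err
    }

    // 6. Upsert: bumps the registry version, which lazily invalidates caches.
    r.Registry.UpsertBackend(backend) // method name is an assumption
    return ctrl.Result{}, nil
}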
func (*BackendReconciler) SetupWithManager ¶ added in v0.7.0
func (r *BackendReconciler) SetupWithManager(mgr ctrl.Manager) error
SetupWithManager registers the BackendReconciler with the controller manager.
This method configures the reconciler to watch:
- MCPServers (secondary watch via Watches() with groupRef filtering)
- MCPRemoteProxies (mapped via event handler with groupRef filter)
- MCPExternalAuthConfigs (mapped to servers/proxies that reference them)
Note: We use Watches() instead of For() for MCPServer because MCPServerReconciler is already the primary controller. Using For() in multiple controllers causes reconciliation conflicts and race conditions.
The reconciler does NOT watch Secrets directly for performance reasons. Secrets change frequently for unrelated reasons (TLS certs, app configs, etc.). Auth updates will trigger via ExternalAuthConfig changes or pod restarts.
Watch Design:
- Watches(&MCPServer{}) - Secondary watch with groupRef filter
- Watches(&MCPRemoteProxy{}) - Secondary watch with groupRef filter
- Watches(&ExternalAuthConfig{}) - Maps to servers/proxies that reference it
All watches are scoped to the reconciler's namespace (configured in BackendWatcher).
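A wiring along these lines would express that watch design with controller-runtime's builder. This is a sketch assuming controller-runtime v0.15+ (where Watches accepts a client.Object); the controller name, the map functions (mapIfInGroup, mapAuthConfigToWorkloads), and the exact CRD type names are illustrative assumptions, not the package's actual code:

// Sketch only; the actual SetupWithManager is not reproduced here.
// Assumed imports: ctrl "sigs.k8s.io/controller-runtime",
// "sigs.k8s.io/controller-runtime/pkg/handler", and the CRD types as mcpv1alpha1.
func (r *BackendReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        // No For(): MCPServerReconciler is already the primary controller for
        // MCPServer, so this controller registers secondary Watches() only and
        // therefore needs an explicit name.
        Named("vmcp-backend-watcher").
        Watches(&mcpv1alpha1.MCPServer{},
            handler.EnqueueRequestsFromMapFunc(r.mapIfInGroup)). // hypothetical groupRef filter
        Watches(&mcpv1alpha1.MCPRemoteProxy{},
            handler.EnqueueRequestsFromMapFunc(r.mapIfInGroup)).
        Watches(&mcpv1alpha1.MCPExternalAuthConfig{},
            handler.EnqueueRequestsFromMapFunc(r.mapAuthConfigToWorkloads)). // hypothetical mapper
        Complete(r)
}

The Named() call matters in this shape: without For(), controller-runtime requires an explicit controller name, and leaving MCPServerReconciler as the only For() owner avoids the reconciliation conflicts noted above.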
type BackendWatcher ¶
type BackendWatcher struct {
    // contains filtered or unexported fields
}
BackendWatcher wraps a controller-runtime manager for vMCP dynamic mode.
In K8s mode (outgoingAuth.source: discovered), this watcher runs informers that watch for backend changes in the referenced MCPGroup. When backends are added or removed, the watcher updates the DynamicRegistry, which invalidates derived caches through version-based lazy invalidation.
Design Philosophy:
- Wraps controller-runtime manager for lifecycle management
- Provides WaitForCacheSync for readiness probe gating
- Graceful shutdown on context cancellation
- Single responsibility: watch K8s resources and update registry
Static mode (CLI) skips this entirely - no controller-runtime, no informers.
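The DynamicRegistry itself is defined in the vmcp package and is not documented here. Purely to illustrate the version-based lazy invalidation mentioned above, a registry in this spirit might look like the following; none of these names are the actual vmcp API:

import "sync"

// versionedRegistry illustrates version-based lazy invalidation: every change
// bumps a version counter, and readers that cached a snapshot rebuild it when
// the version they captured no longer matches.
type versionedRegistry struct {
    mu       sync.RWMutex
    version  uint64
    backends map[string]Backend // keyed by name-only backend ID
}

// Backend is a stand-in for vmcp.Backend in this illustration.
type Backend struct {
    ID  string
    URL string
}

func (r *versionedRegistry) Upsert(b Backend) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.backends[b.ID] = b
    r.version++ // any snapshot taken at an older version is now stale
}

func (r *versionedRegistry) Remove(id string) {
    r.mu.Lock()
    defer r.mu.Unlock()
    if _, ok := r.backends[id]; ok {
        delete(r.backends, id)
        r.version++
    }
}

// Version lets callers detect staleness cheaply without copying the map.
func (r *versionedRegistry) Version() uint64 {
    r.mu.RLock()
    defer r.mu.RUnlock()
    return r.version
}

In this pattern, a consumer compares the version it recorded when building a derived view against the current version and rebuilds only when they differ, so a burst of backend changes costs one rebuild on the next read rather than one per change.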
func NewBackendWatcher ¶
func NewBackendWatcher(
    cfg *rest.Config,
    namespace string,
    groupRef string,
    registry vmcp.DynamicRegistry,
) (*BackendWatcher, error)
NewBackendWatcher creates a new backend watcher for vMCP dynamic mode.
This initializes a controller-runtime manager configured to watch resources in the specified namespace. The watcher will monitor the referenced MCPGroup and update the DynamicRegistry when backends are added or removed.
Parameters:
- cfg: Kubernetes REST config (typically from in-cluster config)
- namespace: Namespace to watch for resources
- groupRef: MCPGroup reference in "namespace/name" format
- registry: DynamicRegistry to update when backends change
Returns:
- *BackendWatcher: Configured watcher ready to Start()
- error: Configuration or initialization errors
Example:
restConfig, err := rest.InClusterConfig()
if err != nil {
    return err
}
registry := vmcp.NewDynamicRegistry(initialBackends)
watcher, err := k8s.NewBackendWatcher(restConfig, "default", "default/my-group", registry)
if err != nil {
    return err
}

go watcher.Start(ctx)

if !watcher.WaitForCacheSync(ctx) {
    return fmt.Errorf("cache sync failed")
}
func (*BackendWatcher) Start ¶
func (w *BackendWatcher) Start(ctx context.Context) error
Start starts the controller-runtime manager and blocks until context is cancelled.
This method runs the informers that watch for backend changes in the MCPGroup. It is designed to run in a background goroutine and will shut down gracefully when the context is cancelled.
Design Notes:
- Blocks until context cancellation (controller-runtime pattern)
- Graceful shutdown on context cancel
- Safe to call only once (subsequent calls will error)
Example:
go func() {
    if err := watcher.Start(ctx); err != nil {
        logger.Errorf("BackendWatcher stopped with error: %v", err)
    }
}()
func (*BackendWatcher) WaitForCacheSync ¶
func (w *BackendWatcher) WaitForCacheSync(ctx context.Context) bool
WaitForCacheSync waits for the watcher's informer caches to sync.
This is used by the /readyz endpoint to gate readiness until the watcher has populated its caches. This ensures the vMCP server doesn't serve requests until it has an accurate view of backends.
Parameters:
- ctx: Context with optional timeout for the wait operation
Returns:
- bool: true if caches synced successfully, false on timeout or error
Design Notes:
- Non-blocking if watcher not started (returns false)
- Respects context timeout (e.g., 5-second readiness probe timeout)
- Safe to call multiple times (idempotent)
Example (readiness probe):
func (s *Server) handleReadiness(w http.ResponseWriter, r *http.Request) {
    if s.backendWatcher != nil {
        ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
        defer cancel()
        if !s.backendWatcher.WaitForCacheSync(ctx) {
            w.WriteHeader(http.StatusServiceUnavailable)
            return
        }
    }
    w.WriteHeader(http.StatusOK)
}