service

package
v0.11.2
Published: Apr 8, 2026 License: Apache-2.0 Imports: 48 Imported by: 0

Documentation

Index

Constants

View Source
const (
	// Default indicates that the proxy will be used if any hetzner node is present.
	ProxyDefaultMode = "default"

	// Explicitly turn off the proxy.
	ProxyOffMode = "off"

	// Explicitly turn on the proxy.
	ProxyOnMode = "on"
)

Known Proxy modes as defined by the InputManifest spec.
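A minimal sketch of how these three modes could resolve to a final on/off decision; `resolveProxy` and the `hasHetznerNode` flag are hypothetical stand-ins, not part of this package:

```go
package main

import "fmt"

const (
	ProxyDefaultMode = "default"
	ProxyOffMode     = "off"
	ProxyOnMode      = "on"
)

// resolveProxy is a hypothetical helper: in "default" mode the proxy is
// used only when the cluster contains a hetzner node; "on" and "off"
// force the decision regardless of the nodepool providers.
func resolveProxy(mode string, hasHetznerNode bool) bool {
	switch mode {
	case ProxyOnMode:
		return true
	case ProxyOffMode:
		return false
	default: // ProxyDefaultMode or unset
		return hasHetznerNode
	}
}

func main() {
	fmt.Println(resolveProxy(ProxyDefaultMode, true))  // true
	fmt.Println(resolveProxy(ProxyDefaultMode, false)) // false
	fmt.Println(resolveProxy(ProxyOffMode, true))      // false
}
```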

View Source
const (
	// PendingTick represents the interval at which each manifest state is checked
	// while in the [manifest.Pending] state.
	PendingTick = 20 * time.Second

	// Tick represents the interval at which each manifest state is checked while
	// not in the [manifest.Pending] state.
	Tick = 1 * time.Second
)
View Source
const HostnameHashLength = 17

Length of the hash that is used for generating hostnames for nodepools that do not have one assigned in the parsed manifest.Manifest.

View Source
const PreviouslyCachedWorkflowResults = 3

Variables

View Source
var (
	// Port on which the grpc server will be listening.
	Port = envs.GetOrDefaultInt("MANAGER_PORT", 50055)

	// Durable name of this service.
	DurableName = envs.GetOrDefault("MANAGER_DURABLE_NAME", "manager")

	// Name used for health checking via the grpc health check.
	HealthCheckReadinessName = envs.GetOrDefault("MANAGER_HEALTHCHECK_READINESS_SERVICE_NAME", "manager-readiness")
	HealthCheckLivenessName  = envs.GetOrDefault("MANAGER_HEALTHCHECK_LIVENESS_SERVICE_NAME", "manager-liveness")

	// Ack wait time in minutes for processing incoming NATS messages.
	AckWait = time.Duration(envs.GetOrDefaultInt("MANAGER_ACK_WAIT_TIME", 8)) * time.Minute

	// Ticks for infrastructure refresh are the number of Manager ticks that
	// are set after each scheduled task. This value is decremented on every iteration
	// of the reconciliation loop when no task has been identified.
	//
	// Once this value reaches zero, a refresh of the infrastructure will be scheduled.
	TicksForInfrastructureRefresh = int32(envs.GetOrDefaultInt("MANAGER_TICK_FOR_INFRA_REFRESH", 100))

	// Time after which a Node whose Ready status is still false will be
	// replaced by claudie.
	TimeForNodeDeletion = time.Duration(envs.GetOrDefaultInt("MANAGER_TIME_FOR_NODE_DELETION", 12)) * time.Minute
)

Functions

func CreateNodePoolForRollingUpdate added in v0.10.0

func CreateNodePoolForRollingUpdate(
	current *spec.Clusters,
	cid *LoadBalancerIdentifier,
	nodepool string,
	newTemplates *spec.TemplateRepository,
) (*spec.NodePool, error)

Creates a new nodepool from the 'nodepool' reference passed in the 'current' state spec.Clusters. The newly returned nodepool does not share any memory with any of the passed in parameters and can be used standalone.

func CreatePipeline added in v0.10.0

func CreatePipeline(clusters *spec.Clusters, isCreate bool) []*spec.Stage

CreatePipeline returns the pipeline for creating a cluster from scratch based on the passed in state.

func HandleKubernetesUnreachableNodes added in v0.10.0

func HandleKubernetesUnreachableNodes(logger zerolog.Logger, r KubernetesUnreachableNodes) (*spec.TaskEvent, error)

Based on the data provided via the HealthCheckStatus and [UnreachableNodes], determines what should be done to unblock the unreachable nodes. The decision is returned via the (*spec.TaskEvent, [error]) pair.

If an error is returned it means that the task cannot be scheduled due to needing input from the user as to what should be done next with the unreachable state.

On no error there are two possible outcomes:

  • A task is scheduled which should be worked on next with a higher priority to resolve the unreachable nodes.
  • Nothing is returned (nil, nil), meaning that there is no task and no error; the infrastructure is ok and reachable.

If an *spec.TaskEvent is scheduled it does not point to or share any memory with the two passed in states.

func HandleLoadBalancerUnreachableNodes added in v0.10.0

func HandleLoadBalancerUnreachableNodes(r LoadBalancerUnreachableNodes) (*spec.TaskEvent, error)

Similar to HandleKubernetesUnreachableNodes but works with the loadbalancer nodes.

func KubernetesDeletions added in v0.10.0

func KubernetesDeletions(r KubernetesReconciliate) *spec.TaskEvent

KubernetesDeletions returns kubernetes cluster changes that can be done/executed after handling addition/modification changes to the kubernetes clusters, and deletions of loadbalancers. Assumes that both the current and desired spec.Clusters were not modified since the HealthCheckStatus and KubernetesDiffResult were computed, and that all of the Cached Indices within the KubernetesDiffResult are not invalidated. This function does not modify the input in any way, and the returned spec.TaskEvent does not hold or share any memory related to the input.

func KubernetesLowPriority added in v0.10.0

func KubernetesLowPriority(r KubernetesReconciliate) *spec.TaskEvent

KubernetesLowPriority handles the very last low priority tasks that should be worked on, after all the other changes are done. Assumes that both the current and desired spec.Clusters were not modified since the HealthCheckStatus and KubernetesDiffResult were computed, and that all of the Cached Indices within the KubernetesDiffResult are not invalidated. This function does not modify the input in any way, and the returned spec.TaskEvent does not hold or share any memory related to the input.

func KubernetesModifications added in v0.10.0

func KubernetesModifications(r KubernetesReconciliate) *spec.TaskEvent

KubernetesModifications returns kubernetes cluster changes that can be done/executed before handling any deletion changes of clusters, either kubernetes or loadbalancers. Assumes that both the current and desired spec.Clusters were not modified since the HealthCheckStatus and KubernetesDiffResult were computed, and that all of the Cached Indices within the KubernetesDiffResult are not invalidated. This function does not modify the input in any way, and the returned spec.TaskEvent does not hold or share any memory related to the input.

func LoadBalancersLowPriority added in v0.10.0

func LoadBalancersLowPriority(r LoadBalancersReconciliate) *spec.TaskEvent

LoadBalancersLowPriority handles the very last low priority tasks that should be worked on, after all the other changes are done. Assumes that both the current and desired spec.Clusters were not modified since the HealthCheckStatus and LoadBalancersDiffResult were computed, and that all of the Cached Indices within the LoadBalancersDiffResult are not invalidated. This function does not modify the input in any way, and the returned spec.TaskEvent does not hold or share any memory related to the input.

func NodePoolsView added in v0.10.0

func NodePoolsView(info *spec.ClusterInfo) (dynamic NodePoolsViewType, static NodePoolsViewType)

NodePoolsView returns a view into the individual dynamic and static NodePools of the passed in spec.ClusterInfo struct and returns them as NodePoolsViewType values.

func PopulateDefaultHealthcheckRole added in v0.10.0

func PopulateDefaultHealthcheckRole(state *spec.Clusters)

For each spec.Clusters.Loadbalancers adds a role with the manifest.HealthcheckPort if it is not already present.

func PopulateDnsHostName added in v0.10.0

func PopulateDnsHostName(state *spec.Clusters)

PopulateDnsHostName generates a random spec.Dns.Hostname for all spec.Clusters.LoadBalancers that have their spec.Dns.Hostname empty. The generated hostname is a random sequence of length HostnameHashLength.
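A sketch of how a random hostname hash of HostnameHashLength characters could be produced; the alphabet and the randomHash helper are assumptions for illustration, not the package's actual implementation:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

const HostnameHashLength = 17

// randomHash returns a cryptographically random string of length n drawn
// from a DNS-safe lowercase alphanumeric alphabet (assumed here).
func randomHash(n int) string {
	const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
	out := make([]byte, n)
	for i := range out {
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			panic(err)
		}
		out[i] = alphabet[idx.Int64()]
	}
	return string(out)
}

func main() {
	h := randomHash(HostnameHashLength)
	fmt.Println(len(h)) // 17
}
```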

func PopulateDynamicNodes added in v0.10.0

func PopulateDynamicNodes(clusterID string, nodepool *spec.NodePool)

Based on `len(nodepool.Nodes)` and the spec.Dynamic_NodePool.Count, generates missing Dynamic Node entries such that all nodes within that nodepool will have unique names assigned to them. If the passed in `nodepool` is not dynamic the function panics.
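The fill-to-count behaviour described above can be sketched as follows; populateNodes and its numeric naming scheme are hypothetical stand-ins for the real generator:

```go
package main

import "fmt"

// populateNodes appends generated names to the existing node names until
// the nodepool reaches the target count, keeping every name unique.
// The "%s-%02d" scheme is an assumption for illustration only.
func populateNodes(nodepool string, existing []string, count int) []string {
	seen := make(map[string]bool, len(existing))
	for _, n := range existing {
		seen[n] = true
	}
	out := append([]string(nil), existing...)
	for i := 1; len(out) < count; i++ {
		name := fmt.Sprintf("%s-%02d", nodepool, i)
		if !seen[name] {
			seen[name] = true
			out = append(out, name)
		}
	}
	return out
}

func main() {
	fmt.Println(populateNodes("worker", []string{"worker-01"}, 3))
	// [worker-01 worker-02 worker-03]
}
```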

func PopulateDynamicNodesForClusters added in v0.10.0

func PopulateDynamicNodesForClusters(c *spec.Clusters)

Same as PopulateDynamicNodes but goes over all of the clusters in `c`.

func PopulateEntriesForNewClusters added in v0.10.0

func PopulateEntriesForNewClusters(
	current *map[string]*spec.ClusterState,
	desired map[string]*spec.Clusters,
)

func PopulateEnvoyAdminPorts added in v0.10.0

func PopulateEnvoyAdminPorts(state *spec.Clusters)

For each spec.Clusters.LoadBalancers generates a port for the envoy admin interface. The port is used from the claudie reserved range.

func PopulateSSHKeys added in v0.10.0

func PopulateSSHKeys(state *spec.Clusters) error

PopulateSSHKeys will generate SSH keypair for each nodepool that does not yet have a keypair assigned.

func PostKubernetesDiff added in v0.10.0

func PostKubernetesDiff(r LoadBalancersReconciliate) *spec.TaskEvent

PostKubernetesDiff returns load balancer changes that can be done/executed after handling addition/modification changes to the kubernetes clusters. Assumes that both the current and desired spec.Clusters were not modified since the HealthCheckStatus and LoadBalancersDiffResult were computed, and that all of the Cached Indices within the LoadBalancersDiffResult are not invalidated. This function does not modify the input in any way, and the returned spec.TaskEvent does not hold or share any memory related to the input.

func PreKubernetesDiff added in v0.10.0

func PreKubernetesDiff(r LoadBalancersReconciliate) *spec.TaskEvent

PreKubernetesDiff returns load balancer changes that can be done/executed before handling any changes to the kubernetes clusters. Assumes that both the current and desired spec.Clusters were not modified since the HealthCheckStatus and LoadBalancersDiffResult were computed, and that all of the Cached Indices within the LoadBalancersDiffResult are not invalidated. This function does not modify the input in any way, and the returned spec.TaskEvent does not hold or share any memory related to the input.

func ProcessTask added in v0.10.0

func ProcessTask(ctx context.Context, stores Stores, work Work) (acknowledge bool)

Processes the Message from the NATS message queue.

Note that NATS follows at-least-once delivery, thus messages may be re-delivered for various reasons, such as a failed ack. The 'acknowledge' return parameter tells whether the message should be acked or not. If not, the message is guaranteed to be re-delivered; otherwise the message is acked and removed from the message queue.
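The at-least-once contract around the 'acknowledge' return value can be illustrated with a small simulation; deliver and the inline handler below are stand-ins, not the real NATS plumbing:

```go
package main

import "fmt"

// deliver simulates at-least-once semantics: a message is re-delivered
// until the handler reports that it may be acked, and the number of
// deliveries it took is returned.
func deliver(msg string, processTask func(string) bool) int {
	deliveries := 0
	for {
		deliveries++
		if processTask(msg) {
			return deliveries // acked, removed from the queue
		}
		// not acked: the queue re-delivers after the ack wait elapses
	}
}

func main() {
	attempts := 0
	n := deliver("task-1", func(string) bool {
		attempts++
		return attempts >= 3 // succeed on the third delivery
	})
	fmt.Println(n) // 3
}
```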

func ScheduleAddRoles added in v0.10.0

func ScheduleAddRoles(current, desired *spec.Clusters, cid, did LoadBalancerIdentifier, roles []string) *spec.TaskEvent

Adds the passed in roles to the loadbalancer identified by the passed in cid and did identifiers, from the desired spec.Clusters state into the current spec.Clusters state.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleAdditionLoadBalancerNodePools added in v0.10.0

func ScheduleAdditionLoadBalancerNodePools(
	current *spec.Clusters,
	desired *spec.Clusters,
	cid LoadBalancerIdentifier,
	did LoadBalancerIdentifier,
	diff *NodePoolsDiffResult,
	opts LoadBalancerNodePoolsOptions,
) *spec.TaskEvent

Schedules a task that will add new nodes/nodepools into the current state of the loadbalancer.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleAdditionsInNodePools added in v0.10.0

func ScheduleAdditionsInNodePools(
	current *spec.Clusters,
	desired *spec.Clusters,
	diff *NodePoolsDiffResult,
	opts K8sNodeAdditionOptions,
) *spec.TaskEvent

Schedules a task that will add new nodes/nodepools into the current state of the cluster.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleControlNodesPort6443 added in v0.10.0

func ScheduleControlNodesPort6443(current *spec.Clusters, open bool) *spec.TaskEvent

Configures the port 6443 manifest.APIServerPort on the control nodes of the spec.Clusters state. Based on the supplied value of open, the port is either opened or closed on all of the control nodes.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleCreateCluster added in v0.10.0

func ScheduleCreateCluster(desired *spec.Clusters) *spec.TaskEvent

Schedules a spec.TaskEvent task for creating the clusters in the passed in desired spec.Clusters.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleDeleteCluster added in v0.10.0

func ScheduleDeleteCluster(current *spec.Clusters) *spec.TaskEvent

Schedules a spec.TaskEvent task for deleting the clusters in the passed in current spec.Clusters.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleDeleteLoadBalancer added in v0.10.0

func ScheduleDeleteLoadBalancer(useProxy bool, current *spec.Clusters, cid LoadBalancerIdentifier) *spec.TaskEvent

Deletes the loadbalancer identified by the passed in cid from the current spec.Clusters state.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleDeleteRoles added in v0.10.0

func ScheduleDeleteRoles(current *spec.Clusters, cid LoadBalancerIdentifier, roles []string) *spec.TaskEvent

Removes the passed in roles from the loadbalancer identified by the passed in cid, from the current spec.Clusters state.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleDeletionLoadBalancerNodePools added in v0.10.0

func ScheduleDeletionLoadBalancerNodePools(
	current *spec.Clusters,
	cid LoadBalancerIdentifier,
	diff *NodePoolsDiffResult,
	opts LoadBalancerNodePoolsOptions,
) *spec.TaskEvent

Schedules a task that will remove nodes/nodepools from the current state of the loadbalancer.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleDeletionsInNodePools added in v0.10.0

func ScheduleDeletionsInNodePools(
	current *spec.Clusters,
	diff *NodePoolsDiffResult,
	opts K8sNodeDeletionOptions,
) *spec.TaskEvent

Schedules a task that will delete nodes/nodepools from the current state of the cluster.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleJoinLoadBalancer added in v0.10.0

func ScheduleJoinLoadBalancer(useProxy bool, current, desired *spec.Clusters, did LoadBalancerIdentifier) *spec.TaskEvent

Joins the loadbalancer identified by the passed in did from the desired spec.Clusters state into the existing current infrastructure of spec.Clusters.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleMoveApiEndpoint added in v0.10.0

func ScheduleMoveApiEndpoint(
	current *spec.Clusters,
	cid string,
	did string,
	change spec.ApiEndpointChangeState,
) *spec.TaskEvent

Moves the api endpoint of the kubernetes cluster from the source spec.Clusters state to the destination spec.Clusters state. The API endpoint is moved based on the provided spec.ApiEndpointChangeState. The supplied cid and did, which identify the IDs of the loadbalancers, must be valid; if they're not, the move may fail and yield a broken cluster. This function does not handle moving the api endpoint if the kubernetes cluster in spec.Clusters does not have Loadbalancers, i.e. only has kubernetes nodes.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleMoveNodePoolFromAutoscaled added in v0.10.0

func ScheduleMoveNodePoolFromAutoscaled(
	current *spec.Clusters,
	nodepool string,
) *spec.TaskEvent

Schedules a task that will move the nodepool from the current state of the cluster from being autoscaled to fixed.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleMoveNodePoolToAutoscaled added in v0.10.0

func ScheduleMoveNodePoolToAutoscaled(
	current *spec.Clusters,
	nodepool string,
	config *spec.AutoscalerConf,
) *spec.TaskEvent

Schedules a task that will move the nodepool from the current state of the cluster from a fixed size to autoscaled type.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func SchedulePatchNodes added in v0.10.0

func SchedulePatchNodes(current *spec.Clusters, diff LabelsTaintsAnnotationsDiffResult) *spec.TaskEvent

Schedules a task that will re-patch the nodes with the new `taints`,`annotations`,`labels` of the cluster.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleProxyOff added in v0.10.0

func ScheduleProxyOff(current, desired *spec.Clusters) *spec.TaskEvent

Schedules a task that will turn off the HttpProxy and NoProxy env variables across the nodes of the cluster.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleProxyOn added in v0.10.0

func ScheduleProxyOn(current, desired *spec.Clusters) *spec.TaskEvent

Schedules a task that will turn on the HttpProxy and NoProxy env variables across the nodes of the cluster.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleRawDeleteLoadBalancer added in v0.10.0

func ScheduleRawDeleteLoadBalancer(
	current *spec.Clusters,
	cid LoadBalancerIdentifier,
	unreachable *spec.Unreachable,
) *spec.TaskEvent

Deletes the loadbalancer identified by the passed in cid from the spec.Clusters state. The deletion only removes the infrastructure, if any. Contrary to how ScheduleDeleteLoadBalancer works, this function will not run/execute any other stages as part of its delete pipeline. This task is useful for scenarios where the loadbalancer infrastructure is unresponsive and needs to be replaced. The ansibler stage will not be called at all, thus there should be other mechanisms in place to reconcile the ansible stage.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleReconcileRoleTargetPools added in v0.10.0

func ScheduleReconcileRoleTargetPools(
	current *spec.Clusters,
	desired *spec.Clusters,
	cid LoadBalancerIdentifier,
	did LoadBalancerIdentifier,
) *spec.TaskEvent

Reconciles the TargetPools in roles of the loadbalancer identified by the passed in cid and did, from the desired spec.Clusters state into the current spec.Clusters state.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleRefreshInfrastructure added in v0.10.0

func ScheduleRefreshInfrastructure(current *spec.Clusters) *spec.TaskEvent

func ScheduleRefreshLoadBalancerRoleExternalSettings added in v0.10.0

func ScheduleRefreshLoadBalancerRoleExternalSettings(
	current *spec.Clusters,
	desired *spec.Clusters,
	cid LoadBalancerIdentifier,
	did LoadBalancerIdentifier,
	role string,
) *spec.TaskEvent

Schedules the refresh of the loadbalancer after the role settings were adjusted.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleRefreshLoadBalancerRoleInternalSettings added in v0.10.0

func ScheduleRefreshLoadBalancerRoleInternalSettings(
	current *spec.Clusters,
	desired *spec.Clusters,
	cid LoadBalancerIdentifier,
	did LoadBalancerIdentifier,
	role string,
) *spec.TaskEvent

Schedules the refresh of the loadbalancer after the role settings were adjusted.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleRefreshVPN added in v0.10.0

func ScheduleRefreshVPN(usesProxy bool, current *spec.Clusters) *spec.TaskEvent

Schedules a spec.TaskEvent task for reconciling the VPN across the nodes of the clusters in the passed in spec.Clusters. If a proxy is used within the cluster, the proxy Envs will also be refreshed.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleReplaceDns added in v0.10.0

func ScheduleReplaceDns(
	useProxy bool,
	current *spec.Clusters,
	desired *spec.Clusters,
	cid LoadBalancerIdentifier,
	did LoadBalancerIdentifier,
	apiEndpoint bool,
) *spec.TaskEvent

Replaces the spec.DNS in the current state with the spec.DNS from the desired state. Based on the additional information provided via the apiEndpoint boolean, the function will include in the scheduled task steps to interpret the old spec.DNS as the API endpoint and move it to the new spec.DNS.Endpoint.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func ScheduleRollingUpdate added in v0.10.0

func ScheduleRollingUpdate(
	current *spec.Clusters,
	np string,
	templates *spec.TemplateRepository,
	opts K8sNodeAdditionOptions,
) *spec.TaskEvent

Schedules the addition of a new nodepool with the new templates.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleRollingUpdateLoadBalancer added in v0.10.0

func ScheduleRollingUpdateLoadBalancer(
	current *spec.Clusters,
	cid LoadBalancerIdentifier,
	np string,
	templates *spec.TemplateRepository,
	opts LoadBalancerNodePoolsOptions,
) *spec.TaskEvent

Schedules the addition of a new nodepool with the new templates.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleTransferApiEndpoint added in v0.10.0

func ScheduleTransferApiEndpoint(current *spec.Clusters, nodepool, node string) *spec.TaskEvent

Schedules a task that transfers the Api endpoint from the current node of the kubernetes cluster to the new desired node within the cluster.

The returned spec.TaskEvent does not point to or share any memory with the passed in state.

func ScheduleUpgradeKubernetesVersion added in v0.10.0

func ScheduleUpgradeKubernetesVersion(current, desired *spec.Clusters) *spec.TaskEvent

Schedules a task that will update the kubernetes version to the new desired version of the cluster.

The returned spec.TaskEvent does not point to or share any memory with the two passed in states.

func UsesProxy added in v0.10.0

func UsesProxy(k8s *spec.K8Scluster) bool

Based on the values in the spec.K8Scluster.InstallationProxy of the provided state, reports whether the proxy should be used within that spec.Clusters state.

Types

type Annotations added in v0.10.0

type Annotations = map[string]string

type DiffResult added in v0.10.0

type DiffResult struct {
	Kubernetes    KubernetesDiffResult
	LoadBalancers LoadBalancersDiffResult
}

func Diff

func Diff(old, desired *spec.Clusters) DiffResult

type HealthCheckStatus added in v0.10.0

type HealthCheckStatus struct {
	// HealthCheck on the 6443 port, which is used for the API server
	// for the kubernetes cluster.
	ApiEndpoint struct {
		// Whether the API endpoint could be reached. The check is done
		// by trying to reach the cluster via `kubectl` for the [spec.K8Scluster.Kubeconfig].
		//
		// NOTE: This does not necessarily mean the cluster itself is down;
		// it could be the case that the management cluster has network issues
		// and the packets could not reach the cluster.
		Unreachable bool
	}

	Cluster struct {
		// Nodes, as returned by kubectl get nodes.
		//
		// Note that some of the fields may be unset, as
		// the current state known to claudie could differ
		// from the actual nodes inside the kubernetes cluster.
		Nodes map[string]*NodeDescription

		// Whether the Port 6443 is open on the control nodes.
		ControlNodesHave6443 bool

		// Whether there is a drift in the number of wireguard peers.
		VpnDrift bool
	}
}

HealthCheckStatus reports the status of the Infrastructure.

func HealthCheck added in v0.10.0

func HealthCheck(logger zerolog.Logger, state *spec.Clusters) HealthCheckStatus

HealthCheck performs a healthcheck across the kubernetes cluster of the passed in spec.Clusters state.

type K8sNodeAdditionOptions added in v0.10.0

type K8sNodeAdditionOptions struct {
	UseProxy     bool
	HasApiServer bool
	IsStatic     bool
}

type K8sNodeDeletionOptions added in v0.10.0

type K8sNodeDeletionOptions struct {
	UseProxy     bool
	HasApiServer bool
	IsStatic     bool

	// Optional unreachable infrastructure that will
	// be passed along the scheduled deletion [spec.TaskEvent]
	Unreachable *spec.Unreachable
}

type KubernetesDiffResult added in v0.10.0

type KubernetesDiffResult struct {
	// Whether the kubernetes version changed.
	Version bool

	// Whether proxy settings changed.
	Proxy ProxyDiffResult

	// Diff in the Dynamic NodePools of the cluster.
	Dynamic NodePoolsDiffResult

	// Diff in the Static NodePools of the cluster.
	Static NodePoolsDiffResult

	// Nodepools to switch to autoscaling nodepools.
	ChangedToAutoscaled PendingAutoscaledNodePoolTransitions

	// Nodepools to switch to fixed nodepools.
	ChangedToFixed PendingFixedNodePoolTransitions

	// Dynamic nodes that are present in both the current and
	// desired state and are marked with [spec.NodeStatus_MarkedForDeletion]
	//
	// The reason it's included in the diff is mostly that it's a
	// pending diff to be handled at some point in the future.
	PendingDynamicDeletions PendingDeletionsViewType

	// Static nodes that are present in both the current and
	// desired state and are marked with [spec.NodeStatus_MarkedForDeletion]
	//
	// The reason it's included in the diff is mostly that it's a
	// pending diff to be handled at some point in the future.
	PendingStaticDeletions PendingDeletionsViewType

	// State of the Api endpoint for the kubernetes cluster.
	// If the kubernetes cluster has no api endpoint, but it
	// is in one of the loadbalancers attached to the cluster
	// both of the values will be empty.
	ApiEndpoint struct {
		// ID of the node which is used for the API server in the
		// current state. If there is none, the value will be empty.
		Current         string
		CurrentNodePool string

		// ID of the node which is used for the API server in the
		// desired state. If there is none, the value will be empty.
		Desired         string
		DesiredNodePool string
	}

	LabelsTaintsAnnotations LabelsTaintsAnnotationsDiffResult

	// RollingUpdates are nodepools present in both states but
	// having different templates commit hash.
	RollingUpdates PendingRollingUpdates
}

KubernetesDiffResult holds all of the changes between two different spec.K8Scluster states.

func KubernetesDiff added in v0.10.0

func KubernetesDiff(old, desired *spec.K8Scluster) KubernetesDiffResult

type KubernetesReconciliate added in v0.10.0

type KubernetesReconciliate struct {
	Hc      *HealthCheckStatus
	Diff    *KubernetesDiffResult
	Current *spec.Clusters
	Desired *spec.Clusters
}

Wraps data and diffs needed by the reconciliation for kubernetes cluster.

The Kubernetes reconciliation will only consider fixing drifts in the passed KubernetesDiffResult.

The others will only be used for guiding the decisions when scheduling the tasks and will not cause tasks to be scheduled that fix the drift in other Diff Results.

type KubernetesUnreachableNodes added in v0.10.0

type KubernetesUnreachableNodes struct {
	Hc         HealthCheckStatus
	NodeStatus UnknownNodeStatus
	Diff       *KubernetesDiffResult
	Current    *spec.Clusters
	Desired    *spec.Clusters
}

type Labels added in v0.10.0

type Labels = map[string]string

type LabelsTaintsAnnotationsDiffResult added in v0.10.0

type LabelsTaintsAnnotationsDiffResult struct {
	Deleted struct {
		LabelKeys       map[string][]string
		AnnotationsKeys map[string][]string
		TaintKeys       map[string][]*spec.Taint
	}
	Added struct {
		Labels      map[string]Labels
		Annotations map[string]Annotations
		Taints      map[string]Taints
	}
}
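A simplified sketch of how the Deleted/Added split above could be computed for a single nodepool's labels; diffLabels is a hypothetical helper, not this package's implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// diffLabels splits two label maps into the keys that disappeared and the
// key/value pairs that must be (re-)applied: keys missing from desired are
// deleted; keys that are new or whose value changed are added.
func diffLabels(current, desired map[string]string) (deletedKeys []string, added map[string]string) {
	added = map[string]string{}
	for k := range current {
		if _, ok := desired[k]; !ok {
			deletedKeys = append(deletedKeys, k)
		}
	}
	for k, v := range desired {
		if cur, ok := current[k]; !ok || cur != v {
			added[k] = v // new key, or changed value to re-apply
		}
	}
	sort.Strings(deletedKeys) // deterministic output
	return deletedKeys, added
}

func main() {
	cur := map[string]string{"zone": "eu", "tier": "gold"}
	des := map[string]string{"zone": "us", "owner": "claudie"}
	del, add := diffLabels(cur, des)
	fmt.Println(del) // [tier]
	fmt.Println(add) // map[owner:claudie zone:us]
}
```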

type LoadBalancerIdentifier added in v0.10.0

type LoadBalancerIdentifier struct {
	// Id of the loadbalancer
	Id string

	// Index of the loadbalancer
	//
	// If the state from which the index
	// was obtained is modified, this
	// index is invalidated and must not
	// be used further.
	Index int
}
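The index-invalidation caveat can be demonstrated with a trimmed stand-in; lbIdentifier and stillValid below are illustrative only, not part of the package API:

```go
package main

import "fmt"

// lbIdentifier mirrors the documented shape: Index caches a position in a
// loadbalancer slice and is only valid while that slice is unmodified.
type lbIdentifier struct {
	Id    string
	Index int
}

// stillValid reports whether the cached index still points at the same
// loadbalancer — the invariant the documentation says callers must respect.
func stillValid(lbs []string, id lbIdentifier) bool {
	return id.Index >= 0 && id.Index < len(lbs) && lbs[id.Index] == id.Id
}

func main() {
	lbs := []string{"lb-a", "lb-b", "lb-c"}
	id := lbIdentifier{Id: "lb-b", Index: 1}
	fmt.Println(stillValid(lbs, id)) // true

	// Removing lb-a shifts everything left; the cached index now points
	// at lb-c, so the identifier must not be reused.
	lbs = lbs[1:]
	fmt.Println(stillValid(lbs, id)) // false
}
```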

type LoadBalancerNodePoolsOptions added in v0.10.0

type LoadBalancerNodePoolsOptions struct {
	UseProxy    bool
	IsStatic    bool
	Unreachable *spec.Unreachable
}

type LoadBalancerUnreachableNodes added in v0.10.0

type LoadBalancerUnreachableNodes struct {
	NodeStatus UnknownNodeStatus
	Diff       *KubernetesDiffResult
	Current    *spec.Clusters
	Desired    *spec.Clusters
}

type LoadBalancersDiffResult added in v0.10.0

type LoadBalancersDiffResult struct {
	// Loadbalancers that were added in the desired state
	// but are not present in the current state.
	Added []LoadBalancerIdentifier

	// IDs of the loadbalancers that were deleted in the desired
	// state but are present in the current state.
	Deleted []LoadBalancerIdentifier

	// LoadBalancers present in both views but have inner differences.
	Modified map[string]ModifiedLoadBalancer

	// State of the Api endpoint for the Loadbalancers.
	ApiEndpoint struct {
		// ID of the loadbalancer with the Api role.
		Current string

		// ID of the loadbalancers to which the Api role should be transferred to.
		New string

		// What action should be done based on the difference between [Current] and [New].
		State spec.ApiEndpointChangeState
	}
}

LoadBalancersDiffResult holds all of the changes between two different spec.LoadBalancers states.

func LoadBalancersDiff added in v0.10.0

func LoadBalancersDiff(old, desired *spec.LoadBalancers) LoadBalancersDiffResult

type LoadBalancersReconciliate added in v0.10.0

type LoadBalancersReconciliate struct {
	Hc      *HealthCheckStatus
	Diff    *LoadBalancersDiffResult
	Proxy   *ProxyDiffResult
	Current *spec.Clusters
	Desired *spec.Clusters
}

Wraps data and diffs needed by the reconciliation for loadbalancers attached to the kubernetes cluster.

The LoadBalancer reconciliation will only consider fixing drifts in the passed LoadBalancersDiffResult. The others will only be used for guiding the decisions when scheduling the tasks and will not cause tasks to be scheduled that fix the drift in other Diff Results.

type ModifiedLoadBalancer added in v0.10.0

type ModifiedLoadBalancer struct {
	// Index in the previous state.
	//
	// If changes to the current state are
	// made the index is invalidated.
	CurrentIdx int

	// Index in the new state.
	//
	// If changes to the new state are
	// made the index is invalidated.
	DesiredIdx int

	// DNS changed
	DNS bool

	// Changes related to roles.
	Roles struct {
		// Name of the roles added.
		Added []string

		// Name of the roles deleted.
		Deleted []string

		// Name of the roles to which TargetPools were added
		// with the name of the Pools that were added.
		TargetPoolsAdded TargetPoolsViewType

		// Name of the roles from which TargetPools were deleted
		// with the name of the Pools that were deleted.
		TargetPoolsDeleted TargetPoolsViewType

		// Name of the roles for which internal settings have changed.
		// Internal Settings are:
		//  - Role.Settings
		//  - TargetPort
		InternalSettingsChanged []string

		// Name of the roles for which external settings have changed.
		// External settings are:
		// 	- Port
		// 	- Protocol
		ExternalSettingsChanged []string
	}

	// Diff in the Dynamic NodePools of the cluster.
	Dynamic NodePoolsDiffResult

	// Diff in the Static NodePools of the cluster.
	Static NodePoolsDiffResult

	// Dynamic nodes that are present in both the current and
	// desired state and are marked with [spec.NodeStatus_MarkedForDeletion]
	//
	// The reason it's included in the diff is mostly that it's a
	// pending diff to be handled at some point in the future.
	PendingDynamicDeletions PendingDeletionsViewType

	// Static nodes that are present in both the current and
	// desired state and are marked with [spec.NodeStatus_MarkedForDeletion].
	//
	// It is included in the diff mostly because it is a
	// pending diff to be handled at some point in the future.
	PendingStaticDeletions PendingDeletionsViewType

	// Nodepools whose templates have changed.
	RollingUpdate PendingRollingUpdates
}

type NodeDescription added in v0.11.2

type NodeDescription struct {
	K8sName string
	Ready   bool

	IsStatic   bool
	NodePool   string
	PublicIPv4 string
	IsControl  bool

	// May be omitted.
	// If absent, it can be interpreted as an unknown state.
	LastTransitionTime *metav1.Time
}

type NodeOutput added in v0.11.2

type NodeOutput struct {
	Kind string `json:"kind"`

	Metadata struct {
		Name string `json:"name"`
	} `json:"metadata"`

	Status struct {
		Conditions []corev1.NodeCondition
	}
}

type NodePoolsDiffResult added in v0.10.0

type NodePoolsDiffResult struct {
	// PartiallyDeleted holds which nodepools were partially deleted, meaning that
	// the nodepool is present in both [NodePoolsViewType] but some of the nodes
	// are missing.
	//
	// The data is stored as map[NodePool.Name][]node.Names.
	// Each entry in the map contains the NodePool and the Nodes that are
	// present in the first [NodePoolsViewType] but not in the other.
	PartiallyDeleted NodePoolsViewType

	// Deleted holds which nodepools were completely deleted, meaning that
	// they're present in one [NodePoolsViewType] but not in the other.
	//
	// The data is stored as map[NodePool.Name][]node.Names.
	// Each entry in the map contains all of the node Names that are
	// present in the first [NodePoolsViewType] but not in the other.
	Deleted NodePoolsViewType

	// Same as [NodePoolsDiffResult.PartiallyDeleted] but with newly added nodes.
	PartiallyAdded NodePoolsViewType

	// Same as [NodePoolsDiffResult.Deleted] but with newly added Nodepools and their nodes.
	Added NodePoolsViewType
}

NodePoolsDiffResult holds all of the changes between two different [NodePoolsViewType].

func NodePoolsDiff added in v0.10.0

func NodePoolsDiff(old, desired NodePoolsViewType) NodePoolsDiffResult

NodePoolsDiff calculates the difference between two NodePoolsViewType and returns the result as a NodePoolsDiffResult.

func (*NodePoolsDiffResult) IsEmpty added in v0.10.0

func (r *NodePoolsDiffResult) IsEmpty() bool

type NodePoolsViewType added in v0.10.0

type NodePoolsViewType = map[string][]string

NodePoolsViewType is an unordered view into the nodepools and their nodes that are read from a spec.K8Scluster or a spec.LBcluster.

type PendingAutoscaledNodePoolTransitions added in v0.10.0

type PendingAutoscaledNodePoolTransitions = map[string]*spec.AutoscalerConf

PendingAutoscaledNodePoolTransitions is an unordered view into the nodepools which have moved from the normal type to the autoscaled type.

type PendingDeletionsViewType added in v0.10.0

type PendingDeletionsViewType = map[string][]string

PendingDeletionsViewType is an unordered view into the nodepools and their nodes that are marked with spec.NodeStatus_MarkedForDeletion.

type PendingFixedNodePoolTransitions added in v0.10.0

type PendingFixedNodePoolTransitions = map[string]struct{}

PendingFixedNodePoolTransitions is an unordered view into the nodepools which have moved from the autoscaled type to the fixed type.

type PendingRollingUpdates added in v0.10.0

type PendingRollingUpdates map[string]*spec.TemplateRepository

PendingRollingUpdates is an unordered view into nodepools which are present in both the current and desired state but have different template versions, meaning that a rolling update is required for the infrastructure.

type ProxyChange added in v0.10.0

type ProxyChange uint8
const (
	ProxyNoChange ProxyChange = iota
	ProxyOff
	ProxyOn
	ProxyRefresh
)

type ProxyDiffResult added in v0.10.0

type ProxyDiffResult struct {
	// Whether the proxy is in use in current state.
	CurrentUsed bool

	// Whether the proxy is in use in the desired state.
	DesiredUsed bool

	// Whether there is any change from the current to the
	// desired state.
	Change ProxyChange
}

type ScheduleResult added in v0.9.1

type ScheduleResult uint8

ScheduleResult describes what has happened during the scheduling of the tasks.

const (
	// Reschedule describes the case where the manifest should be rescheduled again
	// after either error-ing or completing.
	Reschedule ScheduleResult = iota

	// NoReschedule describes the case where the manifest should not be rescheduled again
	// after either error-ing or completing.
	NoReschedule

	// Noop describes the case where from the reconciliation of the current and desired
	// state no new tasks were identified, or an error occurred during the process, thus
	// no changes are to be worked on and no need to update the representation in the external
	// storage.
	Noop

	// NotReady describes the case where the manifest is not ready to be scheduled yet,
	// But needs to update its Database representation as changes have been made to it.
	NotReady
)

type Service added in v0.10.0

type Service struct {
	pb.UnimplementedManagerServiceServer
	// contains filtered or unexported fields
}

func New added in v0.10.0

func New(ctx context.Context, opts ...grpc.ServerOption) (*Service, error)

func (*Service) GetConfig added in v0.10.0

func (s *Service) GetConfig(ctx context.Context, request *pb.GetConfigRequest) (*pb.GetConfigResponse, error)

func (*Service) Handler added in v0.10.0

func (s *Service) Handler(msg jetstream.Msg)

func (*Service) ListConfigs added in v0.10.0

func (*Service) MarkForDeletion added in v0.10.0

func (s *Service) MarkForDeletion(ctx context.Context, request *pb.MarkForDeletionRequest) (*pb.MarkForDeletionResponse, error)

func (*Service) MarkNodeForDeletion added in v0.10.0

func (s *Service) MarkNodeForDeletion(ctx context.Context, request *pb.MarkNodeForDeletionRequest) (*pb.MarkNodeForDeletionResponse, error)

func (*Service) NodePoolUpdateTargetSize added in v0.10.0

func (s *Service) NodePoolUpdateTargetSize(ctx context.Context, request *pb.NodePoolUpdateTargetSizeRequest) (*pb.NodePoolUpdateTargetSizeResponse, error)

func (*Service) PerformHealthCheckAndUpdateStatus added in v0.10.0

func (s *Service) PerformHealthCheckAndUpdateStatus()

func (*Service) Serve added in v0.10.0

func (s *Service) Serve() error

Serve will create a service goroutine for each connection.

func (*Service) Stop added in v0.10.0

func (s *Service) Stop() error

Stop will gracefully shutdown the gRPC server and the healthcheck server.

func (*Service) UpsertManifest added in v0.10.0

func (s *Service) UpsertManifest(ctx context.Context, request *pb.UpsertManifestRequest) (*pb.UpsertManifestResponse, error)

func (*Service) WatchForDoneOrErrorDocuments added in v0.10.0

func (s *Service) WatchForDoneOrErrorDocuments(ctx context.Context) error

func (*Service) WatchForPendingDocuments added in v0.10.0

func (s *Service) WatchForPendingDocuments(ctx context.Context) error

func (*Service) WatchForScheduledDocuments added in v0.10.0

func (s *Service) WatchForScheduledDocuments(ctx context.Context) error

type Stores added in v0.10.0

type Stores struct {
	// contains filtered or unexported fields
}

type Taints added in v0.10.0

type Taints = []*spec.Taint

type TargetPoolsViewType added in v0.10.0

type TargetPoolsViewType = map[string][]string

TargetPoolsViewType is an unordered view into the diff for target pools that are from a spec.Role.

type UnknownNodeStatus added in v0.11.2

type UnknownNodeStatus struct {
	// Nodepools and their nodes in the kubernetes cluster with unknown status
	// that is read from the kubernetes API.
	//
	// This structure contains nodes that are:
	//  - without a reachable endpoint.
	//  - present in the current state but not in the kubernetes cluster.
	//  - present in the kubernetes cluster but not in the current state.
	//  - present in both but with an Unknown status.
	//
	// If the API server of the cluster is unreachable, this will be empty.
	UnknownKubernetesNodes map[string][]NodeDescription

	// Loadbalancers attached to the kubernetes cluster and for each
	// of them Nodepools with the nodes for which the pings have failed.
	UnknownLoadBalancersNodes map[string]UnreachableIPv4Map
}

func HealthCheckNodeReachability added in v0.10.0

func HealthCheckNodeReachability(
	logger zerolog.Logger,
	state *spec.Clusters,
	hc HealthCheckStatus,
) (UnknownNodeStatus, error)

HealthCheckNodeReachability performs healthchecks across the nodes of the passed-in spec.Clusters state. If there are any issues with pinging the nodes, a non-nil error is returned, which can be interpreted as failing to fully determine whether a node is reachable.

type UnreachableIPv4Map added in v0.10.0

type UnreachableIPv4Map = map[string][]string

UnreachableIPv4Map holds the nodepools and all of the nodes within each nodepool that are unreachable via a Ping on the public IPv4 endpoint.

type Work added in v0.10.0

type Work struct {
	InputManifest string
	Cluster       string
	TaskID        string
	Stage         store.StageKind
	Result        *spec.TaskResult
}
