utils

package
v0.2.1
Published: Feb 24, 2026 License: MIT Imports: 29 Imported by: 0

Documentation

Constants

const (
	// MaxKubernetesNameLength is the maximum length for Kubernetes resource names
	MaxKubernetesNameLength  = 63
	// MaxKubernetesLabelLength is the maximum length for Kubernetes label values
	MaxKubernetesLabelLength = 63
	// DefaultHashLength is the default length of the hash suffix
	DefaultHashLength = 8
)
const (
	// LabelAMDGPUDeviceID is the primary label for AMD GPU device identification.
	// Values are PCI device IDs (e.g., "74a1" for MI300X).
	LabelAMDGPUDeviceID = "amd.com/gpu.device-id"

	// LabelAMDGPUDeviceIDBeta is the beta version of the device ID label.
	LabelAMDGPUDeviceIDBeta = "beta.amd.com/gpu.device-id"

	// LabelAMDGPUFamily is the label for AMD GPU family (e.g., "AI", "NV").
	LabelAMDGPUFamily = "amd.com/gpu.family"

	// LabelAMDGPUFamilyBeta is the beta version of the family label.
	LabelAMDGPUFamilyBeta = "beta.amd.com/gpu.family"

	// LabelAMDGPUVRAM is the label for AMD GPU VRAM capacity (e.g., "192G").
	LabelAMDGPUVRAM = "amd.com/gpu.vram"

	// LabelAMDGPUVRAMBeta is the beta version of the VRAM label.
	LabelAMDGPUVRAMBeta = "beta.amd.com/gpu.vram"
)

GPU node label keys for AMD GPUs.

const (
	VRAMSourceLabel   = "label"
	VRAMSourceStatic  = "static"
	VRAMSourceUnknown = "unknown"
)

VRAM source values for GetGPUVRAM function.

const (
	DebugLogLevel = 1
)
const (
	DefaultPVCHeadroomPercent = int32(10)
)
const EnvVarAIMEngineArgs = "AIM_ENGINE_ARGS"

EnvVarAIMEngineArgs is the env var name for AIM engine arguments that should be JSON-merged.

const MaxSupportedBytes = int64(7 << 60) // 7 EiB

MaxSupportedBytes is the maximum byte size that can be formatted (7 EiB). It is set safely below the int64 limit of 8 EiB - 1 byte to ensure formatted output stays within reasonable bounds.

const (
	// ResourcePrefixAMD is the resource name prefix for AMD GPUs.
	ResourcePrefixAMD = "amd.com/"
)

GPU resource name prefixes.

Variables

var (
	// ErrNegativeSize is returned when a negative byte size is provided.
	ErrNegativeSize = errors.New("byte size cannot be negative")

	// ErrSizeTooLarge is returned when the byte size exceeds the maximum supported value.
	ErrSizeTooLarge = errors.New("byte size exceeds maximum supported value (8 EiB)")
)
var KnownAmdDevices = map[string]string{

	"738c": "MI100",
	"738e": "MI100",
	"7408": "MI250X",
	"740c": "MI250X",
	"740f": "MI210",
	"7410": "MI210",
	"74a0": "MI300A",
	"74a1": "MI300X",
	"74a2": "MI308X",
	"74a5": "MI325X",
	"74a8": "MI308X",
	"74a9": "MI300X",
	"74b5": "MI300X",
	"74b6": "MI308X",
	"74b9": "MI325X",
	"74bd": "MI300X",
	"75a0": "MI350X",
	"75a3": "MI355X",
	"75b0": "MI350X",
	"75b3": "MI355X",

	"7460": "V710",
	"7461": "V710",
	"7448": "W7900",
	"744a": "W7900",
	"7449": "W7800",
	"745e": "W7800",
	"73a2": "W6900X",
	"73a3": "W6800",
	"73ab": "W6800X",
	"73a1": "V620",
	"73ae": "V620",

	"7550": "RX9070",
	"744c": "RX7900",
	"73af": "RX6900",
	"73bf": "RX6800",
}

KnownAmdDevices maps AMD GPU device IDs (PCI device IDs) to their commercial model names. This mapping is used to identify GPU models from node labels when the device ID is available. Device IDs are typically exposed by AMD GPU labelers (e.g., amd.com/gpu.device-id).

The mapping includes:

  • AMD Instinct accelerators (MI series): MI100, MI210, MI250X, MI300X, MI308X, MI325X, MI350X, MI355X
  • AMD Radeon Pro workstation GPUs: W6800, W6900X, W7800, W7900, V620, V710
  • AMD Radeon gaming GPUs: RX6800, RX6900, RX7900, RX9070

Note: Some device IDs map to the same model (e.g., multiple MI300X variants). Device IDs may represent different variants, revisions, or virtualization flavors (MxGPU, VF, HF).

var KnownGPUVRAM = map[string]string{

	"MI355X": "288G",
	"MI350X": "288G",
	"MI325X": "256G",
	"MI308X": "128G",
	"MI300X": "192G",
	"MI300A": "128G",
	"MI250X": "128G",
	"MI210":  "64G",
	"MI100":  "32G",

	"V710":   "32G",
	"W7900":  "48G",
	"W7800":  "32G",
	"W6900X": "32G",
	"W6800":  "32G",
	"W6800X": "32G",
	"V620":   "32G",

	"RX9070": "16G",
	"RX7900": "24G",
	"RX6900": "16G",
	"RX6800": "16G",
}

KnownGPUVRAM provides fallback VRAM values when node labels are unavailable. Values are per-GPU VRAM capacity in the format used by AMD device plugin labels (e.g., "192G"). This mapping is used when amd.com/gpu.vram or beta.amd.com/gpu.vram labels are not present.

var KnownVRAMTiers = []string{
	"16G", "24G", "32G", "48G", "64G", "128G", "192G", "256G", "288G",
}

KnownVRAMTiers is a sorted list of all known VRAM capacity values. Used for filtering GPUs by minimum VRAM requirement.

Functions

func ApplyHeadroomAndRound

func ApplyHeadroomAndRound(baseSizeBytes int64, headroomPercent int32) int64

ApplyHeadroomAndRound applies headroom percentage to a base size and rounds up to the nearest Gi. This ensures PVC sizes are clean, human-readable values (e.g., "421Gi" instead of "451936812032").

Parameters:

  • baseSizeBytes: The original size in bytes
  • headroomPercent: Percentage of extra space to add (0-100, e.g., 10 means 10% extra)

Returns:

  • The final size in bytes, rounded up to the nearest Gi boundary

Example:

  • Input: 9,094,593,249 bytes with 10% headroom
  • With headroom: 10,004,052,573 bytes (9.31 Gi)
  • Rounded: 10,737,418,240 bytes (10 Gi)
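
The arithmetic above can be sketched in a few lines (a hypothetical re-implementation for illustration, not the package's actual code):

```go
package main

import "fmt"

const gi = int64(1) << 30

// applyHeadroomAndRound sketches the documented behavior: add
// headroomPercent extra space, then round up to the next Gi boundary.
func applyHeadroomAndRound(baseSizeBytes int64, headroomPercent int32) int64 {
	withHeadroom := baseSizeBytes * (100 + int64(headroomPercent)) / 100
	return (withHeadroom + gi - 1) / gi * gi
}

func main() {
	fmt.Println(applyHeadroomAndRound(9094593249, 10)) // 10737418240 (10 Gi)
}
```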

func BuildKeychain

func BuildKeychain(
	ctx context.Context,
	clientset kubernetes.Interface,
	secretNamespace string,
	imagePullSecrets []corev1.LocalObjectReference,
) (authn.Keychain, error)

BuildKeychain creates an authn.Keychain for authenticating to container registries. It uses Kubernetes image pull secrets if provided, otherwise falls back to the default keychain.

Parameters:

  • ctx: Context for the operation
  • clientset: Kubernetes clientset for accessing secrets (can be nil)
  • secretNamespace: Namespace where secrets are located
  • imagePullSecrets: List of secret references for authentication

Returns:

  • authn.Keychain: Configured keychain for authentication
  • error: Any error encountered during keychain creation

func BuildOwnerReference

func BuildOwnerReference(obj client.Object, scheme *runtime.Scheme) metav1.OwnerReference

BuildOwnerReference creates a controller owner reference for the given object.

func CopyEnvVars

func CopyEnvVars(in []corev1.EnvVar) []corev1.EnvVar

CopyEnvVars returns a deep copy of the provided environment variables slice.

func CopyPullSecrets

CopyPullSecrets returns a deep copy of the provided image pull secrets slice.

func Debug

func Debug(logger logr.Logger, fmt string, keysAndValues ...any)

Debug logs a debug-level message using the provided logger. Debug messages are logged at verbosity level 1 and will only appear when the logger is configured to show debug output.

Parameters:

  • logger: The logr.Logger instance to use for logging
  • fmt: The message format string
  • keysAndValues: Optional key-value pairs for structured logging

func DeepMergeMap

func DeepMergeMap(dst, src map[string]any)

DeepMergeMap recursively merges src into dst. Values from src take precedence. Nested maps are merged recursively.
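
The merge semantics can be illustrated with a minimal sketch (not the package's actual implementation):

```go
package main

import "fmt"

// deepMergeMap sketches the documented semantics: src values win, and
// nested map[string]any values are merged recursively instead of replaced.
func deepMergeMap(dst, src map[string]any) {
	for k, sv := range src {
		if sm, ok := sv.(map[string]any); ok {
			if dm, ok := dst[k].(map[string]any); ok {
				deepMergeMap(dm, sm)
				continue
			}
		}
		dst[k] = sv
	}
}

func main() {
	dst := map[string]any{"a": 1, "nested": map[string]any{"x": 1, "y": 2}}
	src := map[string]any{"nested": map[string]any{"y": 3}}
	deepMergeMap(dst, src)
	nested := dst["nested"].(map[string]any)
	fmt.Println(dst["a"], nested["x"], nested["y"]) // 1 1 3
}
```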

func ExtractAMDModel

func ExtractAMDModel(labels map[string]string) string

ExtractAMDModel extracts the AMD GPU model name from node labels. It tries multiple label sources in order of preference:

  1. Device ID labels (amd.com/gpu.device-id or beta.amd.com/gpu.device-id) - most accurate
  2. Count-encoded device ID labels (e.g., amd.com/gpu.device-id.74a1=4)
  3. GPU family labels (amd.com/gpu.family or beta.amd.com/gpu.family)
  4. Count-encoded GPU family labels

Returns a normalized GPU model name (e.g., "MI300X") or empty string if not found. Device IDs are mapped using KnownAmdDevices for precise identification.
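
The first two steps of that preference order can be sketched as follows; the table subset and the helper shown here are illustrative, not the package's code:

```go
package main

import (
	"fmt"
	"strings"
)

// A tiny subset of the device-ID table, for illustration only.
var knownAmdDevices = map[string]string{"74a1": "MI300X", "75a3": "MI355X"}

// extractAMDModel sketches the documented lookup order for the two
// device-ID label forms: plain value, then count-encoded key suffix.
func extractAMDModel(labels map[string]string) string {
	if id, ok := labels["amd.com/gpu.device-id"]; ok {
		return knownAmdDevices[id]
	}
	for k := range labels {
		if id, ok := strings.CutPrefix(k, "amd.com/gpu.device-id."); ok {
			return knownAmdDevices[id]
		}
	}
	return ""
}

func main() {
	fmt.Println(extractAMDModel(map[string]string{"amd.com/gpu.device-id.74a1": "4"})) // MI300X
}
```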

func ExtractGPUModelFromNodeLabels

func ExtractGPUModelFromNodeLabels(labels map[string]string, resourceName string) string

ExtractGPUModelFromNodeLabels extracts the GPU model from node labels. Supports multiple AMD GPU label formats:

  • AMD: amd.com/gpu.device-id (primary), beta.amd.com/gpu.device-id, amd.com/gpu.family, and count-encoded variants (e.g., amd.com/gpu.device-id.74a1=4)

Returns a normalized GPU model name (e.g., "MI300X") or empty string if model cannot be determined. Nodes with GPU resources but insufficient labels will be excluded from template matching.

func FindSuccessfulPodForJob

func FindSuccessfulPodForJob(ctx context.Context, k8sClient client.Client, job *batchv1.Job) (*corev1.Pod, error)

FindSuccessfulPodForJob locates a successfully completed pod for the given job.

func FormatBytesHumanReadable

func FormatBytesHumanReadable(bytes int64) (string, error)

FormatBytesHumanReadable converts bytes to a human-readable string with two significant digits (e.g., "42 GiB", "1.5 TiB", "850 MiB"). Returns an error if bytes is negative or exceeds MaxSupportedBytes.

func GenerateDerivedName

func GenerateDerivedName(nameParts []string, opts ...NameOption) (string, error)

GenerateDerivedName creates a deterministic name for a derived resource. It combines multiple name parts with an optional hash suffix, ensuring the result is a valid Kubernetes name (lowercase alphanumeric and hyphens).

Format:

  • With hash: {part1}-{part2}-...-{partN}-{hash}
  • Without hash: {part1}-{part2}-...-{partN}

If the combined name exceeds the max length, the longest part is iteratively truncated.

Options:

  • WithHashSource(...): Values to hash for the suffix (required for hash)
  • WithHashLength(n): Number of hash characters (default: 8)
  • WithMaxLength(n): Maximum name length (default: 63)

Example:

name, _ := GenerateDerivedName([]string{"my-service", "temp"},
    WithHashSource("metric=latency", "precision=fp16"))
// Returns: "my-service-temp-a1b2c3d4"

name, _ := GenerateDerivedName([]string{"my-service", "temp-cache"})
// Returns: "my-service-temp-cache" (no hash)

name, _ := GenerateDerivedName([]string{"my-service"},
    WithHashSource(namespace), WithMaxLength(30))
// Returns truncated name to fit 30 chars

func GetAMDDeviceIDsForMinVRAM

func GetAMDDeviceIDsForMinVRAM(minVRAMBytes int64, gpuModel string) []string

GetAMDDeviceIDsForMinVRAM returns all AMD device IDs for GPUs that have VRAM >= minVRAMBytes.

If gpuModel is specified (non-empty), only returns device IDs for that specific model if it meets the VRAM requirement. Returns empty slice if the model doesn't meet the requirement.

If gpuModel is empty, returns device IDs for ALL GPU models meeting the VRAM requirement.

func GetAMDDeviceIDsForModel

func GetAMDDeviceIDsForModel(modelName string) []string

GetAMDDeviceIDsForModel returns all AMD device IDs that map to a given GPU model name. This is the inverse of MapAMDDeviceIDToModel, allowing lookup of all device IDs for a model. Example: GetAMDDeviceIDsForModel("MI300X") returns ["74a1", "74a9", "74b5", "74bd"]. Returns an empty slice if the model is not found or is not an AMD GPU.

func GetClusterGPUResources

func GetClusterGPUResources(ctx context.Context, k8sClient client.Client) (map[string]GPUResourceInfo, error)

GetClusterGPUResources returns an aggregated view of all GPU resources in the cluster. It scans all nodes and aggregates resources that start with "amd.com/". Returns a map where keys are GPU models (e.g., "MI300X") extracted from node labels, and values contain the resource name.

func GetGPUModelsWithMinVRAM

func GetGPUModelsWithMinVRAM(minVRAMBytes int64) []string

GetGPUModelsWithMinVRAM returns all GPU model names that have VRAM >= minVRAMBytes. Uses the static KnownGPUVRAM mapping to determine which models meet the requirement. Returns a list of normalized model names (e.g., ["MI300X", "MI325X", "MI355X"]).

func GetGPUVRAM

func GetGPUVRAM(gpuModel string, nodeLabels map[string]string) (vram string, source string)

GetGPUVRAM returns the VRAM capacity for a GPU, checking node labels first, then static mapping. Returns the VRAM value (e.g., "192G") and the source (VRAMSourceLabel, VRAMSourceStatic, or VRAMSourceUnknown).

func GetPVCHeadroomPercent

func GetPVCHeadroomPercent(spec *aimv1alpha1.AIMRuntimeConfigCommon) int32

GetPVCHeadroomPercent returns the PVC headroom percentage from the runtime config spec. If not set, returns the default value defined in DefaultPVCHeadroomPercent.

func GetVRAMTiersAboveThreshold

func GetVRAMTiersAboveThreshold(minVRAMBytes int64) []string

GetVRAMTiersAboveThreshold returns all known VRAM tier values that meet or exceed the threshold. The threshold should be a VRAM string (e.g., "64G") or a resource.Quantity string. Returns values in the format used by device plugin labels (e.g., ["64G", "128G", "192G", "256G", "288G"]).

func HasOwnerReference

func HasOwnerReference(refs []metav1.OwnerReference, uid types.UID) bool

HasOwnerReference checks if the given UID exists in the owner references list.

func IsGPUAvailable

func IsGPUAvailable(ctx context.Context, k8sClient client.Client, gpuModel string) (bool, error)

IsGPUAvailable checks if a specific GPU model is available in the cluster. The gpuModel parameter should be the GPU model name (e.g., "MI300X"), not the resource name. The input is normalized to handle variants like "MI300X (rev 2)" or "Instinct MI300X".

func IsGPUResource

func IsGPUResource(resourceName string) bool

IsGPUResource checks if a resource name represents a GPU resource. Returns true if the resource name starts with "amd.com/".

func IsJobComplete

func IsJobComplete(job *batchv1.Job) bool

IsJobComplete returns true if the job has completed (successfully or failed).

func IsJobFailed

func IsJobFailed(job *batchv1.Job) bool

IsJobFailed returns true if the job failed.

func IsJobSucceeded

func IsJobSucceeded(job *batchv1.Job) bool

IsJobSucceeded returns true if the job completed successfully.

func ListAvailableGPUs

func ListAvailableGPUs(ctx context.Context, k8sClient client.Client) ([]string, error)

ListAvailableGPUs returns a list of all GPU resource types available in the cluster.

func MakeRFC1123Compliant

func MakeRFC1123Compliant(s string) string

MakeRFC1123Compliant converts a string to be RFC 1123 compliant (lowercase, alphanumeric, hyphens, max 63 chars).

func MapAMDDeviceIDToModel

func MapAMDDeviceIDToModel(deviceID string) string

MapAMDDeviceIDToModel maps AMD device IDs to model names. Comprehensive mapping covering AMD Instinct, Radeon Pro, and Radeon GPUs.

func MergeConfigs

func MergeConfigs[T any](dst *T, srcs ...T) error

MergeConfigs merges multiple config structs with later configs taking precedence. Uses key-based merging for []corev1.EnvVar fields (by Name). The dst must be a pointer to the destination struct.

Example:

var resolved AIMRuntimeConfigCommon
err := MergeConfigs(&resolved, clusterConfig, namespaceConfig, serviceConfig)

func MergeEnvVars

func MergeEnvVars(defaults, overrides []corev1.EnvVar, jsonMergeKeys ...string) []corev1.EnvVar

MergeEnvVars merges two env var slices with overrides taking precedence over defaults. Env vars are keyed by Name, matching the +listMapKey=name kubebuilder annotation. If jsonMergeKeys is provided, env vars with those names are deep-merged as JSON objects instead of being replaced. This is useful for AIM_ENGINE_ARGS which should merge contributions from multiple sources.

Example:

merged := MergeEnvVars(defaults, overrides)
// overrides take precedence over defaults

merged := MergeEnvVars(defaults, overrides, "AIM_ENGINE_ARGS")
// AIM_ENGINE_ARGS values are deep-merged as JSON, others are replaced

func MergeJSONEnvVarValues

func MergeJSONEnvVarValues(base, higher string) string

MergeJSONEnvVarValues deep-merges two JSON object strings. The higher precedence value (from overrides) takes priority in case of key conflicts. Non-JSON values or parsing errors result in the higher precedence value being used directly.
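
A minimal sketch of this behavior using encoding/json (an illustrative re-implementation, not the package's code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergeJSONValues sketches the documented behavior: parse both strings as
// JSON objects and deep-merge them, with higher winning on key conflicts.
// If either side fails to parse as a JSON object, higher is used directly.
func mergeJSONValues(base, higher string) string {
	var b, h map[string]any
	if json.Unmarshal([]byte(base), &b) != nil || json.Unmarshal([]byte(higher), &h) != nil {
		return higher
	}
	mergeInto(b, h)
	out, err := json.Marshal(b)
	if err != nil {
		return higher
	}
	return string(out)
}

// mergeInto recursively merges src into dst; src values take precedence.
func mergeInto(dst, src map[string]any) {
	for k, sv := range src {
		if sm, ok := sv.(map[string]any); ok {
			if dm, ok := dst[k].(map[string]any); ok {
				mergeInto(dm, sm)
				continue
			}
		}
		dst[k] = sv
	}
}

func main() {
	fmt.Println(mergeJSONValues(`{"max-model-len":4096,"tp":1}`, `{"tp":8}`))
	// {"max-model-len":4096,"tp":8}
}
```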

func MergeOptions

func MergeOptions() []func(*mergo.Config)

MergeOptions returns the standard mergo options for config merging. This includes WithOverride for scalar fields and the envVarMergeTransformer for key-based slice merging of []corev1.EnvVar fields.

func MergePullSecretRefs

func MergePullSecretRefs(base []corev1.LocalObjectReference, extras []corev1.LocalObjectReference) []corev1.LocalObjectReference

MergePullSecretRefs merges image pull secrets from base and extras, avoiding duplicates. Extras take precedence when there's a name collision.

func NodeGPUChangePredicate

func NodeGPUChangePredicate() predicate.Predicate

NodeGPUChangePredicate returns a predicate that triggers reconciles when GPU-related node attributes change.

func NormalizeGPUModel

func NormalizeGPUModel(model string) string

NormalizeGPUModel normalizes GPU model names for consistency. Examples:

  • "MI300X (rev 2)" -> "MI300X"
  • "Instinct-MI325X" -> "MI325X"
  • "RX7900-XTX" -> "RX7900"

func ParseVRAMToBytes

func ParseVRAMToBytes(vram string) int64

ParseVRAMToBytes parses a VRAM string (e.g., "192G") to bytes. Supports G (gigabytes) and T (terabytes) suffixes. Returns 0 if the format is not recognized.
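
The parsing can be sketched as follows. Whether "G" denotes decimal gigabytes or binary GiB is not stated in the doc comment; this illustrative sketch assumes binary multiples:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVRAMToBytes sketches the documented parsing: a number with a G or T
// suffix is converted to bytes; anything else yields 0.
func parseVRAMToBytes(vram string) int64 {
	var shift uint
	switch {
	case strings.HasSuffix(vram, "G"):
		shift = 30
	case strings.HasSuffix(vram, "T"):
		shift = 40
	default:
		return 0
	}
	n, err := strconv.ParseInt(vram[:len(vram)-1], 10, 64)
	if err != nil || n < 0 {
		return 0
	}
	return n << shift
}

func main() {
	fmt.Println(parseVRAMToBytes("192G")) // 206158430208
}
```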

func QuantityWithHeadroom

func QuantityWithHeadroom(baseSizeBytes int64, headroomPercent int32) resource.Quantity

QuantityWithHeadroom creates a resource.Quantity with headroom applied and rounded to the nearest Gi. This is a convenience wrapper around ApplyHeadroomAndRound that returns a Kubernetes Quantity.

The returned Quantity uses BinarySI format (Ki, Mi, Gi, Ti suffixes) for compatibility with Kubernetes storage resources.

Parameters:

  • baseSizeBytes: The original size in bytes
  • headroomPercent: Percentage of extra space to add (0-100)

Returns:

  • A resource.Quantity representing the size with headroom, formatted cleanly

func ResolveStorageClass

func ResolveStorageClass(explicitStorageClass string, runtimeConfigSpec *aimv1alpha1.AIMRuntimeConfigCommon) string

ResolveStorageClass determines the effective storage class using fallback logic:

  1. Use explicit storage class if provided (non-empty)
  2. Fall back to runtime config's Storage.DefaultStorageClassName if explicit is empty
  3. Fall back to runtime config's top-level DefaultStorageClassName (deprecated location)
  4. Empty string means use the cluster's default StorageClass

This implements consistent storage class resolution across all PVC creation paths. The function checks two locations in the runtime config for backwards compatibility:

  • runtimeConfigSpec.Storage.DefaultStorageClassName (current/preferred)
  • runtimeConfigSpec.DefaultStorageClassName (deprecated/legacy)

Parameters:

  • explicitStorageClass: Storage class explicitly specified in the resource spec
  • runtimeConfigSpec: The resolved runtime configuration spec

Returns:

  • The effective storage class name (may be empty to use cluster default)
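
The fallback chain can be sketched with simplified stand-in types (the struct shapes below are hypothetical, reduced to just the fields the chain reads):

```go
package main

import "fmt"

// Simplified stand-ins for the runtime config types.
type storageConfig struct{ DefaultStorageClassName string }
type runtimeConfig struct {
	Storage                 storageConfig
	DefaultStorageClassName string // deprecated legacy location
}

// resolveStorageClass sketches the documented fallback order.
func resolveStorageClass(explicit string, cfg *runtimeConfig) string {
	if explicit != "" {
		return explicit
	}
	if cfg != nil {
		if cfg.Storage.DefaultStorageClassName != "" {
			return cfg.Storage.DefaultStorageClassName
		}
		if cfg.DefaultStorageClassName != "" {
			return cfg.DefaultStorageClassName
		}
	}
	return "" // empty: use the cluster's default StorageClass
}

func main() {
	cfg := &runtimeConfig{Storage: storageConfig{DefaultStorageClassName: "fast-nvme"}}
	fmt.Println(resolveStorageClass("", cfg))  // fast-nvme
	fmt.Println(resolveStorageClass("gp3", cfg)) // gp3
}
```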

func SanitizeLabelValue

func SanitizeLabelValue(s string) (string, error)

SanitizeLabelValue converts a string to a valid Kubernetes label value. Valid label values must:

  • Be empty or consist of alphanumeric characters, '-', '_' or '.'
  • Start and end with an alphanumeric character
  • Be at most 63 characters

Returns an error if the sanitized value is empty.
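
One plausible way to implement those rules (an illustrative sketch; the replacement character and trimming strategy are assumptions, not the package's actual choices):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// invalidLabelChars matches anything outside the allowed label alphabet.
var invalidLabelChars = regexp.MustCompile(`[^a-zA-Z0-9._-]`)

func isAlnum(r rune) bool {
	return r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9'
}

// sanitizeLabelValue sketches the documented rules: replace invalid runes,
// cap at 63 characters, trim to alphanumeric boundaries, and error if empty.
func sanitizeLabelValue(s string) (string, error) {
	v := invalidLabelChars.ReplaceAllString(s, "-")
	if len(v) > 63 {
		v = v[:63]
	}
	v = strings.TrimFunc(v, func(r rune) bool { return !isAlnum(r) })
	if v == "" {
		return "", fmt.Errorf("label value is empty after sanitization")
	}
	return v, nil
}

func main() {
	v, _ := sanitizeLabelValue("meta-llama/Llama-3-8B")
	fmt.Println(v) // meta-llama-Llama-3-8B
}
```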

func SelectBest

func SelectBest[T any](items []T, getStatus func(T) constants.AIMStatus) T

SelectBest returns the item with the highest priority status from the slice. Returns the zero value if the slice is empty. The getStatus function extracts the AIMStatus from each item.

func SelectBestPtr

func SelectBestPtr[T any](items []T, getStatus func(*T) constants.AIMStatus) *T

SelectBestPtr returns a pointer to the item with the highest priority status. Returns nil if the slice is empty. This variant is useful when working with slices of structs where you need a pointer result.

func SelectWorst

func SelectWorst[T any](items []T, getStatus func(T) constants.AIMStatus) T

SelectWorst returns the item with the lowest priority status from the slice. Returns the zero value if the slice is empty. The getStatus function extracts the AIMStatus from each item.

func SelectWorstPtr

func SelectWorstPtr[T any](items []T, getStatus func(*T) constants.AIMStatus) *T

SelectWorstPtr returns a pointer to the item with the lowest priority status. Returns nil if the slice is empty.

func ValueOrDefault

func ValueOrDefault[T any](d *T) T

ValueOrDefault returns the value pointed to by d, or the zero value of type T if d is nil. This is a generic helper to safely dereference pointers with a fallback to the type's zero value.

Example:

var p *int = nil
val := ValueOrDefault(p)  // Returns 0

p2 := ptr.To(42)
val2 := ValueOrDefault(p2)  // Returns 42

Types

type ByteUnit

type ByteUnit struct {
	Size   int64
	Suffix string
}

ByteUnit represents a unit of digital storage.

type GPUResourceInfo

type GPUResourceInfo struct {
	// ResourceName is the full Kubernetes resource name (e.g., "amd.com/gpu").
	ResourceName string

	// VRAM is the GPU VRAM capacity in the format used by device plugin labels (e.g., "192G").
	// Empty string if VRAM information is not available.
	VRAM string

	// VRAMSource indicates how the VRAM value was determined:
	// "label" = from node label, "static" = from KnownGPUVRAM, "unknown" = not available.
	VRAMSource string
}

GPUResourceInfo contains GPU resource information for a specific GPU model.

type ImageParts

type ImageParts struct {
	Registry   string // e.g., "ghcr.io" or "docker.io"
	Repository string // e.g., "silogen/llama-3-8b" (full repository path)
	Name       string // e.g., "llama-3-8b" (just the image name, last component)
	Tag        string // e.g., "v1.2.0" or "abc123" (first 6 chars of digest)
}

func ExtractImageParts

func ExtractImageParts(image string) (ImageParts, error)

type ImagePullError

type ImagePullError struct {
	Type            ImagePullErrorType
	Container       string
	Reason          string // e.g., "ImagePullBackOff", "ErrImagePull"
	Message         string // Full error message from Kubernetes
	IsInitContainer bool
}

ImagePullError contains categorized information about an image pull failure.

func CheckJobPodImagePullStatus

func CheckJobPodImagePullStatus(ctx context.Context, k8sClient client.Client, job *batchv1.Job, namespace string) (*ImagePullError, error)

CheckJobPodImagePullStatus checks if a job's pod is stuck in ImagePullBackOff or ErrImagePull state. Returns the image pull error details if found, or nil otherwise.

func CheckPodImagePullStatus

func CheckPodImagePullStatus(pod *corev1.Pod) *ImagePullError

CheckPodImagePullStatus checks if a pod has any containers stuck in ImagePullBackOff or ErrImagePull state. It examines both regular containers and init containers. Returns the first ImagePullError found, or nil if no image pull issues are detected.
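
The scan can be sketched with simplified stand-in types (the struct shapes below are hypothetical reductions of the corev1 container status fields the real check examines):

```go
package main

import "fmt"

// Simplified stand-ins for corev1 container status fields.
type waitingState struct{ Reason, Message string }
type containerStatus struct {
	Name    string
	Waiting *waitingState
}

// firstImagePullError sketches the documented scan: return the first
// container stuck in ImagePullBackOff or ErrImagePull, if any.
func firstImagePullError(statuses []containerStatus) (name, reason string, found bool) {
	for _, s := range statuses {
		if s.Waiting == nil {
			continue
		}
		if s.Waiting.Reason == "ImagePullBackOff" || s.Waiting.Reason == "ErrImagePull" {
			return s.Name, s.Waiting.Reason, true
		}
	}
	return "", "", false
}

func main() {
	statuses := []containerStatus{
		{Name: "init"},
		{Name: "main", Waiting: &waitingState{Reason: "ImagePullBackOff"}},
	}
	name, reason, found := firstImagePullError(statuses)
	fmt.Println(name, reason, found) // main ImagePullBackOff true
}
```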

type ImagePullErrorType

type ImagePullErrorType string

ImagePullErrorType categorizes image pull errors.

const (
	ImagePullErrorAuth     ImagePullErrorType = "auth"
	ImagePullErrorNotFound ImagePullErrorType = "not-found"
	ImagePullErrorGeneric  ImagePullErrorType = "generic"
)

func CategorizeRegistryError

func CategorizeRegistryError(err error) ImagePullErrorType

CategorizeRegistryError analyzes a registry error to determine its type. It first checks for structured transport.Error with HTTP status codes, then falls back to text-based error message parsing for other error types.

type ImageRegistryError

type ImageRegistryError struct {
	Type    ImagePullErrorType
	Message string
	Cause   error
}

ImageRegistryError wraps registry access errors with categorization.

func (*ImageRegistryError) Error

func (e *ImageRegistryError) Error() string

func (*ImageRegistryError) Unwrap

func (e *ImageRegistryError) Unwrap() error

type NameOption

type NameOption func(*nameConfig)

NameOption configures name generation behavior.

func WithHashLength

func WithHashLength(length int) NameOption

WithHashLength specifies the number of characters to use from the hash (default: 8). Set to 0 to disable hash suffix even if hash sources are provided.

func WithHashSource

func WithHashSource(inputs ...any) NameOption

WithHashSource specifies the values to hash for the name suffix. Multiple values are combined for deterministic hashing. Slices and maps are sorted recursively to ensure determinism.

func WithMaxLength

func WithMaxLength(length int) NameOption

WithMaxLength specifies a custom maximum length for the generated name. This is useful when the name will have additional suffixes added by external systems. For example, KServe adds "-predictor-{namespace}" to InferenceService names. Default is 63 (MaxKubernetesNameLength).
