Documentation
Index ¶
- Variables
- func BuildLoopBackManagerConfigMap(namespace string, name string, config LoopBackManagerConfig) (*corev1.ConfigMap, error)
- func CleanupAfterCustomTest(f *framework.Framework, driverCleanupFn func(), pod []*corev1.Pod, ...)
- func CleanupLoopbackDevices(f *framework.Framework) error
- func CreatePod(client clientset.Interface, namespace string, nodeSelector map[string]string, ...) (*v1.Pod, error)
- func DeployCSI(f *framework.Framework, additionalInstallArgs string) (func(), error)
- func DeployCSIComponents(f *framework.Framework, additionalInstallArgs string) (func(), error)
- func DeployOperator(f *framework.Framework) (func(), error)
- func GetExecutor() command.CmdExecutor
- func GetNodePodsNames(f *framework.Framework) ([]string, error)
- func MakePod(ns string, nodeSelector map[string]string, ...) *v1.Pod
- type BMDriverTestContextType
- type CmdHelmExecutor
- type HelmChart
- type HelmExecutor
- type LoopBackManagerConfig
- type LoopBackManagerConfigDevice
- type LoopBackManagerConfigNode
Constants ¶
This section is empty.
Variables ¶
var (
	DriveGVR = schema.GroupVersionResource{
		Group:    apiV1.CSICRsGroupVersion,
		Version:  apiV1.Version,
		Resource: "drives",
	}
	ACGVR = schema.GroupVersionResource{
		Group:    apiV1.CSICRsGroupVersion,
		Version:  apiV1.Version,
		Resource: "availablecapacities",
	}
	ACRGVR = schema.GroupVersionResource{
		Group:    apiV1.CSICRsGroupVersion,
		Version:  apiV1.Version,
		Resource: "availablecapacityreservations",
	}
	VolumeGVR = schema.GroupVersionResource{
		Group:    apiV1.CSICRsGroupVersion,
		Version:  apiV1.Version,
		Resource: "volumes",
	}
	LVGGVR = schema.GroupVersionResource{
		Group:    apiV1.CSICRsGroupVersion,
		Version:  apiV1.Version,
		Resource: "logicalvolumegroups",
	}
	CsibmnodeGVR = schema.GroupVersionResource{
		Group:    apiV1.CSICRsGroupVersion,
		Version:  apiV1.Version,
		Resource: "nodes",
	}
)
Functions ¶
func BuildLoopBackManagerConfigMap ¶
func BuildLoopBackManagerConfigMap(namespace string, name string, config LoopBackManagerConfig) (*corev1.ConfigMap, error)
BuildLoopBackManagerConfigMap returns a ConfigMap with the configuration for the loopback manager
func CleanupAfterCustomTest ¶
func CleanupAfterCustomTest(f *framework.Framework, driverCleanupFn func(), pod []*corev1.Pod, pvc []*corev1.PersistentVolumeClaim)
CleanupAfterCustomTest cleans up all resources related to the CSI plugin, including the plugin itself. It first deletes any pods created during the test and waits for their deletion so that NodeUnpublish and NodeUnstage complete properly. It then deletes the PVCs and waits for the bound PVs to be deleted, which clears the devices for subsequent tests (CSI performs wipefs during PV deletion). The last step is the deletion of the driver.
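The ordering above (pods first, then PVCs, then the driver) is the essential part of the cleanup. It can be sketched generically; the helper below is illustrative and not part of this package:

```go
package main

import "fmt"

// cleanupStep pairs a label with a deletion action so the ordering is explicit.
type cleanupStep struct {
	name string
	run  func() error
}

// runCleanup executes the steps in order, stopping at the first failure,
// mirroring the pod -> PVC -> driver sequence described above.
func runCleanup(steps []cleanupStep) ([]string, error) {
	var done []string
	for _, s := range steps {
		if err := s.run(); err != nil {
			return done, fmt.Errorf("cleanup %q failed: %w", s.name, err)
		}
		done = append(done, s.name)
	}
	return done, nil
}

func main() {
	noop := func() error { return nil }
	order, err := runCleanup([]cleanupStep{
		{"delete pods", noop},   // wait so NodeUnpublish/NodeUnstage finish
		{"delete PVCs", noop},   // wait for bound PVs; CSI wipes the device
		{"delete driver", noop}, // remove the driver last
	})
	fmt.Println(order, err)
}
```

Stopping at the first failure keeps later steps from running against resources that were never released, which matters here because PV deletion depends on the pods being gone.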
func CleanupLoopbackDevices ¶
CleanupLoopbackDevices executes "kill -SIGHUP 1" in the drive manager containers of the node pods. Returns an error if the node pods cannot be retrieved.
func CreatePod ¶
func CreatePod(client clientset.Interface, namespace string, nodeSelector map[string]string, pvclaims []*v1.PersistentVolumeClaim, isPrivileged bool, command string) (*v1.Pod, error)
CreatePod creates a pod with the given claims on nodes matching the node selector. Modified version of the CreatePod function from k8s.io/kubernetes/test/e2e/framework/pod.
func DeployCSI ¶
DeployCSI deploys csi-baremetal-deployment with CmdHelmExecutor. After install it waits for all pods to become ready and checks for a kubernetes-scheduler restart. Cleanup deletes the csi chart and all CSI custom resources. Helm command: "helm install csi-baremetal <CHARTS_DIR>/csi-baremetal-deployment --set image.tag=<CSI_VERSION> --set image.pullPolicy=IfNotPresent --set driver.drivemgr.type=loopbackmgr --set scheduler.patcher.enable=true --set scheduler.log.level=debug --set nodeController.log.level=debug --set driver.log.level=debug"
func DeployCSIComponents ¶
DeployCSIComponents deploys csi-baremetal-operator and csi-baremetal-deployment with CmdHelmExecutor and starts printing container logs from the framework namespace. Returns a cleanup function, and an error on failure. See the DeployOperator and DeployCSI descriptions for more details.
func DeployOperator ¶
DeployOperator deploys csi-baremetal-operator with CmdHelmExecutor. After install it waits for all pods to become ready. Cleanup deletes the operator chart and the CSI CRDs. Helm command: "helm install csi-baremetal-operator <CHARTS_DIR>/csi-baremetal-operator --set image.tag=<OPERATOR_VERSION> --set image.pullPolicy=IfNotPresent"
func GetExecutor ¶
func GetExecutor() command.CmdExecutor
GetExecutor initializes utilExecutor on first use and returns it on subsequent calls
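The "initialize on first use, then return the cached value" behavior is commonly written with sync.Once. A minimal sketch, assuming a package-level executor variable (the real command.CmdExecutor type is not reproduced here):

```go
package main

import (
	"fmt"
	"sync"
)

// executor stands in for the package's utilExecutor; the real type is
// command.CmdExecutor, which is not shown in this documentation.
type executor struct{ name string }

var (
	utilExecutor *executor
	initOnce     sync.Once
)

// GetExecutor constructs utilExecutor exactly once and returns the same
// instance on every call, making lazy initialization safe for concurrent use.
func GetExecutor() *executor {
	initOnce.Do(func() {
		utilExecutor = &executor{name: "cmd"}
	})
	return utilExecutor
}

func main() {
	a, b := GetExecutor(), GetExecutor()
	fmt.Println(a == b) // both calls return the same instance
}
```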
func GetNodePodsNames ¶
GetNodePodsNames tries to get the names of the node pods. Receives a framework.Framework. Returns a slice of pod names, or an error if the node pods cannot be retrieved.
func MakePod ¶
func MakePod(ns string, nodeSelector map[string]string, pvclaims []*v1.PersistentVolumeClaim, isPrivileged bool, command string) *v1.Pod
MakePod returns a pod definition based on the namespace. The pod references the PVCs by name. A slice of BASH commands can be supplied as args to be run by the pod. Modified version of the MakePod function from k8s.io/kubernetes/test/e2e/framework/pod that supports Block volumes.
Types ¶
type BMDriverTestContextType ¶
BMDriverTestContextType stores custom testing context
var BMDriverTestContext BMDriverTestContextType
type CmdHelmExecutor ¶
type CmdHelmExecutor struct {
// contains filtered or unexported fields
}
CmdHelmExecutor is HelmExecutor implementation using os/exec.Cmd
func (*CmdHelmExecutor) DeleteRelease ¶
func (c *CmdHelmExecutor) DeleteRelease(ch *HelmChart) error
DeleteRelease calls "helm delete" for the chart
func (*CmdHelmExecutor) InstallRelease ¶
func (c *CmdHelmExecutor) InstallRelease(ch *HelmChart, args string) error
InstallRelease calls "helm install" for the chart with the given set args and creates the namespace if it does not exist
type HelmChart ¶
type HelmChart struct {
// contains filtered or unexported fields
}
HelmChart stores info about chart in filesystem
type HelmExecutor ¶
type LoopBackManagerConfig ¶
type LoopBackManagerConfig struct {
DefaultDriveCount *int `yaml:"defaultDrivePerNodeCount,omitempty"`
DefaultDriveSize *string `yaml:"defaultDriveSize,omitempty"`
Nodes []LoopBackManagerConfigNode `yaml:"nodes,omitempty"`
}
LoopBackManagerConfig is the configuration for LoopBackManager. It contains default settings and per-node settings.
type LoopBackManagerConfigDevice ¶
type LoopBackManagerConfigDevice struct {
VendorID *string `yaml:"vid,omitempty"`
ProductID *string `yaml:"pid,omitempty"`
SerialNumber *string `yaml:"serialNumber,omitempty"`
Size *string `yaml:"size,omitempty"`
Removed *bool `yaml:"removed,omitempty"`
Health *string `yaml:"health,omitempty"`
DriveType *string `yaml:"driveType,omitempty"`
}
LoopBackManagerConfigDevice describes a loop device backed by a file
type LoopBackManagerConfigNode ¶
type LoopBackManagerConfigNode struct {
NodeID *string `yaml:"nodeID,omitempty"`
DriveCount *int `yaml:"driveCount,omitempty"`
Drives []LoopBackManagerConfigDevice `yaml:"drives,omitempty"`
}
LoopBackManagerConfigNode represents the LoopBackManager configuration for a specific node
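Putting the three structs together: the sketch below builds a config with per-node defaults plus one node that overrides a single drive's health. The struct definitions are copied from above; the pointer helpers and the specific field values (serial number, "BAD" health string) are illustrative assumptions, not values documented by the package:

```go
package main

import "fmt"

// Struct definitions copied from the package documentation above.
type LoopBackManagerConfigDevice struct {
	VendorID     *string `yaml:"vid,omitempty"`
	ProductID    *string `yaml:"pid,omitempty"`
	SerialNumber *string `yaml:"serialNumber,omitempty"`
	Size         *string `yaml:"size,omitempty"`
	Removed      *bool   `yaml:"removed,omitempty"`
	Health       *string `yaml:"health,omitempty"`
	DriveType    *string `yaml:"driveType,omitempty"`
}

type LoopBackManagerConfigNode struct {
	NodeID     *string                       `yaml:"nodeID,omitempty"`
	DriveCount *int                          `yaml:"driveCount,omitempty"`
	Drives     []LoopBackManagerConfigDevice `yaml:"drives,omitempty"`
}

type LoopBackManagerConfig struct {
	DefaultDriveCount *int                        `yaml:"defaultDrivePerNodeCount,omitempty"`
	DefaultDriveSize  *string                     `yaml:"defaultDriveSize,omitempty"`
	Nodes             []LoopBackManagerConfigNode `yaml:"nodes,omitempty"`
}

// Local pointer helpers; every field is a pointer so that omitted settings
// are left out of the marshaled YAML via omitempty.
func strPtr(s string) *string { return &s }
func intPtr(i int) *int       { return &i }

// buildConfig assembles a config: three drives per node by default, with one
// drive on "node-1" reporting an unhealthy state (values are assumptions).
func buildConfig() LoopBackManagerConfig {
	return LoopBackManagerConfig{
		DefaultDriveCount: intPtr(3),
		DefaultDriveSize:  strPtr("100Mi"),
		Nodes: []LoopBackManagerConfigNode{{
			NodeID: strPtr("node-1"),
			Drives: []LoopBackManagerConfigDevice{{
				SerialNumber: strPtr("LOOPBACK1"),
				Health:       strPtr("BAD"),
			}},
		}},
	}
}

func main() {
	cfg := buildConfig()
	fmt.Println(*cfg.DefaultDriveCount, *cfg.Nodes[0].Drives[0].Health)
}
```

A value built this way would then be passed to BuildLoopBackManagerConfigMap, which marshals it into the ConfigMap consumed by the loopback drive manager.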