# Integration Tests
Comprehensive integration test framework for cd-operator that verifies end-to-end workflows with real Kubernetes clusters.
## Overview
The integration test framework provides:
- Framework Setup: Kubernetes cluster management using controller-runtime envtest
- Mock Servers: GitHub and ArgoCD API mock servers for isolated testing
- Test Fixtures: Pre-configured CRD instances and test data
- Helper Utilities: Common patterns for resource management, assertions, and waiting
## Architecture

```
tests/integration/
├── framework/                    # Test infrastructure
│   ├── cluster.go                # Kubernetes test cluster management
│   ├── fixtures.go               # Test fixtures and sample resources
│   ├── github_mock.go            # Mock GitHub API server
│   ├── argocd_mock.go            # Mock ArgoCD API server
│   └── helpers.go                # Common test utilities
├── pr_lifecycle_test.go          # PR workflow tests
├── drift_detection_test.go       # Drift monitoring tests
├── controller_reconcile_test.go  # Controller behavior tests
└── README.md                     # This file
```
## Running Tests

### Prerequisites
- Go 1.26+: Tests require Go 1.26 or later
- kubectl: Required for cluster operations
- kind or k3d (optional): For external cluster testing
### Local Execution

Run integration tests using the `integration` build tag:

```bash
# Run all integration tests
go test -v -tags=integration -timeout=10m ./tests/integration/...

# Run a specific test
go test -v -tags=integration -timeout=10m ./tests/integration/ -run TestPRLifecycle_Discovery

# Run with the race detector
go test -v -race -tags=integration -timeout=10m ./tests/integration/...

# Run with coverage
go test -v -tags=integration -coverprofile=integration-coverage.out ./tests/integration/...
```
### CI/CD Execution

Integration tests run automatically in GitHub Actions via `.github/workflows/integration.yml`:

```
# Triggered on:
# - Pull requests to main
# - Manual workflow dispatch

# Workflow steps:
# 1. Setup Go environment
# 2. Create kind cluster
# 3. Install CRDs
# 4. Run integration tests
# 5. Collect logs on failure
```
### Using Existing Cluster

To use an existing Kubernetes cluster (such as kind):

```bash
# Create kind cluster
kind create cluster --name cd-operator-test

# Write the kubeconfig to a file and point KUBECONFIG at it
# (kind get kubeconfig prints the kubeconfig content, not a path)
kind get kubeconfig --name cd-operator-test > /tmp/cd-operator-test.kubeconfig
export KUBECONFIG=/tmp/cd-operator-test.kubeconfig

# Install CRDs
kubectl apply -f config/crd/bases/

# Run tests
go test -v -tags=integration -timeout=10m ./tests/integration/...

# Cleanup
kind delete cluster --name cd-operator-test
```
## Test Suites

### PR Lifecycle Tests (pr_lifecycle_test.go)
Tests the complete pull request workflow:
- Discovery: PullRequestTracker CRD creation
- Qualification: PR validation and label updates
- Merge Flow: Auto-merge and status updates
- Failed Qualification: Error handling
- Multiple Trackers: Concurrent PR handling
```bash
go test -v -tags=integration -run TestPRLifecycle ./tests/integration/
```
### Drift Detection Tests (drift_detection_test.go)
Tests deployment drift monitoring:
- Create Monitor: DriftMonitor CRD creation
- Synced State: In-sync deployment verification
- Drift Detected: Out-of-sync detection
- Status Updates: Sync state transitions
- Conditions: Condition management
- Multiple Monitors: Multi-cluster monitoring
```bash
go test -v -tags=integration -run TestDriftDetection ./tests/integration/
```
### Controller Reconcile Tests (controller_reconcile_test.go)
Tests controller reconciliation behavior:
- PullRequestTracker Reconciliation: Basic reconcile flow
- DriftMonitor Reconciliation: Drift monitor reconcile flow
- Status Updates: Status subresource updates
- Condition Management: Kubernetes condition API
- Deletion: Resource cleanup and finalizers
- Cascading Deletion: Owner reference handling
```bash
go test -v -tags=integration -run TestControllerReconcile ./tests/integration/
```
## Framework Components

### TestCluster (framework/cluster.go)

Manages Kubernetes test environments using envtest:

```go
// Create test cluster
cluster := framework.NewTestCluster(t, framework.TestClusterOptions{
	CRDDirectoryPaths: []string{"../../config/crd/bases"},
	Timeout:           10 * time.Minute,
})
defer cluster.Teardown()

// Access cluster resources
client := cluster.Client()
ctx := cluster.Context()
```
Features:
- Automatic CRD installation
- Controller-runtime client setup
- Cleanup on test completion
- Support for existing clusters (via KUBECONFIG)
### Fixtures (framework/fixtures.go)

Provides pre-configured test resources:

```go
fixtures := framework.NewFixtures()

// Create PullRequestTracker
tracker := fixtures.NewPullRequestTracker(namespace, "pr-123", 123)

// Create DriftMonitor
monitor := fixtures.NewDriftMonitor(namespace, "drift-1", "tracker-ref")

// Create with a specific state
tracker = fixtures.PRTrackerWithState(namespace, "pr-456", 456, "qualified")

// Create with conditions
tracker = fixtures.PRTrackerWithCondition(
	namespace, "pr-789", 789,
	"Qualified", "True", "ValidationPassed", "All checks passed",
)
```
### Mock Servers

#### GitHub Mock (framework/github_mock.go)

Simulates the GitHub API for testing:

```go
// Create mock server
githubMock := framework.NewGitHubMockServer()
defer githubMock.Close()

// Add mock PR
githubMock.AddPR(&framework.MockPullRequest{
	Number:    123,
	Title:     "Test PR",
	State:     "open",
	HeadSHA:   "abc123",
	Mergeable: true,
})

// Get mock URL
url := githubMock.URL()
```
Supported endpoints:

- `GET /repos/:owner/:repo/pulls/:number` - Get PR
- `GET /repos/:owner/:repo/pulls` - List PRs
- `PUT /repos/:owner/:repo/pulls/:number/merge` - Merge PR
- `GET/POST/DELETE /repos/:owner/:repo/issues/:number/labels` - Label operations
- `GET /repos/:owner/:repo/pulls/:number/reviews` - PR reviews
- `GET /repos/:owner/:repo/commits/:sha/status` - Commit statuses
#### ArgoCD Mock (framework/argocd_mock.go)

Simulates the ArgoCD API for testing:

```go
// Create mock server
argoCDMock := framework.NewArgoCDMockServer()
defer argoCDMock.Close()

// Add mock application
argoCDMock.AddApplication(framework.NewMockApplicationSynced(
	"my-app", "default", "abc123",
))

// Update sync status
argoCDMock.UpdateApplicationSync("my-app", "Synced", "def456")

// Get mock URL
url := argoCDMock.URL()
```
Supported endpoints:

- `GET /api/v1/applications/:name` - Get application
- `GET /api/v1/applications` - List applications
- `GET /api/version` - ArgoCD version
- `GET /healthz` - Health check
### Helpers (framework/helpers.go)

Common test utilities:

```go
// Wait for resource to exist
err := framework.WaitForResourceToExist(ctx, client, key, obj, timeout)

// Create and wait
err = framework.CreateAndWait(ctx, t, client, obj, timeout)

// Delete and wait
err = framework.DeleteAndWait(ctx, t, client, obj, timeout)

// Assert resource exists
framework.AssertResourceExists(t, ctx, client, key, obj)

// Assert resource not found
framework.AssertResourceNotFound(t, ctx, client, key, obj)

// Wait for a condition to become true
framework.AssertEventuallyTrue(t, ctx, func() bool {
	return someCondition()
}, timeout, "condition failed")

// Generate unique names
name := framework.GenerateUniqueName("test-pr")
```
## Writing New Tests

### Basic Test Structure
```go
//go:build integration

package integration_test

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
	"k8s.io/apimachinery/pkg/types"

	cdv1alpha1 "github.com/grhili/cd-operator/api/v1alpha1"
	"github.com/grhili/cd-operator/tests/integration/framework"
)

func TestMyFeature(t *testing.T) {
	// Setup test cluster
	crdPath, err := framework.GetCRDBasePath()
	require.NoError(t, err)

	cluster := framework.NewTestCluster(t, framework.TestClusterOptions{
		CRDDirectoryPaths: []string{crdPath},
		Timeout:           10 * time.Minute,
	})
	defer cluster.Teardown()

	ctx := cluster.Context()
	client := cluster.Client()

	// Create namespace
	namespace := framework.GenerateUniqueName("test-feature")
	err = framework.CreateNamespace(t, ctx, client, namespace)
	require.NoError(t, err)
	defer framework.CleanupNamespace(t, ctx, client, namespace)

	// Test implementation
	fixtures := framework.NewFixtures()
	tracker := fixtures.NewPullRequestTracker(namespace, "pr-1", 1)
	err = framework.CreateAndWait(ctx, t, client, tracker, 30*time.Second)
	require.NoError(t, err)

	// Assertions
	key := types.NamespacedName{Name: tracker.Name, Namespace: namespace}
	framework.AssertResourceExists(t, ctx, client, key, &cdv1alpha1.PullRequestTracker{})
}
```
### Best Practices

- Always use the build tag: add `//go:build integration` at the top
- Unique namespaces: use `GenerateUniqueName` to avoid conflicts
- Cleanup resources: use `defer` for cleanup operations
- Reasonable timeouts: default 10 minutes for the suite, 30s for operations
- Clear test names: use descriptive names like `TestFeature_SpecificScenario`
- Helper functions: use framework helpers instead of raw client operations
- Verify cleanup: ensure resources are deleted in defer statements
- Log progress: use `t.Logf` for debugging output
## Troubleshooting

### Tests Timeout

```bash
# Increase the timeout
go test -v -tags=integration -timeout=20m ./tests/integration/...

# Run a specific test with verbose output
go test -v -tags=integration ./tests/integration/ -run TestSpecificTest
```
### CRD Installation Fails

```bash
# Verify the CRD path
ls -la config/crd/bases/

# Check CRD validity
kubectl apply --dry-run=client -f config/crd/bases/

# Point KUBECONFIG at the cluster's kubeconfig file
kind get kubeconfig --name cd-operator-test > /tmp/cd-operator-test.kubeconfig
export KUBECONFIG=/tmp/cd-operator-test.kubeconfig
```
### Cluster Creation Fails

```bash
# Check kind installation
kind version

# Check available resources
docker stats

# Delete any existing cluster
kind delete cluster --name cd-operator-test

# Create with verbose logging
kind create cluster --name cd-operator-test --verbosity=5
```
### Test Flakiness

Common causes and solutions:

- Resource timing: increase wait timeouts
- Shared state: use unique namespaces per test
- Cleanup issues: verify `defer` cleanup order
- Concurrency: run tests serially with `-p 1`

```bash
# Run serially
go test -v -tags=integration -p 1 ./tests/integration/...

# Debug a specific test by running it repeatedly
go test -v -tags=integration ./tests/integration/ -run TestFlaky -count=10
```
### Viewing Cluster Logs

```bash
# Get all pods
kubectl get pods --all-namespaces

# View pod logs
kubectl logs -n <namespace> <pod-name>

# Describe a resource
kubectl describe pullrequesttracker -n <namespace> <name>

# Get events
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
```
## CI Integration

Integration tests run in GitHub Actions.

### Workflow Configuration

File: `.github/workflows/integration.yml`

```yaml
- name: Run integration tests
  run: |
    kind get kubeconfig --name cd-operator-test > "$RUNNER_TEMP/kubeconfig"
    export KUBECONFIG="$RUNNER_TEMP/kubeconfig"
    go test -v -tags=integration -timeout=10m ./tests/integration/...
```
### Artifacts

On test failure, the workflow uploads:
- Cluster logs
- Resource dumps
- Event logs
- Pod descriptions
Access artifacts via GitHub Actions UI.
## Performance

### Typical Execution Times
- Single test: 5-10 seconds
- Full suite: 2-3 minutes
- With cluster creation: 5-7 minutes
### Optimization Tips

- Reuse clusters: use an existing cluster for multiple tests
- Parallel execution: run independent tests concurrently
- Shorter timeouts: adjust based on actual operation time
- Selective runs: use the `-run` flag for specific tests

```bash
# Parallel execution (default)
go test -v -tags=integration ./tests/integration/...

# Sequential execution (safer but slower)
go test -v -tags=integration -p 1 ./tests/integration/...
```
## Coverage

Generate coverage reports:

```bash
# Generate coverage
go test -v -tags=integration -coverprofile=coverage.out ./tests/integration/...

# View coverage as HTML
go tool cover -html=coverage.out

# Coverage summary
go tool cover -func=coverage.out
```
Target: >80% coverage for integration test framework.
## Contributing
When adding new integration tests:
- Follow the existing test structure
- Use framework utilities (don't reinvent)
- Add appropriate build tags
- Include cleanup in defer statements
- Document complex scenarios
- Update this README if adding new test suites
## Support

For issues or questions:

- Check the troubleshooting section above
- Review existing tests for examples
- File an issue with the `integration-tests` label
- Include full test output and logs