package testing

Documentation

Overview

Package testing provides testcontainers-based container setup for integration tests.

This package uses testcontainers-go to create ephemeral containers for testing purposes. Containers are automatically cleaned up after tests complete.

Key Features:

  • Ephemeral containers with automatic cleanup
  • Randomized port allocation to avoid conflicts
  • Wait strategies for service readiness
  • Integration test isolation

Build Tags:

Integration tests using this package should use the integration build tag:
//go:build integration
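
Such tests run only when the tag is supplied, for example with "go test -tags integration ./...". A minimal file header (the package name is illustrative):

//go:build integration

package myservice_test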

Example Usage:

func TestMyService(t *testing.T) {
    ctx := context.Background()
    baseXURL, cleanup, err := SetupBaseX(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()
    // Use baseXURL for testing...
}

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type BaseXConfig

type BaseXConfig struct {
	// Image is the Docker image to use (default: "ghcr.io/quodatum/basexhttp:basex-12.0")
	Image string
	// AdminPassword is the BaseX admin password (default: "admin")
	AdminPassword string
	// StartupTimeout is the maximum time to wait for BaseX to be ready (default: 60s)
	StartupTimeout time.Duration
}

BaseXConfig holds configuration for BaseX testcontainer setup.

func DefaultBaseXConfig

func DefaultBaseXConfig() BaseXConfig

DefaultBaseXConfig returns the default BaseX configuration for testing.

type ContainerCleanup

type ContainerCleanup func()

ContainerCleanup is a function type for cleaning up test containers. Call it in a defer statement to ensure containers are terminated after tests complete.

func SetupBaseX

func SetupBaseX(ctx context.Context, t *testing.T, config *BaseXConfig) (string, ContainerCleanup, error)

SetupBaseX creates a BaseX container for integration testing.

BaseX is an XML database with XQuery support. This function starts a BaseX container using testcontainers-go and returns the REST API URL and a cleanup function.

Container Configuration:

  • Image: ghcr.io/quodatum/basexhttp:basex-12.0 (BaseX HTTP server image)
  • Port: 8984/tcp (BaseX REST API)
  • Admin Password: Configurable via BaseXConfig
  • Wait Strategy: HTTP readiness check on root endpoint

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional BaseX configuration (uses defaults if nil)

Returns:

  • string: BaseX REST API URL (e.g., "http://localhost:32768")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestBaseXIntegration(t *testing.T) {
    ctx := context.Background()
    baseXURL, cleanup, err := SetupBaseX(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use baseXURL to interact with BaseX REST API
    // Example: http://localhost:32768/rest
}

BaseX REST API Endpoints:

  • GET /{database} - List database resources
  • GET /{database}/{resource} - Retrieve resource
  • POST /{database} - Execute XQuery
  • PUT /{database}/{resource} - Create/update resource
  • DELETE /{database}/{resource} - Delete resource
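
As an illustrative, hedged sketch (not an API of this package), a test could exercise the query endpoint above with the standard net/http client. The request body follows the BaseX REST query format, the default admin credentials described below are assumed, and the XQuery itself is arbitrary:

func TestBaseXQuery(t *testing.T) {
    ctx := context.Background()
    baseXURL, cleanup, err := SetupBaseX(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // POST an XQuery to the global /rest endpoint (body format per BaseX REST docs).
    query := `<query xmlns="http://basex.org/rest"><text>1 to 3</text></query>`
    req, err := http.NewRequestWithContext(ctx, http.MethodPost, baseXURL+"/rest",
        strings.NewReader(query))
    require.NoError(t, err)
    req.SetBasicAuth("admin", "admin")
    req.Header.Set("Content-Type", "application/xml")

    resp, err := http.DefaultClient.Do(req)
    require.NoError(t, err)
    defer resp.Body.Close()
    require.Equal(t, http.StatusOK, resp.StatusCode)
}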

Authentication:

BaseX uses HTTP Basic Authentication. Default credentials:
- Username: admin
- Password: admin (or custom via BaseXConfig.AdminPassword)

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
The cleanup function is safe to call even if setup fails (it's a no-op).

func SetupBaseXWithDatabase

func SetupBaseXWithDatabase(ctx context.Context, t *testing.T, config *BaseXConfig, databaseName string) (string, string, ContainerCleanup, error)

SetupBaseXWithDatabase creates a BaseX container and creates a test database.

This is a convenience function that combines SetupBaseX with database creation. Useful for tests that need a pre-existing database.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional BaseX configuration (uses defaults if nil)
  • databaseName: Name of the database to create

Returns:

  • string: BaseX REST API URL
  • string: Database name (same as input for convenience)
  • ContainerCleanup: Function to terminate the container
  • error: Container creation, startup, or database creation errors

Example Usage:

func TestWithDatabase(t *testing.T) {
    ctx := context.Background()
    baseXURL, dbName, cleanup, err := SetupBaseXWithDatabase(ctx, t, nil, "testdb")
    require.NoError(t, err)
    defer cleanup()

    // Database "testdb" is already created and ready to use
}

Note: Database creation is performed via BaseX REST API. The database is empty initially and can be populated with documents.
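
A hedged sketch of populating the new database over REST using net/http: the resource name and document content are illustrative, the default admin credentials are assumed, and any 2xx status is treated as success:

func TestPopulateBaseXDatabase(t *testing.T) {
    ctx := context.Background()
    baseXURL, dbName, cleanup, err := SetupBaseXWithDatabase(ctx, t, nil, "testdb")
    require.NoError(t, err)
    defer cleanup()

    // PUT an XML document into the database (resource name "book.xml" is illustrative).
    doc := `<book><title>Example</title></book>`
    req, err := http.NewRequestWithContext(ctx, http.MethodPut,
        baseXURL+"/rest/"+dbName+"/book.xml", strings.NewReader(doc))
    require.NoError(t, err)
    req.SetBasicAuth("admin", "admin")
    req.Header.Set("Content-Type", "application/xml")

    resp, err := http.DefaultClient.Do(req)
    require.NoError(t, err)
    defer resp.Body.Close()
    require.Less(t, resp.StatusCode, 300) // expect a 2xx (typically 201 Created)
}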

func SetupCouchDB

func SetupCouchDB(ctx context.Context, t *testing.T, config *CouchDBConfig) (string, ContainerCleanup, error)

SetupCouchDB creates a CouchDB container for integration testing.

CouchDB is a document-oriented NoSQL database. This function starts a CouchDB container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: couchdb:3 (official Apache CouchDB image)
  • Port: 5984/tcp (CouchDB HTTP API)
  • Admin Credentials: Configurable via CouchDBConfig
  • Wait Strategy: HTTP readiness check on /_up endpoint

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional CouchDB configuration (uses defaults if nil)

Returns:

  • string: CouchDB connection URL with embedded credentials (e.g., "http://admin:admin@localhost:32769")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestCouchDBIntegration(t *testing.T) {
    ctx := context.Background()
    couchURL, cleanup, err := SetupCouchDB(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use couchURL to interact with CouchDB
    // Example: http://admin:admin@localhost:32769
}

CouchDB HTTP API Endpoints:

  • GET /_up - Health check
  • GET /_all_dbs - List all databases
  • PUT /{database} - Create database
  • GET /{database}/{doc_id} - Get document
  • PUT /{database}/{doc_id} - Create/update document
  • DELETE /{database}/{doc_id} - Delete document

Authentication:

CouchDB uses HTTP Basic Authentication. The returned URL includes
embedded credentials for convenience:
http://username:password@host:port

This format works with most CouchDB clients and allows direct use
without separate credential configuration.

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

Single Node Mode:

The container runs in single-node mode which is appropriate for testing.
The setup process waits for the node to finish initialization before
returning, ensuring CouchDB is ready for database operations.

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
The cleanup function is safe to call even if setup fails (it's a no-op).

func SetupCouchDBWithDatabase

func SetupCouchDBWithDatabase(ctx context.Context, t *testing.T, config *CouchDBConfig, databaseName string) (string, string, ContainerCleanup, error)

SetupCouchDBWithDatabase creates a CouchDB container and creates a test database.

This is a convenience function that combines SetupCouchDB with database creation. Useful for tests that need a pre-existing database.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional CouchDB configuration (uses defaults if nil)
  • databaseName: Name of the database to create

Returns:

  • string: CouchDB connection URL with embedded credentials
  • string: Database name (same as input for convenience)
  • ContainerCleanup: Function to terminate the container
  • error: Container creation, startup, or database creation errors

Example Usage:

func TestWithDatabase(t *testing.T) {
    ctx := context.Background()
    couchURL, dbName, cleanup, err := SetupCouchDBWithDatabase(ctx, t, nil, "testdb")
    require.NoError(t, err)
    defer cleanup()

    // Database "testdb" is already created and ready to use
    // Access via: http://admin:admin@localhost:32769/testdb
}

Database Creation:

The database is created via HTTP PUT request to /{database}.
CouchDB will return 201 Created for successful creation or
412 Precondition Failed if the database already exists.

Note: Database creation requires HTTP calls to the CouchDB HTTP API. For now, this function returns the URL and database name; the calling test can create the database using the EVE CouchDB service or direct HTTP calls, as in the sketch below.
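
A minimal sketch of that direct HTTP call: the database name is illustrative, and Go's HTTP client applies the credentials embedded in the returned URL, so no separate auth setup is needed:

func TestCreateCouchDatabase(t *testing.T) {
    ctx := context.Background()
    couchURL, cleanup, err := SetupCouchDB(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Create the database via PUT /{database}.
    req, err := http.NewRequestWithContext(ctx, http.MethodPut, couchURL+"/testdb", nil)
    require.NoError(t, err)

    resp, err := http.DefaultClient.Do(req)
    require.NoError(t, err)
    defer resp.Body.Close()

    // 201 Created on success, 412 Precondition Failed if it already exists.
    require.Contains(t, []int{http.StatusCreated, http.StatusPreconditionFailed},
        resp.StatusCode)
}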

func SetupDockerStatsExporter

func SetupDockerStatsExporter(ctx context.Context, t *testing.T, config *DockerStatsExporterConfig) (string, ContainerCleanup, error)

SetupDockerStatsExporter creates a Docker Stats Exporter container for integration testing.

Docker Stats Exporter is a Prometheus exporter that exposes Docker container statistics (CPU, memory, network, disk I/O) in Prometheus format. This function starts a Docker Stats Exporter container using testcontainers-go and returns the metrics endpoint URL and a cleanup function.

Container Configuration:

  • Image: ghcr.io/grzegorzmika/docker_stats_exporter:latest (Docker container metrics exporter)
  • Port: 8080/tcp (Prometheus metrics endpoint)
  • Docker Socket: /var/run/docker.sock mounted read-only for container stats access
  • Wait Strategy: HTTP GET /metrics returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional Docker Stats Exporter configuration (uses defaults if nil)

Returns:

  • string: Docker Stats Exporter metrics endpoint URL (e.g., "http://localhost:32793/metrics")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestDockerStatsExporterIntegration(t *testing.T) {
    ctx := context.Background()
    metricsURL, cleanup, err := SetupDockerStatsExporter(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use Docker Stats Exporter metrics endpoint
    resp, err := http.Get(metricsURL)
    require.NoError(t, err)
    defer resp.Body.Close()

    // Docker Stats Exporter is ready for scraping container metrics
}

Docker Stats Exporter Features:

Prometheus exporter for Docker container statistics:
- Real-time container CPU usage metrics
- Memory usage and limits per container
- Network I/O statistics (bytes sent/received)
- Disk I/O statistics (read/write operations)
- Container state and metadata
- Per-container resource consumption
- Prometheus exposition format
- Low overhead monitoring
- Label-based container filtering

Metrics Exposed:

Key metrics available at /metrics endpoint:
- container_cpu_usage_percent - CPU usage percentage per container
- container_memory_usage_bytes - Memory usage in bytes
- container_memory_limit_bytes - Memory limit in bytes
- container_network_receive_bytes_total - Network bytes received
- container_network_transmit_bytes_total - Network bytes transmitted
- container_block_read_bytes_total - Disk bytes read
- container_block_write_bytes_total - Disk bytes written
- container_state - Container running state (0=stopped, 1=running)
- container_info - Container metadata (name, image, labels)
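
A hedged sketch of asserting on the scrape output: it fetches the metrics endpoint and looks for one of the metric families listed above. Which families actually appear depends on the exporter version and on the containers running on the host, so treat the specific name as an assumption:

func TestDockerStatsMetrics(t *testing.T) {
    ctx := context.Background()
    metricsURL, cleanup, err := SetupDockerStatsExporter(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    resp, err := http.Get(metricsURL)
    require.NoError(t, err)
    defer resp.Body.Close()
    require.Equal(t, http.StatusOK, resp.StatusCode)

    body, err := io.ReadAll(resp.Body)
    require.NoError(t, err)

    // Metric family name taken from the list above; may vary by exporter version.
    require.Contains(t, string(body), "container_memory_usage_bytes")
}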

Prometheus Configuration:

Configure Prometheus to scrape Docker Stats Exporter:
scrape_configs:
  - job_name: 'docker-stats'
    static_configs:
      - targets: ['localhost:8080']
    scrape_interval: 15s

Docker Socket Access:

The exporter requires read access to the Docker socket to collect container statistics:
- Mount: /var/run/docker.sock:/var/run/docker.sock:ro
- Read-only access for security
- Required for accessing Docker API
- Collects stats via Docker Engine API

Performance:

Docker Stats Exporter container starts in 5-15 seconds typically.
The wait strategy ensures the metrics endpoint is fully initialized and
ready to serve metrics before returning.

Use Cases:

Integration testing scenarios:
- Testing container monitoring infrastructure
- Testing Prometheus scraping of Docker metrics
- Testing container resource usage tracking
- Testing Grafana dashboards for Docker metrics
- Testing alerting rules based on container metrics
- Testing multi-container resource monitoring

Monitoring Docker Infrastructure:

Docker Stats Exporter enables monitoring of:
- Container resource utilization (CPU, memory, network, disk)
- Container health and availability
- Resource limits and throttling
- Network traffic patterns per container
- Disk I/O patterns and bottlenecks
- Container lifecycle events
- Microservices resource consumption

Security Considerations:

Docker socket access security:
- Read-only mount prevents container manipulation
- Exporter can only read container stats, not control containers
- No write access to Docker daemon
- Suitable for monitoring in production environments
- Consider using Docker API over TCP with TLS for remote monitoring
- Limit exporter to monitoring role only

Label-Based Filtering:

The exporter includes container labels in metrics, enabling:
- Filtering by container labels
- Grouping metrics by label selectors
- Service-specific monitoring
- Environment-based filtering (dev/staging/prod)
- Team or project-based metrics grouping

Integration with Grafana:

Create Grafana dashboards using Docker Stats Exporter metrics:
- Real-time container resource dashboards
- Per-container CPU and memory graphs
- Network traffic visualization
- Disk I/O monitoring panels
- Container state overview
- Resource usage trends and forecasting

Integration with Prometheus:

Query Docker container metrics using PromQL:
- rate(container_cpu_usage_percent[5m]) - CPU usage rate
- container_memory_usage_bytes / container_memory_limit_bytes - Memory utilization
- rate(container_network_receive_bytes_total[5m]) - Network receive rate
- sum(container_memory_usage_bytes) by (container_name) - Memory by container

Alerting:

Configure Prometheus alerts for container resources:
- High CPU usage per container
- Memory limit approaching or exceeded
- Container crashes or restarts
- Network anomalies
- Disk I/O bottlenecks
- Container state changes

Data Storage:

Docker Stats Exporter is stateless and does not store data.
All metrics are collected in real-time from the Docker daemon.
This ensures test isolation and no cleanup required.

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)
- Docker socket not accessible or not mounted
- Permission denied accessing Docker socket

Comparison with cAdvisor:

Docker Stats Exporter vs cAdvisor:
- Docker Stats Exporter: Lightweight, Docker-specific, simple setup
- cAdvisor: More comprehensive, supports multiple container runtimes, heavier
- Docker Stats Exporter: Focused on Docker containers only
- cAdvisor: Supports Docker, containerd, CRI-O, and more
- Choose Docker Stats Exporter for Docker-only environments
- Choose cAdvisor for multi-runtime Kubernetes clusters

Limitations:

Be aware of these limitations:
- Requires Docker socket access (security consideration)
- Only monitors Docker containers (not processes or host metrics)
- Metrics granularity limited by Docker stats API
- Historical data requires Prometheus long-term storage
- No built-in alerting (use Prometheus Alertmanager)

Best Practices:

For production monitoring:
- Use read-only Docker socket mount
- Configure appropriate scrape intervals (15-30s)
- Set up alerts for critical resource thresholds
- Monitor exporter health and availability
- Use persistent Prometheus storage
- Create comprehensive Grafana dashboards
- Document alert runbooks
- Test alerts and dashboards regularly

func SetupDragonflyDB

func SetupDragonflyDB(ctx context.Context, t *testing.T, config *DragonflyDBConfig) (string, ContainerCleanup, error)

SetupDragonflyDB creates a DragonflyDB container for integration testing.

DragonflyDB is a modern Redis-compatible in-memory data store. This function starts a DragonflyDB container using testcontainers-go and returns the connection address and a cleanup function.

Container Configuration:

  • Image: docker.dragonflydb.io/dragonflydb/dragonfly:v1.34.1
  • Port: 6379/tcp (Redis protocol compatible)
  • Wait Strategy: TCP connection check on port 6379
  • Memory Lock: Unlimited (required for optimal performance)

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional DragonflyDB configuration (uses defaults if nil)

Returns:

  • string: DragonflyDB connection address (e.g., "localhost:32770")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestDragonflyDBIntegration(t *testing.T) {
    ctx := context.Background()
    dfdbAddr, cleanup, err := SetupDragonflyDB(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Connect to DragonflyDB using Redis client
    client := redis.NewClient(&redis.Options{
        Addr: dfdbAddr,
    })
    defer client.Close()

    // Use DragonflyDB
    err = client.Set(ctx, "key", "value", 0).Err()
    require.NoError(t, err)
}

Redis Compatibility:

DragonflyDB is fully compatible with the Redis protocol and commands.
You can use any Redis client library to connect and interact with it.

Supported clients:
- Go: github.com/redis/go-redis/v9
- Python: redis-py
- Node.js: ioredis, redis
- Java: Jedis, Lettuce

Performance Features:

DragonflyDB provides significant performance improvements over Redis:
- Multi-threaded architecture
- Better memory efficiency
- Faster snapshot operations
- Optimized for modern hardware

Authentication:

Authentication is optional for testing. Set config.Password to enable:

config := DefaultDragonflyDBConfig()
config.Password = "secret"
dfdbAddr, cleanup, err := SetupDragonflyDB(ctx, t, &config)

Then connect with password:
client := redis.NewClient(&redis.Options{
    Addr:     dfdbAddr,
    Password: "secret",
})

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
The cleanup function is safe to call even if setup fails (it's a no-op).

func SetupFluentBit

func SetupFluentBit(ctx context.Context, t *testing.T, config *FluentBitConfig) (string, ContainerCleanup, error)

SetupFluentBit creates a Fluent Bit container for integration testing.

Fluent Bit is a lightweight and high-performance log processor and forwarder that allows you to collect logs from different sources, enrich them with filters, and send them to multiple destinations. This function starts a Fluent Bit container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: fluent/fluent-bit:4.0.13-amd64 (log processor and forwarder)
  • Port: 2020/tcp (HTTP monitoring API and metrics)
  • Wait Strategy: HTTP GET /api/v1/metrics/prometheus returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional Fluent Bit configuration (uses defaults if nil)

Returns:

  • string: Fluent Bit HTTP monitoring endpoint URL (e.g., "http://localhost:32793")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestFluentBitIntegration(t *testing.T) {
    ctx := context.Background()
    fluentbitURL, cleanup, err := SetupFluentBit(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use Fluent Bit monitoring API
    resp, err := http.Get(fluentbitURL + "/api/v1/metrics/prometheus")
    require.NoError(t, err)
    defer resp.Body.Close()

    // Fluent Bit is ready for log processing
}

Fluent Bit Features:

Lightweight log processor and forwarder:
- Fast and lightweight (written in C)
- Low memory footprint (sub-megabyte)
- Data parsing and transformation
- Filtering and enrichment
- Buffering and reliability
- Multiple input sources
- Multiple output destinations
- Built-in metrics and monitoring
- Stream processing
- Kubernetes native integration

Input Plugins:

Fluent Bit supports many input sources:
- tail - Read from text files
- systemd - Read from systemd journal
- syslog - Syslog protocol server
- tcp - TCP protocol server
- forward - Fluentd forward protocol
- http - HTTP endpoints
- docker - Docker container logs
- kubernetes - Kubernetes pod logs
- mqtt - MQTT protocol
- serial - Serial interface
- stdin - Standard input

Parser Plugins:

Parse and structure log data:
- json - JSON format
- regex - Regular expressions
- ltsv - LTSV (Labeled Tab Separated Values)
- logfmt - Logfmt format
- docker - Docker JSON format
- syslog - Syslog format
- apache - Apache access logs
- nginx - Nginx access logs

Filter Plugins:

Transform and enrich log data:
- grep - Filter by pattern matching
- parser - Parse and structure data
- lua - Lua scripting for custom logic
- kubernetes - Enrich with Kubernetes metadata
- nest - Nest or lift fields
- modify - Modify records (add/remove/rename fields)
- record_modifier - Advanced record modification
- throttle - Throttle log throughput
- rewrite_tag - Dynamic tag routing
- geoip - GeoIP enrichment

Output Plugins:

Send logs to multiple destinations:
- stdout - Standard output
- forward - Forward to Fluentd
- http - HTTP endpoints
- elasticsearch - Elasticsearch
- opensearch - OpenSearch
- kafka - Apache Kafka
- prometheus - Prometheus metrics
- s3 - AWS S3
- cloudwatch - AWS CloudWatch Logs
- datadog - Datadog
- splunk - Splunk
- loki - Grafana Loki
- influxdb - InfluxDB
- tcp - TCP protocol
- null - Discard logs (testing)

Monitoring API Endpoints:

Key endpoints available on port 2020:
- GET /api/v1/metrics - Fluent Bit internal metrics (JSON)
- GET /api/v1/metrics/prometheus - Prometheus format metrics
- GET /api/v1/health - Health check endpoint
- GET /api/v1/uptime - Service uptime
- GET / - Service information
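
A minimal sketch that polls two of the endpoints listed above and expects 200 OK from each:

func TestFluentBitMonitoringAPI(t *testing.T) {
    ctx := context.Background()
    fluentbitURL, cleanup, err := SetupFluentBit(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    for _, path := range []string{"/api/v1/health", "/api/v1/uptime"} {
        resp, err := http.Get(fluentbitURL + path)
        require.NoError(t, err)
        require.Equal(t, http.StatusOK, resp.StatusCode)
        resp.Body.Close()
    }
}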

Configuration:

Fluent Bit uses a simple configuration format:
[SERVICE]
    Flush        5
    Daemon       Off
    Log_Level    info
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name   tail
    Path   /var/log/app.log
    Parser json

[FILTER]
    Name   grep
    Match  *
    Regex  level (error|warning)

[OUTPUT]
    Name   stdout
    Match  *
    Format json_lines

Performance:

Fluent Bit container starts in 2-5 seconds typically.
The wait strategy ensures the HTTP API is fully initialized and
ready to accept requests before returning.

Performance characteristics:
- High throughput (tens of thousands of events/sec)
- Low latency (sub-millisecond processing)
- Minimal memory usage (450KB-5MB typical)
- Efficient CPU usage
- Async I/O and buffering

Data Pipeline:

Fluent Bit processes logs through a pipeline:
1. Input - Collect logs from sources
2. Parser - Parse and structure data (optional)
3. Filter - Transform and enrich data (optional)
4. Buffer - Buffer data for reliability
5. Output - Send to destinations

Buffering:

Fluent Bit provides buffering for reliability:
- Memory buffering (default, fast)
- Filesystem buffering (persistent)
- Backpressure handling
- Retry mechanisms
- Circuit breaker pattern

Data Format:

Fluent Bit internally uses a structured format:
- Tags - Route and classify logs
- Timestamp - Event time
- Record - Key-value pairs (the log data)

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)

Use Cases:

Integration testing scenarios:
- Testing log collection and forwarding
- Testing log parsing and transformation
- Testing filter logic
- Testing output plugin configurations
- Testing metrics collection
- Testing log routing
- Testing performance under load

func SetupFluentBitWithConfig

func SetupFluentBitWithConfig(ctx context.Context, t *testing.T, config *FluentBitConfig, configContent string) (string, ContainerCleanup, error)

SetupFluentBitWithConfig creates a Fluent Bit container with custom configuration.

This function allows you to provide a custom Fluent Bit configuration file content, which is useful for testing specific input/filter/output combinations.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional Fluent Bit configuration (uses defaults if nil)
  • configContent: Fluent Bit configuration file content

Returns:

  • string: Fluent Bit HTTP monitoring endpoint URL
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestWithCustomConfig(t *testing.T) {
    ctx := context.Background()
    customConfig := `
[SERVICE]
    Flush        1
    Log_Level    debug
    HTTP_Server  On
    HTTP_Port    2020

[INPUT]
    Name   dummy
    Tag    test
    Dummy  {"message":"test log"}

[OUTPUT]
    Name   stdout
    Match  *
`
    fluentbitURL, cleanup, err := SetupFluentBitWithConfig(
        ctx, t, nil, customConfig)
    require.NoError(t, err)
    defer cleanup()

    // Fluent Bit is running with custom configuration
}

Configuration Examples:

Forward to Elasticsearch:
[INPUT]
    Name   tail
    Path   /var/log/*.log

[OUTPUT]
    Name   es
    Match  *
    Host   elasticsearch
    Port   9200

Parse JSON logs:
[INPUT]
    Name   tail
    Path   /var/log/app.log
    Parser json

[FILTER]
    Name   record_modifier
    Match  *
    Record hostname ${HOSTNAME}

[OUTPUT]
    Name   stdout
    Match  *

Use Cases:

  • Testing specific configuration scenarios
  • Testing custom parsers and filters
  • Testing complex routing rules
  • Testing output plugin settings

func SetupGrafana

func SetupGrafana(ctx context.Context, t *testing.T, config *GrafanaConfig) (string, ContainerCleanup, error)

SetupGrafana creates a Grafana container for integration testing.

Grafana is an open-source platform for monitoring and observability with beautiful dashboards. This function starts a Grafana container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: grafana/grafana:12.3.0-18893060694 (monitoring and dashboards)
  • Port: 3000/tcp (HTTP UI and API)
  • Admin credentials: admin/admin (default)
  • Wait Strategy: HTTP GET /api/health returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional Grafana configuration (uses defaults if nil)

Returns:

  • string: Grafana HTTP endpoint URL (e.g., "http://localhost:32791")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestGrafanaIntegration(t *testing.T) {
    ctx := context.Background()
    grafanaURL, cleanup, err := SetupGrafana(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use Grafana API
    resp, err := http.Get(grafanaURL + "/api/health")
    require.NoError(t, err)
    defer resp.Body.Close()

    // Grafana is ready for creating dashboards
}

Grafana Features:

Open-source monitoring and observability platform:
- Beautiful, customizable dashboards
- Multiple data source support (Prometheus, Loki, etc.)
- Alerting and notifications
- User management and authentication
- Plugin ecosystem
- Query builder and variables
- Annotations and events
- Dashboard sharing and embedding

API Endpoints:

Key endpoints available:
- GET  /api/health - Health check
- GET  /api/datasources - List data sources
- POST /api/datasources - Create data source
- GET  /api/dashboards/db/:slug - Get dashboard
- POST /api/dashboards/db - Create/update dashboard
- GET  /api/search - Search dashboards
- POST /api/annotations - Create annotation
- GET  /api/org - Get current organization
- GET  /api/admin/stats - Get server statistics

Authentication:

For testing, basic authentication is configured:
- Username: admin (configurable via config.AdminUser)
- Password: admin (configurable via config.AdminPassword)

Use Basic Auth in API requests:
curl -u admin:admin http://localhost:3000/api/health

Data Sources:

Grafana supports many data sources:
- Prometheus - Metrics
- Loki - Logs
- Tempo - Traces
- PostgreSQL - SQL database
- InfluxDB - Time series
- Elasticsearch - Full-text search
- Graphite - Metrics
- CloudWatch - AWS monitoring

Performance:

Grafana container starts in 5-15 seconds typically.
The wait strategy ensures the API is fully initialized and
ready to accept requests before returning.

Data Storage:

Grafana stores data in /var/lib/grafana inside the container.
For testing, this is ephemeral (lost when container stops).
This ensures test isolation.

Dashboards:

Create dashboards via:
- Grafana UI at the returned grafanaURL (container port 3000 maps to a random host port)
- HTTP API (POST /api/dashboards/db)
- Provisioning (JSON files)

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)

Use Cases:

Integration testing scenarios:
- Testing dashboard creation and rendering
- Testing data source connections
- Testing alerting rules
- Testing Grafana plugins
- Testing authentication flows
- Testing dashboard provisioning

func SetupGrafanaWithDataSource

func SetupGrafanaWithDataSource(ctx context.Context, t *testing.T, config *GrafanaConfig, dataSourceURL, dataSourceType string) (string, ContainerCleanup, error)

SetupGrafanaWithDataSource creates a Grafana container and configures a data source.

This is a convenience function that combines SetupGrafana with data source configuration. Useful for tests that need a ready-to-use data source.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional Grafana configuration (uses defaults if nil)
  • dataSourceURL: URL of the data source (e.g., Prometheus endpoint)
  • dataSourceType: Type of data source (e.g., "prometheus", "loki")

Returns:

  • string: Grafana HTTP endpoint URL
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or data source configuration errors

Example Usage:

func TestWithDataSource(t *testing.T) {
    ctx := context.Background()
    grafanaURL, cleanup, err := SetupGrafanaWithDataSource(
        ctx, t, nil, "http://prometheus:9090", "prometheus")
    require.NoError(t, err)
    defer cleanup()

    // Grafana is ready with Prometheus data source configured
}

Data Source Configuration:

The data source is configured via Grafana HTTP API:
POST /api/datasources
Content-Type: application/json

Note: Data source creation requires HTTP calls to the Grafana API.
For now, this function returns the connection URL; the calling test
should create the data source via the Grafana API, as in the sketch below.
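
A hedged sketch of that call: the data source name and target URL are illustrative, the default admin credentials are assumed, and the JSON fields shown are the common ones (the exact set accepted may vary by Grafana version):

func TestCreatePrometheusDataSource(t *testing.T) {
    ctx := context.Background()
    grafanaURL, cleanup, err := SetupGrafana(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Minimal data source payload; "http://prometheus:9090" is a placeholder target.
    payload := `{"name":"test-prom","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}`
    req, err := http.NewRequestWithContext(ctx, http.MethodPost,
        grafanaURL+"/api/datasources", strings.NewReader(payload))
    require.NoError(t, err)
    req.SetBasicAuth("admin", "admin")
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    require.NoError(t, err)
    defer resp.Body.Close()
    require.Less(t, resp.StatusCode, 300) // expect a 2xx response
}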

Use Cases:

  • Testing with pre-configured data source
  • Multi-source testing
  • Testing dashboard queries
  • Testing data source plugins

func SetupGraphDB

func SetupGraphDB(ctx context.Context, t *testing.T, config *GraphDBConfig) (string, ContainerCleanup, error)

SetupGraphDB creates a GraphDB container for integration testing.

GraphDB is a semantic graph database (RDF triple store) from Ontotext. This function starts a GraphDB container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: ontotext/graphdb:10.8.1 (semantic graph database)
  • Port: 7200/tcp (HTTP REST API and Workbench UI)
  • Memory: Configurable via JavaOpts (default: 1GB min, 2GB max)
  • Wait Strategy: HTTP GET /protocol returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional GraphDB configuration (uses defaults if nil)

Returns:

  • string: GraphDB HTTP endpoint URL (e.g., "http://localhost:32780")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestGraphDBIntegration(t *testing.T) {
    ctx := context.Background()
    graphdbURL, cleanup, err := SetupGraphDB(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use GraphDB REST API
    resp, err := http.Get(graphdbURL + "/repositories")
    require.NoError(t, err)
    defer resp.Body.Close()

    // GraphDB is ready for RDF/SPARQL operations
}

GraphDB Features:

RDF triple store with SPARQL 1.1 query support:
- RDF storage and retrieval
- SPARQL 1.1 Query and Update
- Reasoning and inference (RDFS, OWL)
- Full-text search
- GeoSPARQL support
- REST API for repository management
- Workbench UI for visual management

REST API Endpoints:

Key endpoints available:
- GET  /repositories - List all repositories
- POST /repositories - Create repository
- GET  /repositories/{id}/statements - Query triples
- POST /repositories/{id}/statements - Add triples
- POST /repositories/{id} - SPARQL query endpoint
- GET  /rest/locations - List data locations

Workbench UI:

Access the GraphDB Workbench web interface:
URL: {graphdbURL}
Default: No authentication required for test container

Features:
- Visual SPARQL query editor
- Repository management
- Import/export data
- Explore graphs visually
- Monitor query performance

Memory Configuration:

GraphDB is a Java application requiring JVM memory tuning:
- Default: -Xms1g -Xmx2g (1GB min, 2GB max)
- For large datasets: -Xms2g -Xmx4g
- Adjust via config.JavaOpts

Performance:

GraphDB container starts in 20-40 seconds typically.
The wait strategy ensures the REST API is fully initialized and
ready to accept requests before returning.

Data Formats:

GraphDB supports various RDF serialization formats:
- Turtle (.ttl)
- RDF/XML (.rdf)
- N-Triples (.nt)
- N-Quads (.nq)
- JSON-LD (.jsonld)
- TriG (.trig)

SPARQL Support:

Full SPARQL 1.1 support including:
- SELECT, CONSTRUCT, ASK, DESCRIBE queries
- INSERT, DELETE, LOAD, CLEAR updates
- FILTER, OPTIONAL, UNION operators
- Aggregation functions (COUNT, SUM, AVG, etc.)
- Subqueries and property paths
- Named graphs
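
As a hedged sketch, a query can be sent to a repository endpoint using the standard SPARQL 1.1 protocol over net/http; the repository is assumed to exist already, and the query itself is illustrative:

// queryGraphDB posts a SPARQL query to {graphdbURL}/repositories/{repoID}.
func queryGraphDB(ctx context.Context, graphdbURL, repoID string) (*http.Response, error) {
    query := "SELECT * WHERE { ?s ?p ?o } LIMIT 10"
    req, err := http.NewRequestWithContext(ctx, http.MethodPost,
        graphdbURL+"/repositories/"+repoID, strings.NewReader(query))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Content-Type", "application/sparql-query")
    req.Header.Set("Accept", "application/sparql-results+json")
    return http.DefaultClient.Do(req)
}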

Reasoning:

GraphDB supports various reasoning profiles:
- RDFS (RDF Schema reasoning)
- OWL-Horst (OWL subset)
- OWL-Max (extended OWL reasoning)
- Custom rulesets

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Data Persistence:

Test containers are ephemeral - data is lost when the container stops.
This is intentional for test isolation. Each test gets a clean database.

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)
- Insufficient memory for GraphDB (requires ~1GB minimum)

func SetupGraphDBWithRepository

func SetupGraphDBWithRepository(ctx context.Context, t *testing.T, config *GraphDBConfig, repositoryID, repositoryLabel string) (string, string, ContainerCleanup, error)

SetupGraphDBWithRepository creates a GraphDB container and creates a test repository.

This is a convenience function that combines SetupGraphDB with repository creation. Useful for tests that need a ready-to-use repository.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional GraphDB configuration (uses defaults if nil)
  • repositoryID: ID of the repository to create
  • repositoryLabel: Human-readable label for the repository

Returns:

  • string: GraphDB HTTP endpoint URL
  • string: Repository ID (same as input for convenience)
  • ContainerCleanup: Function to terminate the container
  • error: Container creation, startup, or repository creation errors

Example Usage:

func TestWithRepository(t *testing.T) {
    ctx := context.Background()
    graphdbURL, repoID, cleanup, err := SetupGraphDBWithRepository(
        ctx, t, nil, "test-repo", "Test Repository")
    require.NoError(t, err)
    defer cleanup()

    // Repository "test-repo" is ready to use
    sparqlEndpoint := fmt.Sprintf("%s/repositories/%s", graphdbURL, repoID)
}

Repository Creation:

The repository is created via GraphDB REST API:
POST /rest/repositories
Content-Type: application/json

Note: Repository creation requires HTTP calls to the GraphDB REST API.
For now, this function returns the connection URL; the calling test
should create the repository using the GraphDB API.

Repository Types:

GraphDB supports various repository types:
- Free (no reasoning)
- RDFS (RDF Schema reasoning)
- OWL-Horst (OWL subset reasoning)
- OWL-Max (extended OWL reasoning)

Use Cases:

  • Testing with pre-configured repository
  • Multi-repository testing
  • Testing repository-specific features
  • Isolating test data in separate repositories

func SetupLakeFS

func SetupLakeFS(ctx context.Context, t *testing.T, config *LakeFSConfig) (string, ContainerCleanup, error)

SetupLakeFS creates a LakeFS container for integration testing.

LakeFS is an open-source platform that brings Git-like version control to data lakes, enabling branches, commits, merges, and rollbacks for object storage. This function starts a LakeFS container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: treeverse/lakefs:1.70 (data lake versioning)
  • Port: 8000/tcp (HTTP API and UI)
  • Mode: Local mode (built-in KV store, no external DB required)
  • Wait Strategy: HTTP GET / returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional LakeFS configuration (uses defaults if nil)

Returns:

  • string: LakeFS HTTP endpoint URL (e.g., "http://localhost:32793")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestLakeFSIntegration(t *testing.T) {
    ctx := context.Background()
    lakeFSURL, cleanup, err := SetupLakeFS(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use LakeFS API
    resp, err := http.Get(lakeFSURL + "/api/v1/healthcheck")
    require.NoError(t, err)
    defer resp.Body.Close()

    // LakeFS is ready for data versioning operations
}

LakeFS Features:

Git-like data versioning platform:
- Branching and merging for data lakes
- Atomic commits for data changes
- Time travel and rollback capabilities
- CI/CD for data pipelines
- Data quality gates
- Isolated development environments
- Zero-copy branching (metadata only)
- S3-compatible API
- Support for multiple storage backends

Storage Backends:

LakeFS supports various object storage backends:
- Amazon S3
- Azure Blob Storage
- Google Cloud Storage
- MinIO
- Local filesystem (for testing)

API Endpoints:

Key HTTP API endpoints:
- GET  /api/v1/healthcheck - Health check
- GET  /api/v1/repositories - List repositories
- POST /api/v1/repositories - Create repository
- GET  /api/v1/repositories/{repository}/branches - List branches
- POST /api/v1/repositories/{repository}/branches - Create branch
- POST /api/v1/repositories/{repository}/branches/{branch}/commits - Commit changes
- POST /api/v1/repositories/{repository}/refs/{branch}/merge - Merge branches
- GET  /api/v1/repositories/{repository}/refs/{ref}/objects - List objects

Web UI:

LakeFS provides a web interface for:
- Repository management
- Branch visualization
- Commit history
- Object browsing
- Diff viewing
- User management

S3 Gateway:

LakeFS provides S3-compatible endpoints:
- Accessible at the same port (8000)
- Use repository/branch as bucket name: {repo}/{branch}
- Example: s3://my-repo/main/path/to/object

Authentication:

For testing, LakeFS can run without authentication in local mode.
Default credentials for production mode:
- Access Key ID: generated on first run
- Secret Access Key: generated on first run

Local Mode:

For testing, LakeFS runs in local mode:
- Built-in KV store (no PostgreSQL required)
- Local block storage
- Single node deployment
- Ephemeral data (lost on container restart)

Performance:

LakeFS container starts in 30-60 seconds typically.
The wait strategy ensures the API is fully initialized and
ready to accept requests before returning.

Data Storage:

LakeFS stores data in /data inside the container.
For testing, this is ephemeral (lost when container stops).
This ensures test isolation.

Use Cases:

Data versioning scenarios:
- Testing ETL pipeline versions
- Data quality validation
- Rollback data changes
- Isolated data environments
- Data CI/CD workflows
- Reproducible data science
- Compliance and auditing

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)
- Insufficient memory (LakeFS requires ~512MB minimum)

Git-like Operations:

LakeFS provides familiar Git operations:
- Branch: Create isolated data environments
- Commit: Create immutable snapshots
- Merge: Integrate changes from branches
- Revert: Roll back to previous versions
- Tag: Mark specific data versions
- Diff: Compare data between branches/commits

func SetupLakeFSWithRepository

func SetupLakeFSWithRepository(ctx context.Context, t *testing.T, config *LakeFSConfig, repoName, defaultBranch string) (string, string, ContainerCleanup, error)

SetupLakeFSWithRepository creates a LakeFS container and creates a test repository.

This is a convenience function that combines SetupLakeFS with repository creation. Useful for tests that need a ready-to-use repository.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional LakeFS configuration (uses defaults if nil)
  • repoName: Name of the repository to create
  • defaultBranch: Default branch name (e.g., "main")

Returns:

  • string: LakeFS HTTP endpoint URL
  • string: Repository name (same as input for convenience)
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or repository creation errors

Example Usage:

func TestWithRepository(t *testing.T) {
    ctx := context.Background()
    lakeFSURL, repoName, cleanup, err := SetupLakeFSWithRepository(
        ctx, t, nil, "my-repo", "main")
    require.NoError(t, err)
    defer cleanup()

    // Repository "my-repo" with branch "main" is ready to use
}

Repository Creation:

The repository is created via LakeFS HTTP API:
POST /api/v1/repositories
Content-Type: application/json

Note: Repository creation requires HTTP calls to the LakeFS API.
For now, this function returns the connection URL; the calling test
should create the repository via the LakeFS API, as in the sketch below.
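
A hedged sketch of that call: the payload fields follow the lakeFS repository-creation API, the storage namespace uses the local:// scheme suitable for local mode, and the credentials are placeholders to be replaced with the access key pair configured for the container:

func createLakeFSRepository(ctx context.Context, lakeFSURL, repoName, defaultBranch string) error {
    // Repository-creation payload; the storage namespace value is illustrative.
    payload := fmt.Sprintf(`{"name":%q,"storage_namespace":"local://%s","default_branch":%q}`,
        repoName, repoName, defaultBranch)
    req, err := http.NewRequestWithContext(ctx, http.MethodPost,
        lakeFSURL+"/api/v1/repositories", strings.NewReader(payload))
    if err != nil {
        return err
    }
    // Placeholder credentials: substitute the container's access key ID and secret.
    req.SetBasicAuth("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode/100 != 2 {
        return fmt.Errorf("create repository: unexpected status %d", resp.StatusCode)
    }
    return nil
}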

Use Cases:

  • Testing with pre-configured repository
  • Multi-repository testing
  • Testing repository-specific features
  • Isolating test data in separate repositories

func SetupMimir

func SetupMimir(ctx context.Context, t *testing.T, config *MimirConfig) (string, ContainerCleanup, error)

SetupMimir creates a Grafana Mimir container for integration testing.

Grafana Mimir is an open-source, horizontally scalable, highly available, multi-tenant, long-term storage for Prometheus metrics. This function starts a Mimir container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: grafana/mimir:2.17.2 (metrics long-term storage)
  • Port: 9009/tcp (HTTP API)
  • Mode: Monolithic (single binary, all components)
  • Wait Strategy: HTTP GET /ready returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional Mimir configuration (uses defaults if nil)

Returns:

  • string: Mimir HTTP endpoint URL (e.g., "http://localhost:32792")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestMimirIntegration(t *testing.T) {
    ctx := context.Background()
    mimirURL, cleanup, err := SetupMimir(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use Mimir API
    resp, err := http.Get(mimirURL + "/ready")
    require.NoError(t, err)
    defer resp.Body.Close()

    // Mimir is ready for ingesting metrics
}

Grafana Mimir Features:

Open-source metrics storage with:
- Horizontally scalable architecture
- Multi-tenancy support
- Long-term storage for Prometheus metrics
- PromQL query engine
- High availability
- Object storage backend (S3, GCS, Azure Blob)
- Recording and alerting rules
- Grafana integration
- Prometheus remote_write API

API Endpoints:

Key endpoints available:
- GET  /ready - Readiness check
- GET  /metrics - Prometheus metrics
- POST /api/v1/push - Push metrics (remote_write)
- GET  /prometheus/api/v1/query - Query metrics (PromQL)
- GET  /prometheus/api/v1/query_range - Range query
- GET  /prometheus/api/v1/series - Series metadata
- GET  /prometheus/api/v1/labels - Label names
- GET  /prometheus/api/v1/label/{name}/values - Label values
- POST /prometheus/api/v1/rules - Configure recording/alerting rules

Prometheus Configuration:

Configure Prometheus to remote_write to Mimir:
remote_write:
  - url: http://localhost:9009/api/v1/push
    headers:
      X-Scope-OrgID: "demo"  # Tenant ID for multi-tenancy

Multi-Tenancy:

Mimir uses HTTP headers for tenant isolation:
X-Scope-OrgID: tenant-id

For testing, use "demo" or "anonymous" as tenant ID.
All requests must include this header.
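
A minimal sketch of an instant PromQL query using the endpoints listed above; the tenant ID "demo" and the query "up" are illustrative:

func TestMimirQuery(t *testing.T) {
    ctx := context.Background()
    mimirURL, cleanup, err := SetupMimir(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    req, err := http.NewRequestWithContext(ctx, http.MethodGet,
        mimirURL+"/prometheus/api/v1/query?query=up", nil)
    require.NoError(t, err)
    req.Header.Set("X-Scope-OrgID", "demo")

    resp, err := http.DefaultClient.Do(req)
    require.NoError(t, err)
    defer resp.Body.Close()
    require.Equal(t, http.StatusOK, resp.StatusCode)
}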

Performance:

Mimir container starts in 30-60 seconds typically.
The wait strategy ensures the API is fully initialized and
ready to accept requests before returning.

Data Storage:

Mimir stores data in /data inside the container.
For testing, this is ephemeral (lost when container stops).
This ensures test isolation.

Monolithic Mode:

For testing, Mimir runs in monolithic mode where all components
(distributor, ingester, querier, etc.) run in a single process.

For production, use microservices mode with separate components.

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)
- Insufficient memory (Mimir requires ~512MB minimum)

Use Cases:

Integration testing scenarios:
- Testing Prometheus remote_write integration
- Testing PromQL queries
- Testing long-term metrics storage
- Testing alerting rules
- Testing Grafana data source connections
- Testing multi-tenant scenarios

func SetupMimirWithTenant

func SetupMimirWithTenant(ctx context.Context, t *testing.T, config *MimirConfig, tenantID string) (string, string, ContainerCleanup, error)

SetupMimirWithTenant creates a Mimir container and returns URLs for a specific tenant.

This is a convenience function that formats tenant-specific URLs.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional Mimir configuration (uses defaults if nil)
  • tenantID: Tenant ID for multi-tenancy (e.g., "demo", "tenant-1")

Returns:

  • string: Mimir HTTP endpoint URL
  • string: Tenant ID (same as input for convenience)
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestWithTenant(t *testing.T) {
    ctx := context.Background()
    mimirURL, tenantID, cleanup, err := SetupMimirWithTenant(
        ctx, t, nil, "demo")
    require.NoError(t, err)
    defer cleanup()

    // All requests should include X-Scope-OrgID: demo header
    req, _ := http.NewRequest("GET", mimirURL+"/ready", nil)
    req.Header.Set("X-Scope-OrgID", tenantID)
}

Tenant Configuration:

When making requests to Mimir, always include the tenant header:
X-Scope-OrgID: {tenantID}

This is required for multi-tenancy isolation.

func SetupOTelCollector

func SetupOTelCollector(ctx context.Context, t *testing.T, config *OTelCollectorConfig) (string, string, ContainerCleanup, error)

SetupOTelCollector creates an OpenTelemetry Collector container for integration testing.

OpenTelemetry Collector is a vendor-agnostic agent for receiving, processing, and exporting telemetry data (traces, metrics, and logs). This function starts an OTel Collector container using testcontainers-go and returns the connection URLs and a cleanup function.

Container Configuration:

  • Image: otel/opentelemetry-collector:nightly (nightly build of the OpenTelemetry Collector)
  • Port: 4318/tcp (OTLP HTTP receiver)
  • Port: 4317/tcp (OTLP gRPC receiver)
  • Port: 13133/tcp (health check extension)
  • Wait Strategy: HTTP GET / returning 200 OK on port 13133

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional OTel Collector configuration (uses defaults if nil)

Returns:

  • string: OTel Collector HTTP endpoint URL (OTLP HTTP receiver) (e.g., "http://localhost:32793")
  • string: OTel Collector gRPC endpoint URL (OTLP gRPC receiver) (e.g., "localhost:32794")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestOTelCollectorIntegration(t *testing.T) {
    ctx := context.Background()
    httpURL, grpcURL, cleanup, err := SetupOTelCollector(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use OTel Collector endpoints
    // Send traces via HTTP: POST to httpURL + "/v1/traces"
    // Send metrics via gRPC: connect to grpcURL
}

OpenTelemetry Collector Features:

Vendor-agnostic telemetry collection and processing:
- Receive telemetry data in multiple formats (OTLP, Jaeger, Zipkin, Prometheus)
- Process telemetry data (filtering, sampling, batching, enrichment)
- Export telemetry data to multiple backends (Jaeger, Zipkin, Prometheus, etc.)
- Support for traces, metrics, and logs
- Pluggable architecture with receivers, processors, and exporters
- Configuration via YAML
- Low resource consumption
- High performance and scalability

OTLP Protocol:

OpenTelemetry Protocol (OTLP) is the native protocol:
- OTLP/HTTP: HTTP/1.1 with JSON or Protobuf
- OTLP/gRPC: gRPC with Protobuf

OTLP supports three signal types:
- Traces: Distributed tracing data
- Metrics: Time-series metrics
- Logs: Structured log data

Receivers:

The collector can receive telemetry from various sources:
- OTLP receiver: Native OpenTelemetry protocol (HTTP/gRPC)
- Jaeger receiver: Jaeger format traces
- Zipkin receiver: Zipkin format traces
- Prometheus receiver: Scrapes Prometheus metrics
- Host metrics receiver: System metrics
- Kubernetes receiver: Kubernetes cluster metrics
- File receiver: Read from log files
- Syslog receiver: Syslog protocol

Processors:

Process telemetry data before exporting:
- Batch processor: Batches telemetry for efficiency
- Memory limiter: Prevents OOM by limiting memory usage
- Resource processor: Add/modify resource attributes
- Attributes processor: Add/modify span attributes
- Filter processor: Filter telemetry based on conditions
- Probabilistic sampler: Sample traces based on probability
- Span processor: Modify span properties
- Tail sampling: Sample based on complete trace
- Transform processor: Transform telemetry data

Exporters:

Export telemetry to various backends:
- OTLP exporter: Send to OTLP-compatible backends
- Jaeger exporter: Send traces to Jaeger
- Zipkin exporter: Send traces to Zipkin
- Prometheus exporter: Expose metrics for Prometheus scraping
- Logging exporter: Log telemetry (for debugging)
- File exporter: Write to files
- Kafka exporter: Send to Kafka
- OpenSearch exporter: Send to OpenSearch
- Loki exporter: Send logs to Loki

API Endpoints:

Key HTTP endpoints available:
- POST /v1/traces - Receive trace data (OTLP/HTTP)
- POST /v1/metrics - Receive metrics data (OTLP/HTTP)
- POST /v1/logs - Receive logs data (OTLP/HTTP)
- GET  / - Health check (on port 13133)
- GET  /metrics - Collector's own metrics (Prometheus format)

gRPC Services:

Key gRPC services available:
- opentelemetry.proto.collector.trace.v1.TraceService/Export
- opentelemetry.proto.collector.metrics.v1.MetricsService/Export
- opentelemetry.proto.collector.logs.v1.LogsService/Export

Configuration:

The default configuration includes:
- OTLP receivers (HTTP on 4318, gRPC on 4317)
- Batch processor for efficiency
- Logging exporter for debugging

For custom configuration, use SetupOTelCollectorWithConfig.

Performance:

OTel Collector container starts in 5-15 seconds typically.
The wait strategy ensures the health check endpoint is ready
before returning.

Data Storage:

The default configuration exports telemetry to stdout (logging exporter).
For testing, this is ephemeral (lost when container stops).
This ensures test isolation.

Sending Telemetry:

Send traces via HTTP:
POST http://localhost:{httpPort}/v1/traces
Content-Type: application/json or application/x-protobuf

Send metrics via HTTP:
POST http://localhost:{httpPort}/v1/metrics
Content-Type: application/json or application/x-protobuf

Send logs via HTTP:
POST http://localhost:{httpPort}/v1/logs
Content-Type: application/json or application/x-protobuf

Send via gRPC:
Connect to grpcURL and call the appropriate service method.
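
As a minimal, hedged sketch of the HTTP path: an empty OTLP/JSON export request ("{}") is a valid ExportTraceServiceRequest and should be acknowledged with 200 OK, which is enough to verify the receiver end to end; real payloads would follow the OTLP JSON encoding:

func TestOTLPHTTPTraces(t *testing.T) {
    ctx := context.Background()
    httpURL, _, cleanup, err := SetupOTelCollector(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Empty export request exercises the OTLP/HTTP trace receiver.
    resp, err := http.Post(httpURL+"/v1/traces", "application/json",
        strings.NewReader("{}"))
    require.NoError(t, err)
    defer resp.Body.Close()
    require.Equal(t, http.StatusOK, resp.StatusCode)
}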

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)

Use Cases:

Integration testing scenarios:
- Testing OpenTelemetry instrumentation
- Testing trace collection and export
- Testing metrics collection and export
- Testing log collection and export
- Testing custom processors and exporters
- Testing collector configuration
- Testing multi-signal pipelines
- Testing sampling strategies

func SetupOTelCollectorWithConfig

func SetupOTelCollectorWithConfig(ctx context.Context, t *testing.T, config *OTelCollectorConfig, configContent string) (string, string, ContainerCleanup, error)

SetupOTelCollectorWithConfig creates an OTel Collector container with custom configuration.

This function allows you to provide a custom collector configuration file. Useful for testing specific receiver, processor, or exporter configurations.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional OTel Collector configuration (uses defaults if nil)
  • configContent: YAML configuration content for the collector

Returns:

  • string: OTel Collector HTTP endpoint URL
  • string: OTel Collector gRPC endpoint URL
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

configYAML := `
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch:
    timeout: 1s
    send_batch_size: 1024
exporters:
  logging:
    loglevel: debug
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
`

httpURL, grpcURL, cleanup, err := SetupOTelCollectorWithConfig(
    ctx, t, nil, configYAML)
require.NoError(t, err)
defer cleanup()

Configuration Format:

The configuration must be valid YAML following the OTel Collector schema:
- receivers: Define how to receive telemetry
- processors: Define how to process telemetry
- exporters: Define where to send telemetry
- service.pipelines: Define signal pipelines
- extensions: Optional extensions (health check, pprof, etc.)

Custom Receivers:

Configure custom receivers in the configuration:
receivers:
  jaeger:
    protocols:
      thrift_http:
        endpoint: 0.0.0.0:14268
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 10s

Custom Processors:

Configure custom processors:
processors:
  attributes:
    actions:
      - key: environment
        value: testing
        action: insert
  probabilistic_sampler:
    sampling_percentage: 10

Custom Exporters:

Configure custom exporters:
exporters:
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:8889

Pipelines:

Define signal pipelines:
service:
  pipelines:
    traces:
      receivers: [otlp, jaeger]
      processors: [batch, attributes]
      exporters: [otlp/jaeger, logging]
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [prometheus, logging]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]

func SetupOpenSearch

func SetupOpenSearch(ctx context.Context, t *testing.T, config *OpenSearchConfig) (string, ContainerCleanup, error)

SetupOpenSearch creates an OpenSearch container for integration testing.

OpenSearch is a community-driven, open-source search and analytics suite. This function starts an OpenSearch container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: opensearchproject/opensearch:3.0.0 (search and analytics engine)
  • Port: 9200/tcp (HTTP REST API)
  • Port: 9600/tcp (Performance Analyzer)
  • Memory: Configurable via JavaOpts (default: 512MB min/max)
  • Security: Disabled by default for testing
  • Wait Strategy: HTTP GET / returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional OpenSearch configuration (uses defaults if nil)

Returns:

  • string: OpenSearch HTTP endpoint URL (e.g., "http://localhost:32800")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestOpenSearchIntegration(t *testing.T) {
    ctx := context.Background()
    opensearchURL, cleanup, err := SetupOpenSearch(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use OpenSearch REST API
    resp, err := http.Get(opensearchURL + "/_cluster/health")
    require.NoError(t, err)
    defer resp.Body.Close()

    // OpenSearch is ready for indexing and searching
}

OpenSearch Features:

Open-source search and analytics engine:
- Full-text search with Lucene
- Real-time indexing
- Distributed architecture
- RESTful API
- JSON document storage
- Aggregations and analytics
- Machine learning capabilities
- Alerting and notifications
- SQL query support

REST API Endpoints:

Key endpoints available (a usage sketch follows this list):
- GET  / - Cluster information
- GET  /_cluster/health - Cluster health
- GET  /_cat/indices - List indices
- PUT  /{index} - Create index
- POST /{index}/_doc - Index document
- GET  /{index}/_search - Search documents
- POST /{index}/_search - Complex search queries
- DELETE /{index} - Delete index
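
A minimal sketch exercising a few of the endpoints above with Go's
net/http and strings packages, assuming opensearchURL is the URL returned
by SetupOpenSearch and "test-index" is an illustrative index name:

// Create an index (PUT /{index}).
req, err := http.NewRequest(http.MethodPut, opensearchURL+"/test-index", nil)
require.NoError(t, err)
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
resp.Body.Close()

// Index a document and make it searchable immediately (POST /{index}/_doc).
doc := strings.NewReader(`{"title": "hello"}`)
req, err = http.NewRequest(http.MethodPost, opensearchURL+"/test-index/_doc?refresh=true", doc)
require.NoError(t, err)
req.Header.Set("Content-Type", "application/json")
resp, err = http.DefaultClient.Do(req)
require.NoError(t, err)
resp.Body.Close()

// Search for the document (GET /{index}/_search).
resp, err = http.Get(opensearchURL + "/test-index/_search?q=title:hello")
require.NoError(t, err)
defer resp.Body.Close()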

Security:

For testing, security is disabled by default:
- No authentication required
- No TLS/SSL
- Open access to all APIs

For production, enable security plugin:
- Authentication (basic, JWT, SAML, etc.)
- TLS/SSL encryption
- Role-based access control (RBAC)
- Audit logging

Memory Configuration:

OpenSearch is a Java application requiring JVM memory tuning:
- Default: -Xms512m -Xmx512m (512MB)
- For larger datasets: -Xms1g -Xmx1g or higher
- Adjust via config.JavaOpts (see the sketch below)
- Rule of thumb: Set Xms == Xmx to avoid heap resizing
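
A minimal sketch of raising the heap size through the configuration
struct, assuming the other defaults from DefaultOpenSearchConfig are
acceptable:

cfg := DefaultOpenSearchConfig()
cfg.JavaOpts = "-Xms1g -Xmx1g" // larger heap for bigger test datasets
opensearchURL, cleanup, err := SetupOpenSearch(ctx, t, &cfg)
require.NoError(t, err)
defer cleanup()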

Performance:

The OpenSearch container typically starts in 30-60 seconds.
The wait strategy ensures the REST API is fully initialized and
ready to accept requests before returning.

Data Storage:

OpenSearch stores data in /usr/share/opensearch/data.
For testing, this is ephemeral (lost when container stops).
This ensures test isolation.

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)
- Insufficient memory for OpenSearch (requires ~512MB minimum)

func SetupOpenSearchDashboards

func SetupOpenSearchDashboards(ctx context.Context, t *testing.T, config *OpenSearchDashboardsConfig) (string, ContainerCleanup, error)

SetupOpenSearchDashboards creates an OpenSearch Dashboards container for integration testing.

OpenSearch Dashboards is the visualization and user interface for OpenSearch. This function starts an OpenSearch Dashboards container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: opensearchproject/opensearch-dashboards:3.0.0 (visualization UI)
  • Port: 5601/tcp (HTTP UI)
  • Connection: Links to OpenSearch instance
  • Security: Disabled by default for testing
  • Wait Strategy: HTTP GET /api/status returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional OpenSearch Dashboards configuration (uses defaults if nil)

Returns:

  • string: OpenSearch Dashboards HTTP endpoint URL (e.g., "http://localhost:32801")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestOpenSearchDashboardsIntegration(t *testing.T) {
    ctx := context.Background()

    // First, start OpenSearch
    opensearchURL, cleanupOS, err := SetupOpenSearch(ctx, t, nil)
    require.NoError(t, err)
    defer cleanupOS()

    // Then, start OpenSearch Dashboards
    config := DefaultOpenSearchDashboardsConfig(opensearchURL)
    dashboardsURL, cleanup, err := SetupOpenSearchDashboards(ctx, t, &config)
    require.NoError(t, err)
    defer cleanup()

    // Open Dashboards UI in browser or test via API
    resp, err := http.Get(dashboardsURL + "/api/status")
    require.NoError(t, err)
    defer resp.Body.Close()
}

OpenSearch Dashboards Features:

Visualization and management interface:
- Discover: Explore and search data
- Visualize: Create charts, graphs, and visualizations
- Dashboards: Combine visualizations into dashboards
- Dev Tools: Console for running queries
- Management: Index patterns, saved objects, settings
- Alerting: Create and manage alerts
- Reports: Generate and schedule reports
- Notebooks: Interactive analysis notebooks

UI Endpoints:

Key UI paths available:
- GET  / - Home page
- GET  /app/home - Application home
- GET  /app/discover - Data discovery
- GET  /app/dashboards - Dashboard viewer
- GET  /app/visualize - Visualization editor
- GET  /app/dev_tools - Developer tools console
- GET  /api/status - Health status API

API Endpoints:

REST API for automation:
- GET    /api/status - Application status
- GET    /api/saved_objects - List saved objects
- POST   /api/saved_objects/{type} - Create saved object
- PUT    /api/saved_objects/{type}/{id} - Update saved object
- DELETE /api/saved_objects/{type}/{id} - Delete saved object

Security:

For testing, security is disabled by default:
- No authentication required
- No TLS/SSL
- Open access to all features

For production, enable security plugin:
- Authentication (basic, SAML, OIDC, etc.)
- TLS/SSL encryption
- Role-based access control (RBAC)
- Multi-tenancy support

Connection to OpenSearch:

OpenSearch Dashboards requires a running OpenSearch instance.
Configure the connection via OPENSEARCH_HOSTS environment variable.

The config.OpenSearchURL should point to the OpenSearch REST API:
- http://localhost:9200 (from host)
- http://opensearch:9200 (from Docker network)

Performance:

The OpenSearch Dashboards container typically starts in 30-60 seconds.
The wait strategy ensures the UI is fully initialized and ready
to accept requests before returning.

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)
- OpenSearch not accessible (check URL)
- Connection timeout (increase StartupTimeout)

func SetupOpenSearchWithDashboards

func SetupOpenSearchWithDashboards(ctx context.Context, t *testing.T, opensearchConfig *OpenSearchConfig, dashboardsConfig *OpenSearchDashboardsConfig) (string, string, ContainerCleanup, error)

SetupOpenSearchWithDashboards creates both OpenSearch and OpenSearch Dashboards containers.

This is a convenience function that combines SetupOpenSearch with SetupOpenSearchDashboards. Useful for tests that need a complete search stack.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • opensearchConfig: Optional OpenSearch configuration (uses defaults if nil)
  • dashboardsConfig: Optional Dashboards configuration (OpenSearchURL will be set automatically)

Returns:

  • string: OpenSearch HTTP endpoint URL
  • string: OpenSearch Dashboards HTTP endpoint URL
  • ContainerCleanup: Function to terminate both containers
  • error: Container creation or startup errors

Example Usage:

func TestFullStack(t *testing.T) {
    ctx := context.Background()
    opensearchURL, dashboardsURL, cleanup, err := SetupOpenSearchWithDashboards(
        ctx, t, nil, nil)
    require.NoError(t, err)
    defer cleanup()

    // Both OpenSearch and Dashboards are ready to use
    // Test via API or UI
}

Cleanup:

The returned cleanup function will terminate both containers.
Always defer the cleanup function to ensure proper cleanup.

Use Cases:

  • End-to-end testing with complete stack
  • UI automation testing
  • Integration tests requiring visualization
  • Testing saved objects and dashboards

func SetupOpenSearchWithIndex

func SetupOpenSearchWithIndex(ctx context.Context, t *testing.T, config *OpenSearchConfig, indexName string) (string, string, ContainerCleanup, error)

SetupOpenSearchWithIndex creates an OpenSearch container and creates a test index.

This is a convenience function that combines SetupOpenSearch with index creation. Useful for tests that need a ready-to-use index.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional OpenSearch configuration (uses defaults if nil)
  • indexName: Name of the index to create

Returns:

  • string: OpenSearch HTTP endpoint URL
  • string: Index name (same as input for convenience)
  • ContainerCleanup: Function to terminate the container
  • error: Container creation, startup, or index creation errors

Example Usage:

func TestWithIndex(t *testing.T) {
    ctx := context.Background()
    opensearchURL, indexName, cleanup, err := SetupOpenSearchWithIndex(
        ctx, t, nil, "test-index")
    require.NoError(t, err)
    defer cleanup()

    // Index "test-index" is ready to use
    // Index documents, search, etc.
}

Index Creation:

The index is created via OpenSearch REST API:
PUT /{indexName}
Content-Type: application/json

Note: index creation requires an HTTP call to the OpenSearch REST API.
At present this function only returns the connection URL; the calling test
should create the index itself using the OpenSearch API, as shown below.
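
A minimal sketch of creating the index from the calling test, assuming
opensearchURL and indexName are the values returned by
SetupOpenSearchWithIndex and the settings body is illustrative:

body := strings.NewReader(`{"settings": {"number_of_shards": 1}}`)
req, err := http.NewRequest(http.MethodPut, opensearchURL+"/"+indexName, body)
require.NoError(t, err)
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
require.Equal(t, http.StatusOK, resp.StatusCode)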

Use Cases:

  • Testing with pre-configured index
  • Multi-index testing
  • Testing index-specific features
  • Isolating test data in separate indices

func SetupPostgres

func SetupPostgres(ctx context.Context, t *testing.T, config *PostgresConfig) (string, ContainerCleanup, error)

SetupPostgres creates a PostgreSQL container for integration testing.

PostgreSQL is a powerful, open-source relational database. This function starts a PostgreSQL container using testcontainers-go and returns the connection string and a cleanup function.

Container Configuration:

  • Image: postgres:17 (official PostgreSQL image)
  • Port: 5432/tcp (PostgreSQL default port)
  • Authentication: SCRAM-SHA-256 (PostgreSQL 14+ default)
  • Credentials: Configurable via PostgresConfig
  • Wait Strategy: Database readiness check with pg_isready

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional PostgreSQL configuration (uses defaults if nil)

Returns:

  • string: PostgreSQL connection string (e.g., "postgres://postgres:postgres@localhost:32771/postgres")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestPostgresIntegration(t *testing.T) {
    ctx := context.Background()
    connStr, cleanup, err := SetupPostgres(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Connect to PostgreSQL using lib/pq or pgx
    db, err := sql.Open("postgres", connStr)
    require.NoError(t, err)
    defer db.Close()

    // Use database for testing
    err = db.Ping()
    require.NoError(t, err)
}

Connection Drivers:

Popular Go PostgreSQL drivers:
- github.com/lib/pq - Pure Go driver (stable)
- github.com/jackc/pgx/v5 - Native Go driver (feature-rich)
- database/sql - Standard library interface

Connection string format:
postgres://username:password@host:port/database?sslmode=disable

Database Features:

The container is configured with:
- SCRAM-SHA-256 authentication (secure password hashing)
- Default database created and ready to use
- Full PostgreSQL 17 feature set
- Transaction support, ACID compliance
- Rich SQL support with extensions

Performance:

PostgreSQL container starts quickly (typically 3-5 seconds).
The wait strategy ensures the database is fully initialized and
ready to accept connections before returning.

SSL Configuration:

For testing, SSL is typically disabled (sslmode=disable).
The returned connection string includes this parameter.
For production deployments, enable SSL verification.

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Data Persistence:

Test containers are ephemeral - data is lost when the container stops.
This is intentional for test isolation. Each test gets a clean database.

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)

func SetupPostgresWithDatabase

func SetupPostgresWithDatabase(ctx context.Context, t *testing.T, config *PostgresConfig, databaseName string) (string, string, ContainerCleanup, error)

SetupPostgresWithDatabase creates a PostgreSQL container and creates an additional test database.

This is a convenience function that combines SetupPostgres with database creation. Useful for tests that need multiple databases or a specific database name.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional PostgreSQL configuration (uses defaults if nil)
  • databaseName: Name of the additional database to create

Returns:

  • string: PostgreSQL connection string to the new database
  • string: Database name (same as input for convenience)
  • ContainerCleanup: Function to terminate the container
  • error: Container creation, startup, or database creation errors

Example Usage:

func TestWithCustomDatabase(t *testing.T) {
    ctx := context.Background()
    connStr, dbName, cleanup, err := SetupPostgresWithDatabase(ctx, t, nil, "testdb")
    require.NoError(t, err)
    defer cleanup()

    // Connect to the custom database
    db, err := sql.Open("postgres", connStr)
    require.NoError(t, err)
    defer db.Close()

    // Database "testdb" is ready to use
}

Database Creation:

The additional database is created via SQL: CREATE DATABASE {databaseName}
The returned connection string points to this new database.
The default database (postgres) still exists and can be used.
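
If a test needs further databases beyond the one this function creates,
the same SQL can be issued from the test itself. A minimal sketch,
assuming connStr is the connection string returned above, lib/pq is
registered as the "postgres" driver, and "testdb2" is an illustrative
database name:

db, err := sql.Open("postgres", connStr)
require.NoError(t, err)
defer db.Close()

// CREATE DATABASE cannot run inside a transaction block and cannot be
// parameterized, so the name is written directly into the statement.
_, err = db.Exec("CREATE DATABASE testdb2")
require.NoError(t, err)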

Use Cases:

  • Multi-tenant testing (separate database per tenant)
  • Testing database migrations
  • Testing cross-database queries
  • Isolating test data in separate databases

func SetupRDF4J

func SetupRDF4J(ctx context.Context, t *testing.T, config *RDF4JConfig) (string, ContainerCleanup, error)

SetupRDF4J creates an RDF4J container for integration testing.

RDF4J is an open-source framework for working with RDF data. This function starts an RDF4J Workbench container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: eclipse/rdf4j-workbench:5.2.0-jetty (RDF framework)
  • Port: 8080/tcp (HTTP REST API and Workbench UI)
  • Memory: Configurable via JavaOpts (default: 1GB min, 2GB max)
  • Wait Strategy: HTTP GET / returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional RDF4J configuration (uses defaults if nil)

Returns:

  • string: RDF4J HTTP endpoint URL (e.g., "http://localhost:32781")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestRDF4JIntegration(t *testing.T) {
    ctx := context.Background()
    rdf4jURL, cleanup, err := SetupRDF4J(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Access RDF4J Workbench UI
    resp, err := http.Get(rdf4jURL + "/rdf4j-workbench")
    require.NoError(t, err)
    defer resp.Body.Close()

    // RDF4J is ready for RDF/SPARQL operations
}

RDF4J Features:

Open-source RDF framework with:
- RDF storage and retrieval
- SPARQL 1.1 Query and Update
- REST API for repository management
- Workbench UI for visual management
- Support for various RDF formats
- Repository federation
- Transaction support
- Inference and reasoning

REST API Endpoints:

Key endpoints available (a usage sketch follows this list):
- GET  /rdf4j-server/repositories - List repositories
- POST /rdf4j-server/repositories - Create repository
- GET  /rdf4j-server/repositories/{id} - Repository info
- GET  /rdf4j-server/repositories/{id}/statements - Query triples
- POST /rdf4j-server/repositories/{id}/statements - Add triples
- POST /rdf4j-server/repositories/{id} - SPARQL query endpoint
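
A minimal sketch exercising the statements and query endpoints above with
Go's net/http, net/url, and strings packages, assuming rdf4jURL is the URL
returned by SetupRDF4J and a repository named "test-repo" already exists:

repo := rdf4jURL + "/rdf4j-server/repositories/test-repo"

// Add a triple in Turtle format (POST .../statements).
triple := strings.NewReader(`<http://example.org/s> <http://example.org/p> "o" .`)
req, err := http.NewRequest(http.MethodPost, repo+"/statements", triple)
require.NoError(t, err)
req.Header.Set("Content-Type", "text/turtle")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
resp.Body.Close()

// Run a SPARQL query via the standard form-encoded protocol.
resp, err = http.PostForm(repo, url.Values{"query": {"SELECT * WHERE { ?s ?p ?o } LIMIT 10"}})
require.NoError(t, err)
defer resp.Body.Close()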

Workbench UI:

Access the RDF4J Workbench web interface:
URL: {rdf4jURL}/rdf4j-workbench
Default: No authentication required for test container

Features:
- Visual SPARQL query editor
- Repository management
- Import/export data
- Explore RDF graphs
- Namespace management
- Query history

Memory Configuration:

RDF4J is a Java application requiring JVM memory tuning:
- Default: -Xms1g -Xmx2g (1GB min, 2GB max)
- For large datasets: -Xms2g -Xmx4g
- Adjust via config.JavaOpts

Performance:

The RDF4J container typically starts in 15-30 seconds.
The wait strategy ensures the web interface is fully initialized and
ready to accept requests before returning.

Data Formats:

RDF4J supports various RDF serialization formats:
- Turtle (.ttl)
- RDF/XML (.rdf)
- N-Triples (.nt)
- N-Quads (.nq)
- JSON-LD (.jsonld)
- TriG (.trig)
- TriX (.trix)
- Binary RDF

SPARQL Support:

Full SPARQL 1.1 support including:
- SELECT, CONSTRUCT, ASK, DESCRIBE queries
- INSERT, DELETE, LOAD, CLEAR updates
- FILTER, OPTIONAL, UNION operators
- Aggregation functions (COUNT, SUM, AVG, etc.)
- Subqueries and property paths
- Named graphs and GRAPH keyword
- Federation (SERVICE keyword)

Repository Types:

RDF4J supports various repository types:
- Memory Store (in-memory, fast)
- Native Store (persistent, disk-based)
- SPARQL Repository (federation)
- HTTP Repository (remote access)
- Sail Stack (custom configurations)

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Data Persistence:

Test containers are ephemeral - data is lost when the container stops.
This is intentional for test isolation. Each test gets a clean database.

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)
- Insufficient memory for RDF4J (requires ~1GB minimum)

func SetupRDF4JWithRepository

func SetupRDF4JWithRepository(ctx context.Context, t *testing.T, config *RDF4JConfig, repositoryID, repositoryTitle string) (string, string, ContainerCleanup, error)

SetupRDF4JWithRepository creates an RDF4J container and creates a test repository.

This is a convenience function that combines SetupRDF4J with repository creation. Useful for tests that need a ready-to-use repository.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional RDF4J configuration (uses defaults if nil)
  • repositoryID: ID of the repository to create
  • repositoryTitle: Human-readable title for the repository

Returns:

  • string: RDF4J HTTP endpoint URL
  • string: Repository ID (same as input for convenience)
  • ContainerCleanup: Function to terminate the container
  • error: Container creation, startup, or repository creation errors

Example Usage:

func TestWithRepository(t *testing.T) {
    ctx := context.Background()
    rdf4jURL, repoID, cleanup, err := SetupRDF4JWithRepository(
        ctx, t, nil, "test-repo", "Test Repository")
    require.NoError(t, err)
    defer cleanup()

    // Repository "test-repo" is ready to use
    sparqlEndpoint := fmt.Sprintf("%s/rdf4j-server/repositories/%s", rdf4jURL, repoID)
    _ = sparqlEndpoint // use as the SPARQL query endpoint for the repository
}

Repository Creation:

The repository is created via RDF4J REST API:
POST /rdf4j-server/repositories/SYSTEM/statements
Content-Type: text/turtle

Note: repository creation requires an HTTP call to the RDF4J REST API.
At present this function only returns the connection URL; the calling test
should create the repository itself using the RDF4J API.

Repository Configuration:

RDF4J supports various repository configurations:
- Native Store (persistent on disk)
- Memory Store (in-memory, fast)
- SPARQL Repository (federation)
- HTTP Repository (remote)

Use Cases:

  • Testing with pre-configured repository
  • Multi-repository testing
  • Testing repository-specific features
  • Isolating test data in separate repositories

func SetupRabbitMQ

func SetupRabbitMQ(ctx context.Context, t *testing.T, config *RabbitMQConfig) (string, string, ContainerCleanup, error)

SetupRabbitMQ creates a RabbitMQ container for integration testing.

RabbitMQ is a message broker that implements AMQP protocol. This function starts a RabbitMQ container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: rabbitmq:4.1.0-management (includes management UI)
  • Port: 5672/tcp (AMQP protocol)
  • Management UI: 15672/tcp (HTTP)
  • Credentials: Configurable via RabbitMQConfig
  • Wait Strategy: Server readiness check on port 5672

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional RabbitMQ configuration (uses defaults if nil)

Returns:

  • string: RabbitMQ AMQP connection URL (e.g., "amqp://guest:guest@localhost:32772/")
  • string: RabbitMQ Management UI URL (e.g., "http://localhost:32773")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestRabbitMQIntegration(t *testing.T) {
    ctx := context.Background()
    amqpURL, managementURL, cleanup, err := SetupRabbitMQ(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Connect to RabbitMQ using AMQP client
    conn, err := amqp.Dial(amqpURL)
    require.NoError(t, err)
    defer conn.Close()

    // Open management UI in browser for debugging
    // Open: managementURL (username: guest, password: guest)
}

AMQP Clients:

Popular Go RabbitMQ/AMQP clients:
- github.com/rabbitmq/amqp091-go - Official RabbitMQ client
- github.com/streadway/amqp - Popular legacy client (archived)

Connection URL format:
amqp://username:password@host:port/vhost
amqps://username:password@host:port/vhost (with TLS)

Management UI:

The management plugin provides a web-based UI for:
- Monitoring queues, exchanges, and connections
- Managing users and virtual hosts
- Viewing message rates and performance metrics
- Creating and binding queues/exchanges

Access: http://localhost:{port}
Default credentials: guest/guest

RabbitMQ Features:

  • Message queuing with AMQP protocol
  • Message persistence and durability
  • Flexible routing with exchanges
  • Dead letter queues
  • Message TTL and expiration
  • Priority queues
  • Publisher confirms
  • Consumer acknowledgments

Performance:

The RabbitMQ container typically starts in 10-20 seconds.
The wait strategy ensures the broker is fully initialized and
ready to accept connections before returning.

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Virtual Hosts:

Default virtual host is "/" which is included in the connection URL.
For custom vhosts, modify the URL:
amqp://guest:guest@localhost:5672/custom-vhost

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)
- Insufficient memory for RabbitMQ (requires ~400MB)

func SetupRabbitMQWithVHost

func SetupRabbitMQWithVHost(ctx context.Context, t *testing.T, config *RabbitMQConfig, vhost string) (string, string, ContainerCleanup, error)

SetupRabbitMQWithVHost creates a RabbitMQ container and creates an additional virtual host.

Virtual hosts provide logical separation in RabbitMQ, similar to databases in PostgreSQL. Each vhost has its own queues, exchanges, and permissions.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional RabbitMQ configuration (uses defaults if nil)
  • vhost: Name of the virtual host to create

Returns:

  • string: RabbitMQ AMQP connection URL to the new vhost
  • string: RabbitMQ Management UI URL
  • ContainerCleanup: Function to terminate the container
  • error: Container creation, startup, or vhost creation errors

Example Usage:

func TestWithCustomVHost(t *testing.T) {
    ctx := context.Background()
    amqpURL, managementURL, cleanup, err := SetupRabbitMQWithVHost(ctx, t, nil, "test-vhost")
    require.NoError(t, err)
    defer cleanup()

    // Connect to the custom vhost
    conn, err := amqp.Dial(amqpURL)
    require.NoError(t, err)
    defer conn.Close()

    // Virtual host "test-vhost" is ready to use
}

Virtual Host Management:

The vhost is created via RabbitMQ Management API:
PUT /api/vhosts/{vhost}

Note: vhost creation requires an HTTP call to the Management API.
At present this function only returns the connection URL; the calling test
can create the vhost itself, for example with the rabbitmq-management-go
client or with plain net/http, as sketched below.
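
A minimal sketch of creating the vhost from the calling test with plain
net/http, assuming managementURL is the Management UI URL returned above,
the default guest/guest credentials are in use, and "test-vhost" is an
illustrative vhost name:

req, err := http.NewRequest(http.MethodPut, managementURL+"/api/vhosts/test-vhost", nil)
require.NoError(t, err)
req.SetBasicAuth("guest", "guest")
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// The Management API typically answers 201 Created for a newly created vhost.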

Use Cases:

  • Multi-tenant testing (separate vhost per tenant)
  • Testing cross-vhost scenarios
  • Isolating test data in separate vhosts
  • Testing vhost permissions and quotas

func SetupRegistry

func SetupRegistry(ctx context.Context, t *testing.T, config *RegistryConfig) (string, ContainerCleanup, error)

SetupRegistry creates a Docker Registry container for integration testing.

Docker Registry is the open-source server-side application that stores and distributes Docker images. This function starts a Registry container using testcontainers-go and returns the connection URL and a cleanup function.

Container Configuration:

  • Image: registry:3 (official Docker Registry)
  • Port: 5000/tcp (HTTP API)
  • Wait Strategy: HTTP GET /v2/ returning 200 OK

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional Registry configuration (uses defaults if nil)

Returns:

  • string: Docker Registry HTTP endpoint URL (e.g., "http://localhost:32790")
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestDockerRegistryIntegration(t *testing.T) {
    ctx := context.Background()
    registryURL, cleanup, err := SetupRegistry(ctx, t, nil)
    require.NoError(t, err)
    defer cleanup()

    // Use Docker Registry API
    resp, err := http.Get(registryURL + "/v2/_catalog")
    require.NoError(t, err)
    defer resp.Body.Close()

    // Registry is ready for pushing/pulling images
}

Docker Registry Features:

Open-source registry implementation:
- Store and distribute Docker images
- Docker Registry HTTP API V2
- Content addressable storage
- Image manifest management
- Layer deduplication
- Garbage collection
- Webhook notifications
- Token-based authentication support

HTTP API V2 Endpoints:

Key endpoints available:
- GET  /v2/ - Check API version (returns {})
- GET  /v2/_catalog - List repositories
- GET  /v2/{name}/tags/list - List tags for repository
- GET  /v2/{name}/manifests/{reference} - Get image manifest
- PUT  /v2/{name}/manifests/{reference} - Push image manifest
- GET  /v2/{name}/blobs/{digest} - Get image layer
- PUT  /v2/{name}/blobs/uploads/ - Upload image layer
- DELETE /v2/{name}/manifests/{reference} - Delete image

Image Operations:

Pushing images to the test registry:
1. Tag image: docker tag myimage localhost:{port}/myimage:tag
2. Push image: docker push localhost:{port}/myimage:tag

Pulling images from the test registry:
docker pull localhost:{port}/myimage:tag
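
After a push, the tags endpoint above can be used to verify that the image
is present. A minimal sketch using net/http and encoding/json, assuming an
image named "myimage" with tag "tag" has already been pushed to the
registry at registryURL:

resp, err := http.Get(registryURL + "/v2/myimage/tags/list")
require.NoError(t, err)
defer resp.Body.Close()

var tags struct {
	Name string   `json:"name"`
	Tags []string `json:"tags"`
}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&tags))
require.Contains(t, tags.Tags, "tag")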

Storage:

The registry stores images in /var/lib/registry inside the container.
For testing, this is ephemeral (lost when container stops).
This ensures test isolation.

Performance:

Docker Registry container starts very quickly (typically 2-5 seconds).
The wait strategy ensures the HTTP API is ready before returning.

Authentication:

The test registry runs without authentication (open access).
For production deployments, enable authentication via:
- Basic authentication
- Token-based authentication
- External authentication service

Content Types:

Registry supports various manifest formats:
- Docker Image Manifest V2 Schema 1
- Docker Image Manifest V2 Schema 2
- OCI Image Manifest
- Docker Manifest List (multi-arch)
- OCI Image Index

Cleanup:

Always defer the cleanup function to ensure the container is terminated:
defer cleanup()

The cleanup function is safe to call even if setup fails (it's a no-op).

Data Persistence:

Test containers are ephemeral - images are lost when the container stops.
This is intentional for test isolation. Each test gets a clean registry.

Error Handling:

If container creation fails, the test should fail with require.NoError(t, err).
Common errors:
- Docker daemon not running
- Image pull failures (network issues)
- Port conflicts (rare with random ports)

Use Cases:

Integration testing scenarios:
- Testing Docker image push/pull workflows
- Testing container orchestration systems
- Testing CI/CD pipelines
- Testing image scanning and vulnerability detection
- Testing registry mirroring and replication
- Testing registry garbage collection

func SetupRegistryWithAuth

func SetupRegistryWithAuth(ctx context.Context, t *testing.T, config *RegistryConfig, username, password string) (string, string, string, ContainerCleanup, error)

SetupRegistryWithAuth creates a Docker Registry container with basic authentication.

This sets up a registry with htpasswd-based authentication for testing secure scenarios.

Parameters:

  • ctx: Context for container operations
  • t: Testing context for requirement checks
  • config: Optional Registry configuration (uses defaults if nil)
  • username: Username for basic authentication
  • password: Password for basic authentication

Returns:

  • string: Docker Registry HTTP endpoint URL
  • string: Username (same as input for convenience)
  • string: Password (same as input for convenience)
  • ContainerCleanup: Function to terminate the container
  • error: Container creation or startup errors

Example Usage:

func TestRegistryWithAuth(t *testing.T) {
    ctx := context.Background()
    registryURL, user, pass, cleanup, err := SetupRegistryWithAuth(
        ctx, t, nil, "testuser", "testpass")
    require.NoError(t, err)
    defer cleanup()

    // Use authenticated registry
    // Docker login: docker login localhost:{port} -u testuser -p testpass
}

Authentication Setup:

Basic authentication requires:
1. htpasswd file with user credentials
2. Registry configuration to enable auth

Note: full auth setup requires creating an htpasswd file and enabling auth
in the registry configuration. At present this function only returns the
connection details; the calling test should configure authentication as
needed.
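
A minimal sketch of making an authenticated API call once basic auth is in
place, assuming registryURL, user, and pass are the values returned by
SetupRegistryWithAuth:

req, err := http.NewRequest(http.MethodGet, registryURL+"/v2/", nil)
require.NoError(t, err)
req.SetBasicAuth(user, pass)
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
require.Equal(t, http.StatusOK, resp.StatusCode)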

Docker Login:

To authenticate with the registry:
docker login {registryURL} -u {username} -p {password}

Use Cases:

  • Testing authenticated image push/pull
  • Testing credential management
  • Testing registry access control
  • Testing CI/CD with private registries

type CouchDBConfig

type CouchDBConfig struct {
	// Image is the Docker image to use (default: "couchdb:3")
	Image string
	// AdminUsername is the CouchDB admin username (default: "admin")
	AdminUsername string
	// AdminPassword is the CouchDB admin password (default: "admin")
	AdminPassword string
	// StartupTimeout is the maximum time to wait for CouchDB to be ready (default: 60s)
	StartupTimeout time.Duration
}

CouchDBConfig holds configuration for CouchDB testcontainer setup.

func DefaultCouchDBConfig

func DefaultCouchDBConfig() CouchDBConfig

DefaultCouchDBConfig returns the default CouchDB configuration for testing.

type DockerStatsExporterConfig

type DockerStatsExporterConfig struct {
	// Image is the Docker image to use (default: "ghcr.io/grzegorzmika/docker_stats_exporter:latest")
	Image string
	// StartupTimeout is the maximum time to wait for Docker Stats Exporter to be ready (default: 60s)
	StartupTimeout time.Duration
}

DockerStatsExporterConfig holds configuration for Docker Stats Exporter testcontainer setup.

func DefaultDockerStatsExporterConfig

func DefaultDockerStatsExporterConfig() DockerStatsExporterConfig

DefaultDockerStatsExporterConfig returns the default Docker Stats Exporter configuration for testing.

type DragonflyDBConfig

type DragonflyDBConfig struct {
	// Image is the Docker image to use (default: "docker.dragonflydb.io/dragonflydb/dragonfly:v1.34.1")
	Image string
	// StartupTimeout is the maximum time to wait for DragonflyDB to be ready (default: 30s)
	StartupTimeout time.Duration
	// Password is the optional password for DragonflyDB authentication (empty = no auth)
	Password string
}

DragonflyDBConfig holds configuration for DragonflyDB testcontainer setup.

func DefaultDragonflyDBConfig

func DefaultDragonflyDBConfig() DragonflyDBConfig

DefaultDragonflyDBConfig returns the default DragonflyDB configuration for testing.

type FluentBitConfig

type FluentBitConfig struct {
	// Image is the Docker image to use (default: "fluent/fluent-bit:4.0.13-amd64")
	Image string
	// StartupTimeout is the maximum time to wait for Fluent Bit to be ready (default: 60s)
	StartupTimeout time.Duration
}

FluentBitConfig holds configuration for Fluent Bit testcontainer setup.

func DefaultFluentBitConfig

func DefaultFluentBitConfig() FluentBitConfig

DefaultFluentBitConfig returns the default Fluent Bit configuration for testing.

type GrafanaConfig

type GrafanaConfig struct {
	// Image is the Docker image to use (default: "grafana/grafana:12.3.0-18893060694")
	Image string
	// AdminUser is the admin username (default: "admin")
	AdminUser string
	// AdminPassword is the admin password (default: "admin")
	AdminPassword string
	// StartupTimeout is the maximum time to wait for Grafana to be ready (default: 60s)
	StartupTimeout time.Duration
}

GrafanaConfig holds configuration for Grafana testcontainer setup.

func DefaultGrafanaConfig

func DefaultGrafanaConfig() GrafanaConfig

DefaultGrafanaConfig returns the default Grafana configuration for testing.

type GraphDBConfig

type GraphDBConfig struct {
	// Image is the Docker image to use (default: "ontotext/graphdb:10.8.1")
	Image string
	// JavaOpts are JVM options for memory configuration (default: "-Xms1g -Xmx2g")
	JavaOpts string
	// StartupTimeout is the maximum time to wait for GraphDB to be ready (default: 120s)
	StartupTimeout time.Duration
}

GraphDBConfig holds configuration for GraphDB testcontainer setup.

func DefaultGraphDBConfig

func DefaultGraphDBConfig() GraphDBConfig

DefaultGraphDBConfig returns the default GraphDB configuration for testing.

type LakeFSConfig

type LakeFSConfig struct {
	// Image is the Docker image to use (default: "treeverse/lakefs:1.70")
	Image string
	// StartupTimeout is the maximum time to wait for LakeFS to be ready (default: 120s)
	StartupTimeout time.Duration
}

LakeFSConfig holds configuration for LakeFS testcontainer setup.

func DefaultLakeFSConfig

func DefaultLakeFSConfig() LakeFSConfig

DefaultLakeFSConfig returns the default LakeFS configuration for testing.

type MimirConfig

type MimirConfig struct {
	// Image is the Docker image to use (default: "grafana/mimir:2.17.2")
	Image string
	// StartupTimeout is the maximum time to wait for Mimir to be ready (default: 120s)
	StartupTimeout time.Duration
}

MimirConfig holds configuration for Grafana Mimir testcontainer setup.

func DefaultMimirConfig

func DefaultMimirConfig() MimirConfig

DefaultMimirConfig returns the default Grafana Mimir configuration for testing.

type OTelCollectorConfig

type OTelCollectorConfig struct {
	// Image is the Docker image to use (default: "otel/opentelemetry-collector:nightly")
	Image string
	// StartupTimeout is the maximum time to wait for OTel Collector to be ready (default: 60s)
	StartupTimeout time.Duration
}

OTelCollectorConfig holds configuration for OpenTelemetry Collector testcontainer setup.

func DefaultOTelCollectorConfig

func DefaultOTelCollectorConfig() OTelCollectorConfig

DefaultOTelCollectorConfig returns the default OpenTelemetry Collector configuration for testing.

type OpenSearchConfig

type OpenSearchConfig struct {
	// Image is the Docker image to use (default: "opensearchproject/opensearch:3.0.0")
	Image string
	// JavaOpts are JVM options for memory configuration (default: "-Xms512m -Xmx512m")
	JavaOpts string
	// DisableSecurity disables OpenSearch security plugin for testing (default: true)
	DisableSecurity bool
	// StartupTimeout is the maximum time to wait for OpenSearch to be ready (default: 120s)
	StartupTimeout time.Duration
}

OpenSearchConfig holds configuration for OpenSearch testcontainer setup.

func DefaultOpenSearchConfig

func DefaultOpenSearchConfig() OpenSearchConfig

DefaultOpenSearchConfig returns the default OpenSearch configuration for testing.

type OpenSearchDashboardsConfig

type OpenSearchDashboardsConfig struct {
	// Image is the Docker image to use (default: "opensearchproject/opensearch-dashboards:3.0.0")
	Image string
	// OpenSearchURL is the URL to the OpenSearch instance
	OpenSearchURL string
	// DisableSecurity disables OpenSearch Dashboards security for testing (default: true)
	DisableSecurity bool
	// StartupTimeout is the maximum time to wait for Dashboards to be ready (default: 120s)
	StartupTimeout time.Duration
}

OpenSearchDashboardsConfig holds configuration for OpenSearch Dashboards testcontainer setup.

func DefaultOpenSearchDashboardsConfig

func DefaultOpenSearchDashboardsConfig(opensearchURL string) OpenSearchDashboardsConfig

DefaultOpenSearchDashboardsConfig returns the default OpenSearch Dashboards configuration for testing.

type PostgresConfig

type PostgresConfig struct {
	// Image is the Docker image to use (default: "postgres:17")
	Image string
	// Username is the PostgreSQL superuser username (default: "postgres")
	Username string
	// Password is the PostgreSQL superuser password (default: "postgres")
	Password string
	// Database is the default database to create (default: "postgres")
	Database string
	// StartupTimeout is the maximum time to wait for PostgreSQL to be ready (default: 60s)
	StartupTimeout time.Duration
}

PostgresConfig holds configuration for PostgreSQL testcontainer setup.

func DefaultPostgresConfig

func DefaultPostgresConfig() PostgresConfig

DefaultPostgresConfig returns the default PostgreSQL configuration for testing.

type RDF4JConfig

type RDF4JConfig struct {
	// Image is the Docker image to use (default: "eclipse/rdf4j-workbench:5.2.0-jetty")
	Image string
	// JavaOpts are JVM options for memory configuration (default: "-Xms1g -Xmx2g")
	JavaOpts string
	// StartupTimeout is the maximum time to wait for RDF4J to be ready (default: 120s)
	StartupTimeout time.Duration
}

RDF4JConfig holds configuration for RDF4J testcontainer setup.

func DefaultRDF4JConfig

func DefaultRDF4JConfig() RDF4JConfig

DefaultRDF4JConfig returns the default RDF4J configuration for testing.

type RabbitMQConfig

type RabbitMQConfig struct {
	// Image is the Docker image to use (default: "rabbitmq:4.1.0-management")
	Image string
	// Username is the RabbitMQ admin username (default: "guest")
	Username string
	// Password is the RabbitMQ admin password (default: "guest")
	Password string
	// StartupTimeout is the maximum time to wait for RabbitMQ to be ready (default: 60s)
	StartupTimeout time.Duration
}

RabbitMQConfig holds configuration for RabbitMQ testcontainer setup.

func DefaultRabbitMQConfig

func DefaultRabbitMQConfig() RabbitMQConfig

DefaultRabbitMQConfig returns the default RabbitMQ configuration for testing.

type RegistryConfig

type RegistryConfig struct {
	// Image is the Docker image to use (default: "registry:3")
	Image string
	// StartupTimeout is the maximum time to wait for Registry to be ready (default: 60s)
	StartupTimeout time.Duration
}

RegistryConfig holds configuration for Docker Registry testcontainer setup.

func DefaultRegistryConfig

func DefaultRegistryConfig() RegistryConfig

DefaultRegistryConfig returns the default Docker Registry configuration for testing.
