Envoy AI Gateway CLI (aigw)

Quick Start with Docker Compose

docker-compose.yml builds and runs aigw, targeting Ollama and listening for OpenAI chat completion requests on port 1975 (a direct request example follows the steps below).

  1. Start Ollama on your host machine:

    OLLAMA_HOST=0.0.0.0 ollama serve
    
  2. Run the stack:

    # Start the stack (from this directory)
    docker compose up --wait -d
    
    # Send a test request
    docker compose run --rm openai-client
    
    # Stop everything
    docker compose down -v
    
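The openai-client service is just a convenience; any OpenAI-compatible client can talk to the gateway directly. A minimal curl sketch (the model name is illustrative and assumes you have already pulled it with ollama pull):

    # Send a chat completion request straight to aigw on port 1975
    curl http://localhost:1975/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'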
OpenTelemetry Quick Start with Docker Compose

docker-compose-otel.yaml includes OpenTelemetry tracing, visualized with Arize Phoenix, an open-source LLM tracing and evaluation system. Phoenix's UI includes features tailored to LLM spans formatted with OpenInference semantic conventions. The stack consists of three services:

  • aigw (port 1975): Envoy AI Gateway CLI (standalone mode) with OTEL tracing
  • Phoenix (port 6006): OpenTelemetry trace viewer UI for LLM observability
  • openai-client: OpenAI Python client instrumented with OpenTelemetry

  1. Start Ollama on your host machine:

    OLLAMA_HOST=0.0.0.0 ollama serve
    
  2. Run the stack with OpenTelemetry and Phoenix:

    # Start the stack with Phoenix (from this directory)
    docker compose -f docker-compose-otel.yaml up --wait -d
    
    # Send a test request
    docker compose -f docker-compose-otel.yaml run --build --rm openai-client
    
    # Verify traces are being sent
    docker compose -f docker-compose-otel.yaml logs phoenix | grep "POST /v1/traces"
    
    # View traces in Phoenix UI
    open http://localhost:6006
    
    # Stop everything
    docker compose -f docker-compose-otel.yaml down -v
    
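Phoenix receives OTLP/HTTP traces on the same port as its UI, which is what the POST /v1/traces log lines above confirm. To point your own OpenTelemetry-instrumented client at it instead of the bundled one, a sketch using the standard OTel SDK environment variables (the service name is illustrative):

    # Any OTLP/HTTP-capable client can export traces to the local Phoenix instance
    export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:6006
    export OTEL_SERVICE_NAME=my-openai-client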
