docker-compose.yml builds and runs aigw, which listens for OpenAI chat completion requests on port 1975 and routes them to Ollama.
Start Ollama on your host machine, bound to all interfaces so the containers can reach it:
OLLAMA_HOST=0.0.0.0 ollama serve
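Optionally, confirm Ollama is reachable before starting the stack by listing its installed models (this assumes Ollama's default port, 11434):
curl -s http://localhost:11434/api/tags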
Run the stack:
# Start the stack (from this directory)
docker compose up --wait -d
# Send a test request
docker compose run --rm openai-client
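# Or exercise the gateway directly with curl at the standard
# OpenAI-compatible path; the model name below is illustrative and
# should match a model you have pulled into Ollama
curl -s http://localhost:1975/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3:0.6b", "messages": [{"role": "user", "content": "Say hello"}]}'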
# Stop everything
docker compose down -v
docker-compose-otel.yaml runs the same gateway with OpenTelemetry tracing enabled and adds a trace viewer:
aigw (port 1975): Envoy AI Gateway CLI (standalone mode) with OTEL tracing
Phoenix (port 6006): OpenTelemetry trace viewer UI for LLM observability
openai-client: OpenAI Python client instrumented with OpenTelemetry (sketched below)
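The wiring is conventional OpenTelemetry: the client exports OTLP traces to Phoenix, which accepts them over HTTP on its UI port. As a rough sketch, a similarly instrumented client could be pointed at the stack from outside Compose; the script name, API key value, and exact endpoint here are assumptions, not taken from the compose file:
# hypothetical standalone invocation; the OTLP/HTTP exporter appends
# /v1/traces to the endpoint, matching the log line grepped for below
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:6006 \
OTEL_SERVICE_NAME=openai-client \
OPENAI_BASE_URL=http://localhost:1975/v1 \
OPENAI_API_KEY=unused \
python client.py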
As with the basic stack, start Ollama on your host machine:
OLLAMA_HOST=0.0.0.0 ollama serve
Run the stack with OpenTelemetry and Phoenix:
# Start the stack with Phoenix (from this directory)
docker compose -f docker-compose-otel.yaml up --wait -d
# Send a test request
docker compose -f docker-compose-otel.yaml run --build --rm openai-client
# Verify traces are being sent
docker compose -f docker-compose-otel.yaml logs phoenix | grep "POST /v1/traces"
# View traces in Phoenix UI
open http://localhost:6006
# Stop everything
docker compose -f docker-compose-otel.yaml down -v