Inference Gateway SDK Examples
This directory contains examples demonstrating how to use the Inference Gateway SDK in various scenarios.
Available Examples
Shows how to list available models from different providers using the SDK.
Demonstrates how to list available MCP (Model Context Protocol) tools when the server is configured with EXPOSE_MCP=true.
Demonstrates basic content generation with different LLM providers.
Illustrates how to use streaming mode to get content as it's generated.
Demonstrates advanced streaming with tool usage and agent-like interactions.
Shows how to implement function calling and use tools with compatible models.
Running the Examples
First, you need an Inference Gateway instance running. You can use the Inference Gateway Docker image to run a local instance.
- Copy the .env.example file to .env and set the Inference Gateway API URL:
cp .env.example .env
- Run the Inference Gateway instance:
docker run --rm -it -p 8080:8080 --env-file .env ghcr.io/inference-gateway/inference-gateway:latest
- Set the Inference Gateway API URL so the SDK examples know where to send requests:
export INFERENCE_GATEWAY_URL="http://localhost:8080/v1"
Each example directory contains a README.md with specific instructions, but the general pattern is:
- Navigate to the example directory:
cd examples/<example-name>
- Run the example:
go run main.go
Prerequisites
- Go 1.23 or later
- Access to an Inference Gateway instance (local or remote)
- Provider API keys configured in your Inference Gateway (for providers requiring authentication)
Notes
- Each example can be modified to use different providers and models by setting environment variables