# OpenAI Agents Go SDK
The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows. It is provider-agnostic, supporting the OpenAI Responses, Chat Completions, and Realtime APIs, as well as other LLMs via custom providers.

This is a Go port of the OpenAI Agents Python SDK (see that project's repository for its license). It aims to stay as close as possible to the original Python implementation, in both behavior and API.
Core concepts:
- Agents: LLMs configured with instructions, tools, guardrails, and handoffs
- Handoffs: A specialized tool call used by the Agents SDK for transferring control between agents
- Guardrails: Configurable safety checks for input and output validation
- Sessions: Automatic conversation history management across runs
- Tracing: Built-in tracking of agent runs for debugging and analytics
Explore the examples directory to see the SDK in action:
| Directory | Description |
| --- | --- |
| `basic` | Core features such as hello world, streaming, prompt templates, and tools. |
| `agent_patterns` | Common agent design patterns including routing, guardrails, and parallelization. |
| `customer_service` | Multi-agent airline support scenario using handoffs and tools. |
| `financial_research_agent` | Coordinated agents performing financial analysis and report writing. |
| `handoffs` | Techniques for filtering messages and handing off between agents. |
| `hosted_mcp` | Hosted Model Context Protocol examples, including simple and approval flows. |
| `mcp` | Running local MCP servers and clients for filesystems, git, prompts, and streaming. |
| `model_providers` | Integrating custom model providers and proxies like LiteLLM. |
| `research_bot` | General research bot combining planner, search, and writer agents. |
| `repl` | Command-line REPL for interactive experimentation. |
| `realtime` | Realtime voice workflows, including Twilio SIP integration. |
| `session` | Persistent session memory across multiple runs. |
| `tools` | Built-in tools such as code interpreter, computer use, file search, and web search. |
| `voice` | Static and streaming voice response examples. |
## Requirements

- Go 1.24+ (see `go.mod`)
- `OPENAI_API_KEY` set for OpenAI-backed models

Optional dependencies (only needed for specific features):

- SQLite sessions require CGO (via `github.com/mattn/go-sqlite3`)
- Voice examples may require PortAudio on your system
- The computer-use tool relies on Playwright
## Installation

```shell
go get github.com/denggeng/openai-agents-go-plus@latest
```
## Hello world example

```go
package main

import (
	"context"
	"fmt"

	"github.com/denggeng/openai-agents-go-plus/agents"
)

func main() {
	agent := agents.New("Assistant").
		WithInstructions("You are a helpful assistant").
		WithModel("gpt-4o")

	result, err := agents.Run(context.Background(), agent, "Write a haiku about recursion in programming.")
	if err != nil {
		panic(err)
	}

	fmt.Println(result.FinalOutput)
	// Function calls itself,
	// Deep within the endless loop,
	// Code mirrors its form.
}
```

(If running this, ensure you set the `OPENAI_API_KEY` environment variable.)
## Handoffs example

```go
package main

import (
	"context"
	"fmt"

	"github.com/denggeng/openai-agents-go-plus/agents"
)

func main() {
	spanishAgent := agents.New("Spanish agent").
		WithInstructions("You only speak Spanish.").
		WithModel("gpt-4o")

	englishAgent := agents.New("English agent").
		WithInstructions("You only speak English.").
		WithModel("gpt-4o")

	triageAgent := agents.New("Triage agent").
		WithInstructions("Handoff to the appropriate agent based on the language of the request.").
		WithAgentHandoffs(spanishAgent, englishAgent).
		WithModel("gpt-4o")

	result, err := agents.Run(context.Background(), triageAgent, "Hola, ¿cómo estás?")
	if err != nil {
		panic(err)
	}

	fmt.Println(result.FinalOutput)
	// ¡Hola! Estoy bien, gracias. ¿Y tú cómo estás?
}
```
## Functions example

```go
package main

import (
	"context"
	"fmt"

	"github.com/denggeng/openai-agents-go-plus/agents"
)

// Tool params type
type GetWeatherParams struct {
	City string `json:"city"`
}

// Tool implementation
func getWeather(_ context.Context, params GetWeatherParams) (string, error) {
	return fmt.Sprintf("The weather in %s is sunny.", params.City), nil
}

// Tool registration (using the SDK's NewFunctionTool)
var getWeatherTool = agents.NewFunctionTool("GetWeather", "", getWeather)

func main() {
	agent := agents.New("Hello world").
		WithInstructions("You are a helpful agent.").
		WithModel("gpt-4o").
		WithTools(getWeatherTool)

	result, err := agents.Run(context.Background(), agent, "What's the weather in Tokyo?")
	if err != nil {
		panic(err)
	}

	fmt.Println(result.FinalOutput)
	// The weather in Tokyo is sunny.
}
```
## The agent loop

When you call `agents.Run()`, we run a loop until we get a final output.

1. We call the LLM, using the model and settings on the agent, and the message history.
2. The LLM returns a response, which may include tool calls.
3. If the response has a final output (see below for more on this), we return it and end the loop.
4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
5. We process the tool calls (if any) and append the tool response messages. Then we go to step 1.

There is a `MaxTurns` parameter that you can use to limit the number of times the loop executes.
## Streaming

You can stream semantic events as the run progresses:

```go
result, err := agents.RunStreamed(context.Background(), agent, "Draft a short summary.")
if err != nil {
	panic(err)
}
_ = result.StreamEvents(func(event agents.StreamEvent) error {
	// Handle RunItemStreamEvent, RawResponsesStreamEvent, etc.
	return nil
})
```

To cancel a streaming run, call `result.Cancel()` (immediate) or `result.Cancel(agents.CancelModeAfterTurn)` (finish the current turn, then stop).
## Final output

The final output is the last thing the agent produces in the loop.

- If you set an `OutputType` on the agent, the final output is when the LLM returns something of that type. We use structured outputs for this.
- If there's no `OutputType` (i.e. plain-text responses), then the first LLM response without any tool calls or handoffs is considered the final output.

As a result, the mental model for the agent loop is:

- If the current agent has an `OutputType`, the loop runs until the agent produces structured output matching that type.
- If the current agent does not have an `OutputType`, the loop runs until the current agent produces a message without any tool calls or handoffs.
## Sessions

Sessions let you persist conversation history across runs without manually passing the full input list each time.

```go
package main

import (
	"context"
	"fmt"

	"github.com/denggeng/openai-agents-go-plus/agents"
	"github.com/denggeng/openai-agents-go-plus/memory"
)

func main() {
	session, err := memory.NewSQLiteSession(context.Background(), memory.SQLiteSessionParams{
		SessionID:        "conversation_123",
		DBDataSourceName: "conversations.db",
	})
	if err != nil {
		panic(err)
	}
	defer session.Close()

	agent := agents.New("Assistant").WithInstructions("Reply concisely.").WithModel("gpt-4o")
	runner := agents.Runner{Config: agents.RunConfig{Session: session}}

	// First turn
	result1, _ := runner.Run(context.Background(), agent, "What city is the Golden Gate Bridge in?")
	fmt.Println(result1.FinalOutput) // San Francisco

	// Second turn (history automatically included)
	result2, _ := runner.Run(context.Background(), agent, "What state is it in?")
	fmt.Println(result2.FinalOutput) // California
}
```
Available session backends include SQLite, Redis, Dapr, Postgres, encrypted sessions, and advanced branching sessions (see `memory/` and `examples/session`).
## Tracing

Tracing is enabled by default and records agent, model, tool, and guardrail spans. You can disable it via `RunConfig.TracingDisabled` or by setting `OPENAI_AGENTS_DISABLE_TRACING=1`.

To export traces, register a processor/exporter via `tracing.AddTraceProcessor` or customize the default exporter in `tracing/`.
## Human-in-the-loop (HITL) & long-running runs

Tool-approval flows and resumable run state are supported. Use tool-approval policies to pause runs, persist state, and resume later with approvals applied.
## Realtime & Voice

- Realtime streaming workflows: `examples/realtime` (Twilio SIP example included).
- Voice pipelines (STT/TTS + workflow): `examples/voice`.
## MCP (Model Context Protocol)

Local and hosted MCP integrations are available in `examples/mcp` and `examples/hosted_mcp`, covering filesystem, git, prompts, and streaming servers.
## Custom model providers

You can plug in custom model providers or proxies (for example, LiteLLM) by implementing the `agents.Model` interface; see the `model_providers` examples.
## Development

```shell
go test ./...
gofmt -w ./agents ./openaitypes ./memory
```
## Common agent patterns

The Agents SDK is designed to be highly flexible, allowing you to model a wide range of LLM workflows, including deterministic flows, iterative loops, and more. See the examples in `examples/agent_patterns`.
## Authors
This project was started by Matteo Grella and Marco Nicola as a port of OpenAI's Agents SDK, aimed at supporting its adoption by Go developers and offering something potentially useful to the OpenAI team.
It has since evolved with community contributions, and we welcome new ideas, improvements, and pull requests from anyone interested in shaping its future.
Maintained by Geng Deng, focusing on syncing newer OpenAI Python SDK features into this Go port to keep parity.
## Acknowledgments
We would like to thank the OpenAI team for creating the original OpenAI Agents Python SDK and the official OpenAI Go client library, which serve as the foundation for this Go implementation.
We also acknowledge Anthropic, PBC for creating and maintaining the Model Context Protocol, a crucial dependency for the MCP functionality in this framework, and particularly the MCP Go SDK.