Documentation ¶
Overview ¶
Package ptc (Programmatic Tool Calling) provides advanced tool execution capabilities for LangGraph Go agents.
This package implements a novel approach to tool calling where agents generate code to use tools programmatically, rather than using traditional function calling APIs. This enables more flexible, composable, and powerful tool usage patterns.
Core Concepts ¶
## Programmatic Tool Calling (PTC)
Instead of the agent making individual tool calls through a structured API, PTC agents generate code that imports and uses tools directly. This approach offers several advantages:
- More natural tool composition in code
- Ability to use control flow (loops, conditionals) with tools
- Easier debugging and inspection
- No need for complex tool schemas
- Better performance for multi-tool operations
## Supported Languages
The package currently supports:
- Python (LanguagePython): code is executed with the system Python runtime and standard library
- Go (LanguageGo): code is executed with the Go toolchain
Key Components ¶
## PTCAgent
The main agent implementation that generates and executes tool-calling code:
import (
"github.com/smallnest/langgraphgo/ptc"
"github.com/tmc/langchaingo/llms"
"github.com/tmc/langchaingo/tools"
)
// Create agent with Python execution
agent, err := ptc.CreatePTCAgent(ptc.PTCAgentConfig{
Model: llm,
Tools: []tools.Tool{calculator, weatherTool},
Language: ptc.LanguagePython,
MaxIterations: 10,
})
## Execution Modes
Two execution modes are available:
- ModeDirect: Execute code in subprocess (default)
- ModeServer: Execute code via HTTP server (for sandboxing)
## PTCToolNode
A graph node that handles the execution of generated code:
node := ptc.NewPTCToolNodeWithMode(ptc.LanguagePython, toolList, ptc.ModeDirect)
Example Usage ¶
## Basic Agent
package main
import (
	"context"
	"fmt"
	"github.com/smallnest/langgraphgo/ptc"
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/tools"
)
func main() {
// Initialize LLM
llm, _ := openai.New()
// Create a calculator tool
calculator := tools.Calculator{}
// Create PTC agent
agent, err := ptc.CreatePTCAgent(ptc.PTCAgentConfig{
Model: llm,
Tools: []tools.Tool{calculator},
Language: ptc.LanguagePython,
})
if err != nil {
panic(err)
}
// Execute agent
ctx := context.Background()
result, err := agent.Invoke(ctx, map[string]any{
"messages": []llms.MessageContent{
{
Role: llms.ChatMessageTypeHuman,
Parts: []llms.ContentPart{
llms.TextPart("What is 123 * 456?"),
},
},
},
})
if err != nil {
panic(err)
}
fmt.Printf("Result: %v\n", result)
}
## Custom Tools
type WeatherTool struct{}
func (t *WeatherTool) Name() string { return "get_weather" }
func (t *WeatherTool) Description() string {
return "Get current weather for a city"
}
func (t *WeatherTool) Call(ctx context.Context, input string) (string, error) {
// Implementation
return "The weather in London is 15°C and sunny", nil
}
// Use with PTC agent
weather := &WeatherTool{}
agent, err := ptc.CreatePTCAgent(ptc.PTCAgentConfig{
Model: llm,
Tools: []tools.Tool{weather},
Language: ptc.LanguageGo,
})
## Server Mode Execution
// Select server mode for sandboxed execution; the executor starts
// and stops its tool server automatically (see CodeExecutor.Start)
agent, err := ptc.CreatePTCAgent(ptc.PTCAgentConfig{
	Model:         llm,
	Tools:         toolList,
	Language:      ptc.LanguagePython,
	ExecutionMode: ptc.ModeServer,
})
Advanced Features ¶
## Code Generation
The agent generates code like this:
```python
import json
# Tool imports are automatically added
from tools import calculator, weather
# User query: "Calculate 15% tip on $100 bill"
bill_amount = 100
tip_rate = 0.15
tip = calculator.multiply(bill_amount, tip_rate)
result = {
"bill_amount": bill_amount,
"tip_rate": tip_rate,
"tip_amount": tip,
"total": bill_amount + tip
}
print(json.dumps(result))
```
## Error Handling
The system includes comprehensive error handling:
- Syntax errors in generated code
- Runtime errors during execution
- Tool execution failures
- Timeout protection
- Resource usage limits
Security Considerations ¶
- Use server mode for isolation
- Set appropriate timeouts
- Monitor resource usage
- Validate tool inputs/outputs
- Consider sandboxing for untrusted code
Performance ¶
- Code execution is generally faster than multiple tool calls
- Consider caching for repeated operations
- Monitor execution time for long-running operations
- Use streaming for real-time feedback
Integration with LangGraph ¶
The PTC agent integrates seamlessly with LangGraph:
g := graph.NewStateGraph()
// Add PTC node
ptcNode := ptc.NewPTCToolNode(ptc.LanguagePython, tools)
g.AddNode("tools", ptcNode.Invoke)
// Add LLM node for reasoning
g.AddNode("reason", llmNode)
// Define execution flow
g.SetEntry("reason")
g.AddEdge("reason", "tools")
g.AddConditionalEdge("tools", shouldContinue, "continue", "end")
// Compile and run
runnable := g.Compile()
result, _ := runnable.Invoke(ctx, initialState)
Best Practices ¶
- Choose appropriate execution language based on your tools
- Use ModeServer for production environments
- Set reasonable iteration limits
- Provide clear tool descriptions
- Handle errors gracefully in your tools
- Test with various input patterns
- Monitor execution for resource usage
- Use timeouts for long-running operations
Limitations ¶
- Requires runtime environment for chosen language
- Generated code might have bugs
- Debugging generated code can be challenging
- Security risks with unrestricted code execution
Index ¶
- func BuildSystemPrompt(userPrompt string, language ExecutionLanguage, executor *CodeExecutor) string
- func ContainsCode(msg llms.MessageContent) bool
- func CreatePTCAgent(config PTCAgentConfig) (*graph.Runnable, error)
- type CodeExecutor
- func (ce *CodeExecutor) Execute(ctx context.Context, code string) (*ExecutionResult, error)
- func (ce *CodeExecutor) GetToolDefinitions() string
- func (ce *CodeExecutor) GetToolServerURL() string
- func (ce *CodeExecutor) Start(ctx context.Context) error
- func (ce *CodeExecutor) Stop(ctx context.Context) error
- type ExecutionLanguage
- type ExecutionMode
- type ExecutionResult
- type PTCAgentConfig
- type PTCToolNode
- type ToolRequest
- type ToolResponse
- type ToolServer
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func BuildSystemPrompt ¶ added in v0.6.0
func BuildSystemPrompt(userPrompt string, language ExecutionLanguage, executor *CodeExecutor) string
BuildSystemPrompt builds the system prompt with tool definitions.
func ContainsCode ¶ added in v0.6.0
func ContainsCode(msg llms.MessageContent) bool
ContainsCode checks if a message contains code to execute.
func CreatePTCAgent ¶
func CreatePTCAgent(config PTCAgentConfig) (*graph.Runnable, error)
CreatePTCAgent creates a new agent that uses programmatic tool calling. The agent generates code that calls tools programmatically rather than making traditional tool calls with round-trips.
Example ¶
package main
import (
"context"
"fmt"
"github.com/tmc/langchaingo/tools"
)
// MockTool for testing
type MockTool struct {
name string
description string
response string
}
func (t MockTool) Name() string {
return t.name
}
func (t MockTool) Description() string {
return t.description
}
func (t MockTool) Call(ctx context.Context, input string) (string, error) {
return t.response, nil
}
func main() {
// This example shows how to create a PTC agent
// Note: This requires a real LLM and won't run in tests
// Create tools
_ = []tools.Tool{
MockTool{
name: "calculator",
description: "Performs arithmetic calculations",
response: "42",
},
}
// In real usage, you would use:
// model, _ := openai.New()
// agent, _ := ptc.CreatePTCAgent(ptc.PTCAgentConfig{
// Model: model,
// Tools: tools,
// Language: ptc.LanguagePython,
// MaxIterations: 10,
// })
// result, _ := agent.Invoke(context.Background(), initialState)
fmt.Println("PTC Agent created successfully")
}
Output: PTC Agent created successfully
Types ¶
type CodeExecutor ¶
type CodeExecutor struct {
Language ExecutionLanguage
Tools []tools.Tool
Timeout time.Duration
WorkDir string
Mode ExecutionMode
// contains filtered or unexported fields
}
CodeExecutor handles the execution of programmatic tool calling code
func NewCodeExecutor ¶
func NewCodeExecutor(language ExecutionLanguage, toolList []tools.Tool) *CodeExecutor
NewCodeExecutor creates a new code executor for PTC. The default mode is ModeDirect for simplicity.
func NewCodeExecutorWithMode ¶
func NewCodeExecutorWithMode(language ExecutionLanguage, toolList []tools.Tool, mode ExecutionMode) *CodeExecutor
NewCodeExecutorWithMode creates a new code executor with specified execution mode
func (*CodeExecutor) Execute ¶
func (ce *CodeExecutor) Execute(ctx context.Context, code string) (*ExecutionResult, error)
Execute runs the generated code with access to tools
Example ¶
package main
import (
"context"
"fmt"
"github.com/smallnest/langgraphgo/ptc"
"github.com/tmc/langchaingo/tools"
)
// MockTool for testing
type MockTool struct {
name string
description string
response string
}
func (t MockTool) Name() string {
return t.name
}
func (t MockTool) Description() string {
return t.description
}
func (t MockTool) Call(ctx context.Context, input string) (string, error) {
return t.response, nil
}
func main() {
tools := []tools.Tool{
MockTool{
name: "get_data",
description: "Gets some data",
response: `{"value": 100}`,
},
}
executor := ptc.NewCodeExecutor(ptc.LanguagePython, tools)
ctx := context.Background()
executor.Start(ctx)
defer executor.Stop(ctx)
code := `
# Process data
data = {"numbers": [1, 2, 3, 4, 5]}
total = sum(data["numbers"])
print(f"Total: {total}")
`
result, err := executor.Execute(ctx, code)
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
fmt.Printf("Executed successfully: %t\n", result.Output != "")
}
Output: Executed successfully: true
func (*CodeExecutor) GetToolDefinitions ¶
func (ce *CodeExecutor) GetToolDefinitions() string
GetToolDefinitions returns tool definitions for LLM prompting
func (*CodeExecutor) GetToolServerURL ¶
func (ce *CodeExecutor) GetToolServerURL() string
GetToolServerURL returns the URL of the tool server. In Server mode, this URL is exposed to user code; in Direct mode, it returns a URL for internal use only (not exposed to the user).
func (*CodeExecutor) Start ¶
func (ce *CodeExecutor) Start(ctx context.Context) error
Start starts the code executor and its tool server. In both modes, the server is started for tool access:
- Direct mode: internal server for generic tools (not exposed in wrappers)
- Server mode: server URL exposed to user code
type ExecutionLanguage ¶
type ExecutionLanguage string
ExecutionLanguage defines the programming language for code execution
Example ¶
Example of using PTC with different execution languages
package main
import (
"context"
"fmt"
"github.com/smallnest/langgraphgo/ptc"
"github.com/tmc/langchaingo/tools"
)
// MockTool for testing
type MockTool struct {
name string
description string
response string
}
func (t MockTool) Name() string {
return t.name
}
func (t MockTool) Description() string {
return t.description
}
func (t MockTool) Call(ctx context.Context, input string) (string, error) {
return t.response, nil
}
func main() {
tools := []tools.Tool{
MockTool{name: "tool1", description: "A tool", response: "response"},
}
// Python executor
pythonExec := ptc.NewCodeExecutor(ptc.LanguagePython, tools)
fmt.Printf("Python executor created: %v\n", pythonExec != nil)
// Go executor
goExec := ptc.NewCodeExecutor(ptc.LanguageGo, tools)
fmt.Printf("Go executor created: %v\n", goExec != nil)
}
Output:

Python executor created: true
Go executor created: true
const (
	LanguagePython ExecutionLanguage = "python"
	LanguageGo     ExecutionLanguage = "go"
)
type ExecutionMode ¶
type ExecutionMode string
ExecutionMode defines how tools are executed in the code
const (
	// ModeServer: all tools are called via HTTP server (alternative)
	// - Server URL exposed to user-generated code
	// - Tools accessed via HTTP calls in Python/Go code
	// - Better isolation (sandboxed)
	// - Reliable tool execution
	ModeServer ExecutionMode = "server"

	// ModeDirect: hybrid approach for optimal performance (default, recommended)
	// - Shell/Python/File tools: embedded subprocess execution (true local)
	// - Generic tools: internal HTTP server (hidden from user code)
	// - Server starts automatically but is not exposed to the user
	// - Best of both worlds: performance + compatibility
	ModeDirect ExecutionMode = "direct"
)
type ExecutionResult ¶
ExecutionResult contains the result of code execution
type PTCAgentConfig ¶
type PTCAgentConfig struct {
// Model is the LLM to use
Model llms.Model
// Tools are the available tools
Tools []tools.Tool
// Language is the execution language for code
Language ExecutionLanguage
// ExecutionMode determines how tools are executed (default: ModeDirect)
// - ModeDirect: Tools are executed directly via subprocess (default)
// - ModeServer: Tools are executed via HTTP server (alternative)
ExecutionMode ExecutionMode
// SystemPrompt is the system prompt for the agent
SystemPrompt string
// MaxIterations is the maximum number of iterations (default: 10)
MaxIterations int
}
PTCAgentConfig configures a PTC agent
type PTCToolNode ¶
type PTCToolNode struct {
Executor *CodeExecutor
}
PTCToolNode is a graph node that handles programmatic tool calling It receives code from the LLM and executes it with tool access
func NewPTCToolNode ¶
func NewPTCToolNode(language ExecutionLanguage, toolList []tools.Tool) *PTCToolNode
NewPTCToolNode creates a new PTC tool node with default execution mode (direct)
func NewPTCToolNodeWithMode ¶
func NewPTCToolNodeWithMode(language ExecutionLanguage, toolList []tools.Tool, mode ExecutionMode) *PTCToolNode
NewPTCToolNodeWithMode creates a new PTC tool node with specified execution mode
type ToolRequest ¶
ToolRequest represents a tool execution request
type ToolResponse ¶
type ToolResponse struct {
Success bool `json:"success"`
Result string `json:"result"`
Error string `json:"error,omitempty"`
Tool string `json:"tool"`
Input any `json:"input"`
}
ToolResponse represents a tool execution response
type ToolServer ¶
type ToolServer struct {
// contains filtered or unexported fields
}
ToolServer provides an HTTP API for tool execution This allows code in any language to call Go tools via HTTP
Example (RequestFormat) ¶
Example of tool server request/response format
package main
import (
"encoding/json"
"fmt"
)
func main() {
request := map[string]any{
"tool_name": "calculator",
"input": "2 + 2",
}
requestJSON, _ := json.MarshalIndent(request, "", " ")
fmt.Printf("Tool Request:\n%s\n", string(requestJSON))
response := map[string]any{
"success": true,
"result": "4",
"tool": "calculator",
"input": "2 + 2",
}
responseJSON, _ := json.MarshalIndent(response, "", " ")
fmt.Printf("\nTool Response:\n%s\n", string(responseJSON))
}
Output:

Tool Request:
{
  "input": "2 + 2",
  "tool_name": "calculator"
}

Tool Response:
{
  "input": "2 + 2",
  "result": "4",
  "success": true,
  "tool": "calculator"
}
func NewToolServer ¶
func NewToolServer(toolList []tools.Tool) *ToolServer
NewToolServer creates a new tool server
func (*ToolServer) GetBaseURL ¶
func (ts *ToolServer) GetBaseURL() string
GetBaseURL returns the base URL of the server
func (*ToolServer) GetPort ¶
func (ts *ToolServer) GetPort() int
GetPort returns the port the server is listening on