# Tool Call Channel Feature

## Overview
The Tool Call Channel feature allows you to monitor and track each tool calling step in real-time during LLM iterations. This is useful for debugging, logging, monitoring, and building external systems that need to track tool execution.
## Key Components

### 1. `ToolCallStep` Struct
```go
type ToolCallStep struct {
	Iteration     int       // Current iteration number (0-based)
	ToolName      string    // Name of the tool being called
	ToolArguments string    // Arguments passed to the tool (JSON string)
	ToolResult    string    // Result returned from the tool
	Timestamp     time.Time // When this step occurred
}
```
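As a quick illustration, a step received from the channel can be rendered as a single log line. The struct below mirrors the definition above; `formatStep` is a hypothetical helper, not part of the library:

```go
package main

import (
	"fmt"
	"time"
)

// ToolCallStep mirrors the struct shown above.
type ToolCallStep struct {
	Iteration     int
	ToolName      string
	ToolArguments string
	ToolResult    string
	Timestamp     time.Time
}

// formatStep renders one step as a compact, single-line log entry.
func formatStep(s ToolCallStep) string {
	return fmt.Sprintf("[iter %d] %s(%s) -> %s",
		s.Iteration, s.ToolName, s.ToolArguments, s.ToolResult)
}

func main() {
	step := ToolCallStep{
		Iteration:     0,
		ToolName:      "calculator",
		ToolArguments: `{"a":6,"b":7}`,
		ToolResult:    "42",
		Timestamp:     time.Now(),
	}
	fmt.Println(formatStep(step)) // [iter 0] calculator({"a":6,"b":7}) -> 42
}
```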
### 2. `WithToolCallChannel` Option

```go
func (llm *LLMContainer) WithToolCallChannel(toolCallChannel chan ToolCallStep) LLMCallOption
```
Sets a channel to receive tool call step information during LLM iterations.
## How It Works

1. **Iteration Loop**: When the LLM makes tool calls, it iterates until either:
   - the LLM returns a response with no tool calls (the final answer), or
   - the maximum number of iterations is reached (default: 10).

2. **Channel Broadcasting**: After each tool is executed, a `ToolCallStep` is sent to the channel (if one is configured) containing:
   - the current iteration number
   - the tool name
   - the tool arguments (as a JSON string)
   - the tool result
   - the timestamp

3. **Async Sending**: Tool call steps are sent asynchronously via goroutines with panic recovery, so channel issues cannot affect the main flow.
## Usage Example

```go
package main

import (
	"fmt"
	"log"

	aillm "github.com/rezaice07/aillm/controller"
	"github.com/tmc/langchaingo/llms"
)

func main() {
	// Initialize the LLM container
	llm := aillm.LLMContainer{}
	llm.LLMClient = &aillm.OpenAIClient{
		Config: aillm.LLMConfig{
			APIToken: "your-api-token",
			AiModel:  "gpt-4o-mini",
		},
	}
	if err := llm.Init(); err != nil {
		log.Fatal(err)
	}

	// Create a buffered channel for tool call steps
	toolCallChannel := make(chan aillm.ToolCallStep, 10)

	// Monitor tool calls in a separate goroutine
	go func() {
		for step := range toolCallChannel {
			fmt.Printf("Iteration %d: %s(%s) = %s\n",
				step.Iteration,
				step.ToolName,
				step.ToolArguments,
				step.ToolResult)
		}
	}()

	// Define your tools
	tools := aillm.AillmTools{
		Handlers: map[string]func(interface{}) (string, error){
			"calculator": func(args interface{}) (string, error) {
				// Your tool implementation
				return "42", nil
			},
		},
		Tools: []llms.Tool{
			// Your tool definitions
		},
	}

	// Make the LLM call with tool call monitoring
	result, err := llm.AskLLM(
		"What is 6 times 7?",
		llm.WithTools(tools),
		llm.WithToolCallChannel(toolCallChannel),
		llm.WithMaxToolCallIteration(5),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Close the channel when done
	close(toolCallChannel)

	fmt.Printf("Final answer: %s\n", result.Response.Choices[0].Content)
}
```
## Use Cases

### 1. Real-time Monitoring
Monitor tool execution in real-time for debugging or logging purposes.
```go
go func() {
	for step := range toolCallChannel {
		log.Printf("[Tool Execution] %s at iteration %d", step.ToolName, step.Iteration)
	}
}()
```
### 2. Building UI Progress Indicators
Update UI with tool execution progress.
```go
go func() {
	for step := range toolCallChannel {
		updateUI(fmt.Sprintf("Calling %s...", step.ToolName))
	}
}()
```
### 3. Analytics and Metrics
Collect metrics on tool usage and performance.
```go
type ToolMetrics struct {
	CallCount   map[string]int
	AvgDuration map[string]time.Duration
}

metrics := &ToolMetrics{
	CallCount: make(map[string]int),
}

go func() {
	for step := range toolCallChannel {
		metrics.CallCount[step.ToolName]++
		// Additional analytics...
	}
}()
```
### 4. External System Integration
Send tool execution events to external monitoring systems.
```go
go func() {
	for step := range toolCallChannel {
		// Send to external monitoring system
		sendToMonitoring(step)
	}
}()
```
## Best Practices

1. **Buffered Channels**: Use a buffered channel to prevent blocking during rapid tool calls:

   ```go
   toolCallChannel := make(chan aillm.ToolCallStep, 10)
   ```

2. **Always Close**: Close the channel after your LLM operation completes:

   ```go
   defer close(toolCallChannel)
   ```

3. **Goroutine for Reading**: Always read from the channel in a separate goroutine to avoid blocking the main execution.

4. **Error Handling**: Channel sends are protected with panic recovery, but you should still handle channel closure appropriately on the consuming side.

5. **Channel Size**: Size your buffer based on expected tool call volume:
   - Low volume (1-5 tools): buffer of 5-10
   - High volume (10+ tools): buffer of 20-50
   - Continuous monitoring: consider an unbuffered channel with a dedicated reader
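The close-and-drain lifecycle above can be combined into one pattern: have the reader signal on a `done` channel when the step channel is closed, so you know every buffered event was consumed before you print results or exit. A self-contained sketch (using a local stand-in for the step type):

```go
package main

import "fmt"

type ToolCallStep struct {
	Iteration int
	ToolName  string
}

func main() {
	ch := make(chan ToolCallStep, 10)
	done := make(chan struct{})

	// Reader goroutine: ranges until the channel is closed, then signals done.
	go func() {
		defer close(done)
		for step := range ch {
			fmt.Printf("iteration %d: %s\n", step.Iteration, step.ToolName)
		}
	}()

	// Simulate the LLM loop producing steps.
	for i := 0; i < 3; i++ {
		ch <- ToolCallStep{Iteration: i, ToolName: "calculator"}
	}

	close(ch) // after the LLM operation completes
	<-done    // wait until every buffered event has been processed
	fmt.Println("all events drained")
}
```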
## Configuration Options

### WithMaxToolCallIteration

Control the maximum number of iterations:

```go
llm.AskLLM(query,
	llm.WithTools(tools),
	llm.WithMaxToolCallIteration(10), // Default is 10 if not specified
	llm.WithToolCallChannel(channel),
)
```
## Thread Safety
- Channel sends are protected with panic recovery
- Goroutines are used for async sending to prevent blocking
- No shared state between iterations
## Performance Considerations
- Minimal overhead: Channel sends are asynchronous
- No impact on LLM execution flow
- Buffered channels prevent blocking
- Panic recovery ensures robustness
## Related Options

- `WithTools()`: Provide tools for the LLM to use
- `WithMaxToolCallIteration()`: Set the maximum iteration limit
- `WithStreamingFunc()`: Stream LLM response chunks
- `WithActionCallFunc()`: Monitor the LLM action lifecycle
## Troubleshooting

### Channel Blocking
If your application hangs, ensure you're reading from the channel in a goroutine.
### Missed Events
If you're missing tool call events, increase your channel buffer size.
### Panic on Closed Channel
The library handles this automatically with panic recovery, but ensure you close channels only after all operations complete.
## Advanced Example: Complete Monitoring System
```go
type ToolMonitor struct {
	channel  chan aillm.ToolCallStep
	mu       sync.Mutex // guards metrics: monitor() writes while PrintStats() may read
	metrics  map[string]*ToolMetric
	stopChan chan struct{}
}

type ToolMetric struct {
	Calls     int
	LastCall  time.Time
	TotalTime time.Duration
}

func NewToolMonitor() *ToolMonitor {
	m := &ToolMonitor{
		channel:  make(chan aillm.ToolCallStep, 50),
		metrics:  make(map[string]*ToolMetric),
		stopChan: make(chan struct{}),
	}
	go m.monitor()
	return m
}

func (m *ToolMonitor) monitor() {
	for {
		select {
		case step, ok := <-m.channel:
			if !ok {
				return
			}
			m.mu.Lock()
			if m.metrics[step.ToolName] == nil {
				m.metrics[step.ToolName] = &ToolMetric{}
			}
			metric := m.metrics[step.ToolName]
			metric.Calls++
			metric.LastCall = step.Timestamp
			m.mu.Unlock()
			fmt.Printf("[%s] Tool: %s, Iteration: %d\n",
				step.Timestamp.Format("15:04:05"),
				step.ToolName,
				step.Iteration)
		case <-m.stopChan:
			return
		}
	}
}

func (m *ToolMonitor) GetChannel() chan aillm.ToolCallStep {
	return m.channel
}

func (m *ToolMonitor) Stop() {
	close(m.channel) // monitor() exits once the channel is drained and closed
	close(m.stopChan)
}

func (m *ToolMonitor) PrintStats() {
	m.mu.Lock()
	defer m.mu.Unlock()
	fmt.Println("\n=== Tool Usage Statistics ===")
	for name, metric := range m.metrics {
		fmt.Printf("%s: %d calls, last at %s\n",
			name,
			metric.Calls,
			metric.LastCall.Format("15:04:05"))
	}
}
```
```go
// Usage:
monitor := NewToolMonitor()
defer monitor.Stop()

result, err := llm.AskLLM(
	query,
	llm.WithTools(tools),
	llm.WithToolCallChannel(monitor.GetChannel()),
	llm.WithMaxToolCallIteration(10),
)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Final answer: %s\n", result.Response.Choices[0].Content)

monitor.PrintStats()
```
## Conclusion
The Tool Call Channel feature provides powerful real-time monitoring capabilities for tool execution during LLM iterations. It's designed to be non-intrusive, performant, and easy to integrate into existing applications.