# CORTEX

CORTEX is a high-performance AI Agent framework built in Go, engineered for the efficient integration and orchestration of Large Language Models (LLMs).
Overview · Features · Quick Start · Core Components · Tools · Memory System · Examples · License
English | 简体中文
## Overview
By bridging the simplicity of a lightweight framework with the robustness of Go, CORTEX enables seamless integration with leading LLMs. It provides a comprehensive toolkit for building AI agents capable of complex tool execution and reasoning.
Designed for production environments, CORTEX prioritizes reliability, configurability, and resource efficiency. It empowers developers to build next-generation AI applications with the performance and safety guarantees inherent to the Go ecosystem.
Design Philosophy: While sharing core capabilities with n8n's AI Agent, CORTEX adopts a minimalist approach. It eliminates the heavy dependencies and complex orchestration overhead of n8n, focusing instead on streamlined integration and low resource footprint. This makes it the ideal choice for developers who need powerful Agent capabilities without the bloat of a full-fledged workflow automation platform.
## Features
- Intelligent Agent Engine: A robust core for building agents with advanced reasoning and tool-calling capabilities.
- Broad LLM Support: Seamless integration with OpenAI, DeepSeek, Volcengine, and custom providers.
- Multi-Modal Native: Effortlessly process and generate text, images, and other media formats.
- Dynamic Skills: File-system-based skill management with Lazy Loading for optimal performance.
- Extensible Tooling: Built-in support for MCP and HTTP clients, making tool extension trivial.
- Real-Time Streaming: Full support for response streaming, enabling interactive, low-latency user experiences.
- Hybrid Memory Architecture: Implements a hybrid strategy combining full conversation history with rolling summaries. This approach optimizes token usage while retaining full context, backed by asynchronous compression to ensure low latency under high concurrency. Compatible with LangChain, MongoDB, Redis, MySQL, and SQLite.
- Granular Configuration: Extensive options to fine-tune agent behavior and performance.
- Parallel Execution: Efficiently execute multiple tool calls concurrently to minimize wait times.
- Production-Grade Reliability: Comprehensive error handling and retry mechanisms built for stability.
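The Parallel Execution feature above can be sketched in plain Go: each tool call runs in its own goroutine, and results are written into a pre-sized slice so output order matches input order. The `toolCall` type and `runTool` function below are illustrative stand-ins, not part of the Cortex API.

```go
package main

import (
	"fmt"
	"sync"
)

// toolCall is a stand-in for a single tool invocation requested by the LLM.
type toolCall struct {
	Name  string
	Input string
}

// runTool simulates executing one tool; in a real agent this would
// dispatch to a registered tool implementation.
func runTool(c toolCall) string {
	return fmt.Sprintf("%s(%s) -> ok", c.Name, c.Input)
}

// executeParallel runs all tool calls concurrently and preserves
// the original order of results.
func executeParallel(calls []toolCall) []string {
	results := make([]string, len(calls))
	var wg sync.WaitGroup
	for i, c := range calls {
		wg.Add(1)
		go func(i int, c toolCall) {
			defer wg.Done()
			results[i] = runTool(c)
		}(i, c)
	}
	wg.Wait()
	return results
}

func main() {
	calls := []toolCall{{"weather", "New York"}, {"search", "golang"}}
	for _, r := range executeParallel(calls) {
		fmt.Println(r)
	}
}
```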
## Architecture Overview
Cortex follows a modular architecture with the following key components:
> Note: The agent package is built on top of LangChain, leveraging its LLM interaction and tool-calling capabilities to build intelligent agent systems.
```
cortex/
├── agent/          # Core agent functionality
│   ├── engine/     # Agent engine implementation
│   ├── llm/        # LLM provider integrations
│   ├── skills/     # Skill loading and management
│   ├── tools/      # Tool ecosystem (MCP, HTTP)
│   ├── types/      # Core type definitions
│   ├── providers/  # External service providers
│   ├── errors/     # Error handling
│   └── logger/     # Structured logging
├── trigger/        # Trigger modules
│   ├── http/       # HTTP trigger (REST API)
│   └── mcp/        # MCP trigger (MCP server)
└── examples/       # Example applications
```
## Quick Start
### Installation

```shell
go get github.com/xichan96/cortex
```
### Minimal Example

Create a simple conversational agent in a few lines:
```go
package main

import (
	"fmt"
	"time"

	"github.com/xichan96/cortex/agent/engine"
	"github.com/xichan96/cortex/agent/llm"
	"github.com/xichan96/cortex/agent/types"
)

func main() {
	// 1. Initialize LLM provider
	llmProvider, err := llm.OpenAIClient("your-api-key", "gpt-4o-mini")
	if err != nil {
		panic(err)
	}

	// 2. Configure agent
	agentConfig := types.NewAgentConfig()
	agentConfig.SystemMessage = "You are a helpful AI assistant."
	agentConfig.Timeout = 30 * time.Second

	// 3. Create engine
	agentEngine := engine.NewAgentEngine(llmProvider, agentConfig)

	// 4. Execute
	result, err := agentEngine.Execute("What is the weather in New York?", nil)
	if err != nil {
		fmt.Printf("Execution failed: %v\n", err)
		return
	}
	fmt.Printf("Response: %s\n", result.Output)
}
```
### Run the Service
Cortex ships with a ready-to-deploy HTTP server:
```shell
# Run with default config
go run cortex.go

# Run with custom config
go run cortex.go -config /path/to/cortex.yaml
```
Default endpoints (port `:5678`):

- `POST /chat`: Standard chat
- `POST /chat/stream`: Streaming chat (SSE)
- `ANY /mcp`: MCP protocol endpoint
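As a quick smoke test, the chat endpoints can be exercised with curl once the server is running. The JSON body shown here (a `message` field plus a `session_id`) is an assumption for illustration; check your configuration and handler code for the exact request schema.

```shell
# Standard chat (request body shape is assumed; adjust to the actual schema)
curl -X POST http://localhost:5678/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the weather in New York?", "session_id": "demo"}'

# Streaming chat over SSE; -N disables curl's buffering so events print as they arrive
curl -N -X POST http://localhost:5678/chat/stream \
  -H "Content-Type: application/json" \
  -d '{"message": "Tell me a story", "session_id": "demo"}'
```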
## Core Components
### LLM Integration
Unified interface for major LLM providers:
```go
// OpenAI
llmProvider, _ := llm.OpenAIClient("sk-...", "gpt-4o")

// DeepSeek
llmProvider, _ := llm.QuickDeepSeekProvider("sk-...", "deepseek-chat")

// Volcengine
llmProvider, _ := llm.VolceClient("ak-...", "doubao-pro-32k")
```
### Triggers
Expose your agent via different protocols.
#### HTTP Trigger
Standard RESTful API for chat and streaming.
#### MCP Trigger
Fully compliant with the Model Context Protocol (MCP), allowing your agent to serve as a tool for MCP clients (e.g., Claude Desktop).
### Tools

Extensive built-in tool library:
- MCP Tools: Connect to any MCP Server.
- Web Search: Integrated search engines.
- File Operations: Safe filesystem access.
- SSH: Remote server management.
- Email: Send HTML/Text emails.
- Math: Complex calculations.
- System Command: Secure shell execution.
Implement the `types.Tool` interface to extend capabilities:

```go
// MyTool is a minimal custom tool.
type MyTool struct{}

func (t *MyTool) Name() string        { return "my_tool" }
func (t *MyTool) Description() string { return "A custom tool" }

func (t *MyTool) Execute(input map[string]interface{}) (interface{}, error) {
	// Business logic goes here.
	return "Result", nil
}
```
## Memory System
Cortex features an advanced Hybrid Memory Architecture designed for long-running conversations.
### Key Features
- Raw History: Preserves every interaction for complete auditability.
- Rolling Summary: Asynchronously generates concise summaries of past conversations.
- Smart Retrieval: Dynamically constructs prompts using "Summary + Recent Context" to maximize information density within token limits.
- Async Processing: Summary generation runs in the background with automatic panic recovery, so user-facing requests are never blocked on compression.
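The "Summary + Recent Context" strategy can be illustrated with a minimal, self-contained sketch. Everything here is illustrative rather than the actual Cortex implementation: `summarize` is a synchronous stub standing in for an asynchronous LLM summarization call, and the real providers persist state to a storage backend.

```go
package main

import (
	"fmt"
	"strings"
)

// hybridMemory keeps the full raw history plus a rolling summary of older turns.
type hybridMemory struct {
	history    []string // every message, never discarded (auditability)
	summary    string   // rolling summary covering history[:summarized]
	summarized int      // how many messages the summary already covers
	keep       int      // recent messages included verbatim in the prompt
}

// summarize stands in for an LLM summarization call; in Cortex this
// compression runs asynchronously off the request path.
func summarize(prev string, msgs []string) string {
	return strings.TrimSpace(prev + " " + strings.Join(msgs, " "))
}

// Add appends a message and folds older messages into the rolling summary
// once more than `keep` messages are outside it.
func (m *hybridMemory) Add(msg string) {
	m.history = append(m.history, msg)
	if tail := len(m.history) - m.keep; tail > m.summarized {
		m.summary = summarize(m.summary, m.history[m.summarized:tail])
		m.summarized = tail
	}
}

// Prompt builds the "summary + recent context" view for the next model call.
func (m *hybridMemory) Prompt() string {
	recent := m.history[m.summarized:]
	if m.summary == "" {
		return strings.Join(recent, "\n")
	}
	return "Summary: " + m.summary + "\n" + strings.Join(recent, "\n")
}

func main() {
	m := &hybridMemory{keep: 2}
	for _, msg := range []string{"hi", "hello", "how are you", "fine"} {
		m.Add(msg)
	}
	fmt.Println(m.Prompt())
	// Prints:
	// Summary: hi hello
	// how are you
	// fine
}
```

The point of the split index is that the raw history stays intact for auditing while the prompt stays bounded: only the summary plus the last `keep` messages are sent to the model.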
### Storage Backends
Switch storage with a single line of config:
- Memory (Default): Ephemeral, for testing.
- Redis: High-performance KV store (Recommended for production).
- MongoDB: Flexible document store.
- MySQL / SQLite: Relational database support.
```go
// Example: Redis-backed memory
redisClient := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
memory := providers.NewRedisMemoryProvider(redisClient, "session-id")
agentEngine.SetMemory(memory)
```
## Examples

See the `examples/` directory for runnable example applications.
## Contributing
Issues and Pull Requests are welcome!
## License
MIT License