Agently
Agently is a Go framework for building and interacting with AI agents. It provides a flexible and extensible platform for creating, managing, and communicating with AI agents powered by Large Language Models (LLMs).
Features
- Agent-based Architecture: Create and manage AI agents with different capabilities and personalities
- Multi-LLM Support: Integrate with various LLM providers including OpenAI, Vertex AI, Bedrock, and more
- Conversation Management: Maintain conversation history and context across interactions
- Tool Integration: Extend agent capabilities with custom tools
- Embeddings: Support for text embeddings for semantic search and retrieval
- CLI Interface: Interact with agents through a command-line interface
- HTTP Server: Deploy agents as web services
- Orchestration (Decoupled): Agent turns are executed without a Fluxor runtime. MCP tools and internal services are coordinated directly by Agently.
Installation
Prerequisites
- A working Go toolchain (Agently is installed with `go get` and built with `go build`)
Installing
go get github.com/viant/agently
Quick Start
# Set your OpenAI API key
export OPENAI_API_KEY=your_key
# Clone the repository
git clone https://github.com/viant/agently.git
cd agently/agently
# Set the Agently root directory (defaults to ~/.agently if not set)
export AGENTLY_ROOT=./agently_workspace
# Create the directory
mkdir -p $AGENTLY_ROOT
# Build the application
go build -o agently .
# Check available commands
./agently -h
# Start the Agently web service on port 8080
./agently serve
# Start a chat CLI session
./agently chat
How to run an MCP server
To run an MCP (Model Context Protocol) server with SQLKit support:
# Clone the MCP SQLKit repository
git clone https://github.com/viant/mcp-sqlkit.git
# Navigate to the project directory
cd mcp-sqlkit
# Start the MCP server on port 5000
go run ./cmd/mcp-sqlkit -a :5000
The server will be available at http://localhost:5000 and can be used with Agently for database operations.
Setting up Agently with MCP and SQLKit
This guide walks through setting up Agently with the MCP server and SQLKit tool for database operations.
Prerequisites
- Complete the Quick Start steps above to set up Agently
- Have a MySQL server running (this example uses a local MySQL server)
Step 1: Stop Running Services
First, ensure both Agently and any running MCP servers are stopped.
Step 2: Create the MCP Client Configuration
Create the MCP configuration file:
# Create the mcp directory if it doesn't exist
mkdir -p $AGENTLY_ROOT/mcp
# Create the SQLKit configuration file
cat > $AGENTLY_ROOT/mcp/sqlkit.yaml << EOF
name: sqlkit
version: ""
protocol: ""
namespace: ""
transport:
  type: sse
  command: ""
  arguments: []
  url: http://localhost:5000
auth: null
EOF
Update your agent configuration to include the SQLKit tool:
# Create or update the chat agent configuration
mkdir -p $AGENTLY_ROOT/agents
cat > $AGENTLY_ROOT/agents/chatter.yaml << EOF
description: Default conversational agent
id: chat
modelRef: openai_o4-mini
name: Chat
orchestrationFlow: workflow/orchestration.yaml
temperature: 0
tool:
- pattern: system/exec
- pattern: sqlkit
knowledge:
- url: knowledge/
EOF
Step 3: Start the Services
- Start the MCP server with SQLKit (follow the "How to run MCP server" steps above)
- Start Agently (run ./agently chat)
Step 4: Test Database Connectivity
Ensure your MySQL server is running. For this example, we assume:
port: 3306
user: root
password: dev
database: my_db
Step 5: Query Your Database
Send a query to test the database connection:
> tell how many tables do we have in my_db db?
A configuration wizard will guide you through setting up the MySQL connector:
1. When prompted for connector name, enter: dev
2. For driver, enter: mysql
3. For host, enter: localhost
4. For connector name, accept the default: dev
5. For port, enter your MySQL port (e.g., 3306)
6. For project, just press Enter to skip (not needed for MySQL)
7. For DB/Dataset, enter: my_db
8. For flowURI:
   8.1 Open the given link in a browser and submit the user and password (e.g., root and dev)
   8.2 Enter any value or accept the default
After configuration, Agently will execute your query and return the result, such as:
There are 340 base tables in the my_db schema when accessed via the dev connector.
You can now use natural language to query your database through Agently!
Usage
Match Defaults (auto full vs match)
You can control auto full vs match behavior and result capping via a single default in your workspace config:
default:
  match:
    # Used when a knowledge/MCP entry doesn't specify maxFiles.
    # Also drives the auto decision: if a location has more files than this,
    # the runtime switches to Embedius match; otherwise it loads files directly (full).
    maxFiles: 5
Notes:
- minScore (when provided on a knowledge/MCP entry) only filters results in match mode; it does not force match mode.
- URIs are normalized (TrimPath) for stable references and better token caching.
- System documents are injected as separate system messages (content only) and are not rendered in system.tmpl to avoid duplication.
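The auto decision described above can be sketched as a small helper (hypothetical names, not Agently's implementation):

```go
package main

import "fmt"

// Mode indicates how knowledge content is retrieved.
type Mode string

const (
	ModeFull  Mode = "full"  // load all files directly into context
	ModeMatch Mode = "match" // use Embedius similarity matching
)

// chooseMode mirrors the documented rule: if a location holds more
// files than maxFiles, switch to match; otherwise load everything.
func chooseMode(fileCount, maxFiles int) Mode {
	if fileCount > maxFiles {
		return ModeMatch
	}
	return ModeFull
}

func main() {
	fmt.Println(chooseMode(3, 5))  // full
	fmt.Println(chooseMode(12, 5)) // match
}
```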
HTTP API (v1)
The embedded server exposes a simple chat API under /v1/api:
- Create a conversation:
curl -s -X POST http://localhost:8080/v1/api/conversations | jq
# { "status": "ok", "data": { "id": "..." } }
- Post a message to a conversation:
curl -s -X POST \
-H 'Content-Type: application/json' \
-d '{"text":"Hello"}' \
http://localhost:8080/v1/api/conversations/CONV_ID/messages | jq
- Get conversation messages and status:
curl -s http://localhost:8080/v1/api/conversations/CONV_ID/messages | jq
- Fetch payload bytes (e.g., attachments or large LLM outputs):
# raw bytes (204 when empty)
curl -s -L 'http://localhost:8080/v1/api/payload/PAYLOAD_ID?raw=1' > body.bin
# JSON envelope without inline body
curl -s 'http://localhost:8080/v1/api/payload/PAYLOAD_ID?meta=1' | jq
Configuration (environment variables)
Command Line Interface
Agently provides a command-line interface for interacting with agents:
# Chat with an agent
agently chat
# Chat with an agent with a specific query
agently chat -l <agent-location> -q "Your query here"
# Continue a conversation
agently chat -l <agent-location> -c <conversation-id>
# List existing conversations
agently list
# Manage MCP servers
agently mcp list # view configured servers
agently mcp add -n local -t stdio \
--command "my-mcp" --arg "--flag"
agently mcp add -n cloud -t sse --url https://mcp.example.com/sse
agently mcp remove -n local
# List available tools (names & descriptions)
agently list-tools
# Show full JSON definition for a tool
agently list-tools -n system/exec.execute --json
# Run an agentic workflow from JSON input
agently run -i <input-file>
# Start HTTP server
agently serve
Workspace management
Agently stores all editable resources under **`$AGENTLY_ROOT`** (defaults to
`~/.agently`). Each kind has its own sub-folder:
~/.agently/
agents/ # *.yaml agent definitions
models/ # LLM or embedder configs
workflows/ # (deprecated)
mcp/ # MCP client definitions
Use the generic ws command group to list, add or remove any resource kind:
# List agents
agently ws list -k agent
# Add a model from file
agently ws add -k model -n gpt4o -f gpt4o.yaml
# Get raw YAML for workflow
agently ws get -k workflow -n plan_exec_finish
# Delete MCP server definition
agently ws remove -k mcp -n local
Clearing the agents cache
- Close all running agents.
- Delete the contents of ~/.emb.
Model convenience helpers
# Switch default model for an agent
agently model-switch -a chat -m gpt4o
# Reset agent to inherit executor default
agently model-reset -a chat
MCP helpers (stored in the workspace)
# Add/update server definition (stored as ~/.agently/mcp/local.yaml)
agently mcp add -n local -t stdio --command my-mcp
# List names or full JSON objects (with --json)
agently mcp list [--json]
# Remove definition
agently mcp remove -n local
## Forge UI
Agently embeds a Forge-based web UI with data-driven menus and windows.
- Endpoints:
- `GET /v1/workspace/metadata` — aggregated workspace metadata (defaults, agents, tools, models). Used by windows to populate forms and menus.
- `GET /v1/api/agently/forge/*` — serves embedded Forge metadata (navigation and window definitions) from `metadata/`.
- Navigation pattern (`metadata/navigation.yaml`):
- Define a menu node and point `windowKey` to a window definition under `metadata/window/...`.
Example:
```yaml
- id: tools
label: Tools
icon: function
childNodes:
- id: list
label: Catalogue
icon: list
windowKey: tool
windowTitle: Tools
```
Customize menus by editing metadata/navigation.yaml and add new windows under metadata/window/. The server automatically serves these via /v1/api/agently/forge/.
Options
Templated agent queries (velty)
Agently supports rendering prompts with the velty template engine so you can
compose rich queries from structured inputs.
Where you can use templates:
- Agent prompts can be templated (Velty). Templates render with variables:
- Prompt – the original query string
- everything from context – each key becomes a template variable
Example (templated prompt):
query: "Initial feature brief"
queryTemplate: |
  Design this feature based on:
  Brief: ${Prompt}
  Product: ${productName}
  Constraints: ${constraints}
context:
productName: "Acme WebApp"
constraints: "Ship MVP in 2 sprints"
In addition, the core generation service uses the same unified templating for
Template (with ${Prompt}) and SystemTemplate (with ${SystemPrompt}) together
with any values placed in Bind.
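To illustrate the variable model (Prompt plus every context key), here is a sketch using Go's `os.Expand` as a stand-in for the velty engine; the real engine supports much more than plain `${name}` substitution:

```go
package main

import (
	"fmt"
	"os"
)

// renderPrompt expands ${name} placeholders from the merged variable set:
// Prompt plus every key from context, as described above. os.Expand is
// used here only to illustrate the variable model, not velty's API.
func renderPrompt(template, prompt string, context map[string]string) string {
	return os.Expand(template, func(name string) string {
		if name == "Prompt" {
			return prompt
		}
		return context[name]
	})
}

func main() {
	tmpl := "Design this feature based on:\nBrief: ${Prompt}\nProduct: ${productName}\nConstraints: ${constraints}"
	out := renderPrompt(tmpl, "Initial feature brief", map[string]string{
		"productName": "Acme WebApp",
		"constraints": "Ship MVP in 2 sprints",
	})
	fmt.Println(out)
}
```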
-f, --config: Executor config YAML/JSON path
-l, --location: Agent definition path
-q, --query: User query
-c, --conv: Conversation ID (optional)
-p, --policy: Tool policy: auto|ask|deny (default: auto)
-t, --timeout: Timeout in seconds for the agent response (0=none)
--log: Unified log (LLM, TOOL, TASK) (default: agently.log)
list-tools
-n, --name – Tool name (service/method) to display full schema
--json – Print tool definitions in JSON (applies to single or all tools)
exec
-n, --name – Tool name to execute
-i, --input – Inline JSON arguments (object)
-f, --file – Path to JSON file with arguments (use - for STDIN)
--timeout – Seconds to wait for completion (default 120)
--json – Print result as JSON
Example (properly quoted for Bash/Zsh):
./agently exec -n system/exec.execute \
-i '{"commands":["echo '\''hello'\''"]}'
Development
Project Structure
cmd/agently: Command-line interface
genai/agent: Agent-related functionality
genai/conversation: Conversation management
genai/embedder: Text embedding functionality
genai/executor: Executes agent tasks or workflows
genai/extension: Extensions or plugins
genai/llm: Large Language Model integration
genai/memory: Conversation memory or history
genai/tool: Tools or utilities for agents
Further documentation
For an in-depth walkthrough of how Agently processes a request – from the CLI
invocation through agent resolution, planning, LLM call and response – see
docs/agent_flow.md. The document also explains the $AGENTLY_ROOT workspace
mechanism introduced in the 2025-06 release.
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Acknowledgments
This product includes software developed at Viant (http://viantinc.com/).