# Mattermost Agents Plugin

The Mattermost Agents Plugin integrates AI capabilities directly into your Mattermost workspace. Run any local LLM on your infrastructure or connect to cloud providers - you control your data and deployment.

## Key Features

- **Multiple AI Assistants**: Configure different agents with specialized personalities and capabilities
- **Thread & Channel Summarization**: Get concise summaries of long discussions with a single click
- **Action Item Extraction**: Automatically identify and extract action items from threads
- **Meeting Transcription**: Transcribe and summarize meeting recordings
- **Semantic Search**: Find relevant content across your Mattermost instance using natural language
- **Smart Reactions**: Let AI suggest contextually appropriate emoji reactions
- **Direct Conversations**: Chat directly with AI assistants in dedicated channels
- **Flexible LLM Support**: Use local models (Ollama, vLLM, etc.), cloud providers (OpenAI, Anthropic, Azure), or any OpenAI-compatible API

## Documentation

Comprehensive documentation is available in the /docs directory.

## Installation

- Download the latest release from the releases page. You can also download an experimental build of the latest master.
- Upload and enable the plugin through the Mattermost System Console
- Configure your desired LLM provider settings

## System Requirements

- Mattermost Server versions:
  - v10.0 or later recommended
  - v9.11+ (ESR)
- PostgreSQL database with the pgvector extension for semantic search capabilities (see the verification sketch after this list)
- Network access to your chosen LLM provider
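
For semantic search, the pgvector extension must be enabled in the Mattermost database (typically via `CREATE EXTENSION IF NOT EXISTS vector;`). The following is a minimal sketch for verifying the extension is installed; the connection string is a placeholder for illustration, and this helper is not part of the plugin:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
	// Adjust the DSN to match your deployment; this one is a placeholder.
	db, err := sql.Open("postgres", "postgres://mmuser:password@localhost:5432/mattermost?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// pgvector registers itself in pg_extension under the name "vector".
	var installed bool
	err = db.QueryRow(`SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'vector')`).Scan(&installed)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pgvector installed:", installed)
}
```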

## Quick Start

After installation, complete these steps to get started:
- Navigate to System Console > Plugins > Agents
- Create an agent and configure it with your LLM provider credentials
- Set permissions for who can access the agent
- Open the Agents panel from any channel using the AI icon in the right sidebar
- Start interacting with your AI assistant
For detailed configuration instructions, see the Admin Guide.

## Integration

### Bridge Client

The plugin provides a Go client library for other Mattermost plugins and the Mattermost server to interact with the AI plugin's LLM Bridge API. This allows you to easily add AI capabilities to your own plugins or server-side features.
See the Bridge Client README for installation and usage instructions.
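
For a rough sense of the wiring, the sketch below reaches the AI plugin through Mattermost's generic inter-plugin HTTP mechanism (`PluginHTTP`) rather than the Bridge Client itself. The route `/api/v1/completion`, the payload shape, and the plugin ID `mattermost-ai` are assumptions for illustration; the Bridge Client README documents the real API:

```go
package myplugin

import (
	"bytes"
	"encoding/json"
	"errors"
	"io"
	"net/http"

	"github.com/mattermost/mattermost/server/public/plugin"
)

// askAgent is a hypothetical helper; the endpoint and request body below
// are illustrative, not the Bridge API's actual contract.
func askAgent(api plugin.API, prompt string) (string, error) {
	payload, err := json.Marshal(map[string]string{"prompt": prompt})
	if err != nil {
		return "", err
	}

	// PluginHTTP routes the request to another plugin's ServeHTTP; the first
	// path segment names the target plugin (assumed to be "mattermost-ai").
	req, err := http.NewRequest(http.MethodPost, "/mattermost-ai/api/v1/completion", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/json")

	resp := api.PluginHTTP(req)
	if resp == nil {
		return "", errors.New("no response from the AI plugin")
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	return string(out), err
}
```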

## Development

### Prerequisites

- Go 1.24+
- Node.js 20.11+
- Access to an LLM provider (OpenAI, Anthropic, etc.)

### Local Setup

1. Set up your Mattermost development environment by following the Mattermost developer setup guide. If you have a remote Mattermost server you want to develop against, you can skip this step.

2. Set up your Mattermost plugin development environment by following the Plugin Developer setup guide.

3. Clone the repository:

   ```bash
   git clone https://github.com/mattermost/mattermost-plugin-ai.git
   cd mattermost-plugin-ai
   ```

4. Optional: if you are developing against a remote server, set the environment variables used for deployment:

   ```bash
   export MM_SERVICESETTINGS_SITEURL=http://localhost:8065
   export MM_ADMIN_USERNAME=<YOUR_USERNAME>
   export MM_ADMIN_PASSWORD=<YOUR_PASSWORD>
   ```

5. Build and deploy the plugin:

   ```bash
   make deploy
   ```

### Other make commands

- Run `make help` for a list of all make commands
- Run `make check-style` to verify code style
- Run `make test` to run the test suite
- Run `make e2e` to run the e2e tests
- Run `make evals` to run prompt evaluations interactively (with TUI)
- Run `make evals-ci` to run prompt evaluations in CI mode (non-interactive)

### Benchmark Tests

The streaming code has benchmark tests to measure performance and detect regressions:

```bash
# Run all streaming benchmarks
go test -bench=. -benchmem ./llm/... ./streaming/...

# Run a specific benchmark
go test -bench=BenchmarkStreamToPost -benchmem ./streaming/...

# Run with CPU profiling
go test -bench=BenchmarkReadAll -cpuprofile=cpu.prof ./llm/...
```

Benchmarks cover:

- `ReadAll()` stream consumption with varying response sizes
- `TokenUsageLoggingWrapper` interception overhead
- `StreamToPost()` full processing pipeline with WebSocket events
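
For orientation, a streaming benchmark generally has the shape sketched below. This is a simplified stand-in, not one of the plugin's actual benchmarks; the payload and reader are synthetic, and the real tests under llm/ and streaming/ exercise the plugin's own types:

```go
package llm_test

import (
	"io"
	"strings"
	"testing"
)

// BenchmarkReadAllSketch measures the cost of consuming a full response
// stream. The string reader is a synthetic stand-in for an LLM stream.
func BenchmarkReadAllSketch(b *testing.B) {
	payload := strings.Repeat("token ", 10_000) // ~60 KB synthetic response
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		r := strings.NewReader(payload)
		if _, err := io.ReadAll(r); err != nil {
			b.Fatal(err)
		}
	}
}
```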

### Multi-Provider Evaluation Support

The evaluation system supports testing with multiple LLM providers: OpenAI, Anthropic, and Azure OpenAI. By default, evaluations run against all providers, but you can target specific ones:

```bash
# Run with all providers (default)
make evals

# Run with only Anthropic
LLM_PROVIDER=anthropic make evals

# Run with OpenAI and Azure
LLM_PROVIDER=openai,azure make evals

# Use a specific model
ANTHROPIC_MODEL=claude-3-opus-20240229 make evals
```
See cmd/evalviewer/README.md for complete documentation on environment variables and configuration options.

## License

This repository is licensed under Apache-2.0, except for the server/enterprise directory, which is licensed under the Mattermost Source Available License. See the Mattermost Source Available License to learn more.