Feature List • Typical Application Scenarios • Quick Start • Using Wanwu • Q & A • Contact Us

English | 中文 | 繁體中文
Wanwu AI Agent Platform is an enterprise-grade, one-stop, commercially friendly AI agent development platform designed for business scenarios. It is committed to providing enterprises with a secure, efficient, and compliant one-stop AI solution. Guided by the core philosophy of "technology openness and ecosystem co-construction", it integrates cutting-edge technologies such as large language models and business process automation into an AI engineering platform with a complete functional system: full model lifecycle management, MCP, web search, rapid AI agent development, enterprise knowledge base construction, and complex workflow orchestration. The platform adopts a modular architecture that supports flexible functional extension and secondary development, greatly lowering the barrier to applying AI technology while safeguarding enterprise data security and privacy. Whether a small or medium-sized enterprise needs to quickly build intelligent applications, or a large enterprise needs to intelligently transform complex business scenarios, the Wanwu AI Agent Platform provides strong technical support to accelerate digital transformation, reduce costs, improve efficiency, and drive business innovation.
🔥 Released under the permissive, commercially friendly Apache 2.0 License, allowing developers to freely extend the platform and carry out secondary development
✅ Enterprise-grade engineering: provides a complete toolchain from model management to application delivery, solving the "last mile" problem of putting LLM technology into production
✅ Open-source ecosystem: adopts the permissive, commercially friendly Apache 2.0 License, allowing developers to freely extend the platform and carry out secondary development
✅ Full-stack technical support: a professional team provides architecture consulting, performance optimization, and full-lifecycle enablement for ecosystem partners
✅ Multi-tenant architecture: offers a multi-tenant account system that meets core needs in cost control, data security isolation, elastic business scaling, industry customization, rapid go-live, and ecosystem collaboration
✅ XinChuang adaptation: already adapted to the domestic XinChuang databases TiDB and OceanBase
🚩 Core Function Modules
1. Model Management (Model Hub)
- Supports unified access and lifecycle management for hundreds of proprietary and open-source large models (including GPT, Claude, Llama, etc.)
- Deeply adapts to the OpenAI API standard and the Unicom Yuanjing model ecosystem, enabling seamless switching between heterogeneous models
- Provides multiple inference backends (vLLM, TGI, etc.) and self-hosted options to meet the computing needs of enterprises of different scales
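Because the Model Hub normalizes heterogeneous backends to the OpenAI API standard, the request body stays the same no matter which model serves it; switching models only means changing one field. A minimal sketch of this idea (the base URL and model name below are illustrative placeholders, not fixed platform values):

```python
import json

# Hypothetical values -- substitute whatever your own Model Hub entry uses.
BASE_URL = "http://localhost:8081/v1"
MODEL = "yuanjing-70b-chat"

def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-compatible /chat/completions request body.

    Because every backend speaks the same schema, swapping models is just
    a different value for the `model` field.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Hello")
print(json.dumps(payload))
```

The same payload could then be POSTed to any OpenAI-compatible endpoint registered in the hub.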
2. MCP
- Standardized interfaces: enable AI models to connect seamlessly to external tools (such as GitHub, Slack, and databases) without developing a separate adapter for each data source
- Rich built-in recommendations: integrates 100+ curated industry MCP servers that users can invoke quickly and easily
3. Web Search
- Real-time information acquisition: powerful web search retrieves the latest information from the Internet in real time. In Q&A scenarios where a user's question requires up-to-date news or data, the platform quickly searches and returns accurate results, improving the timeliness and accuracy of answers
- Multi-source data integration: aggregates diverse Internet sources, including news sites, academic databases, and industry reports. By integrating and analyzing multi-source data, it gives users more comprehensive and in-depth information; in market research, for example, relevant data can be gathered from multiple sources simultaneously for comprehensive analysis and evaluation
- Intelligent search strategy: intelligent algorithms automatically optimize the search strategy for each question to improve efficiency and accuracy. Both keyword and semantic search are supported, and results are intelligently ranked and filtered so that the most relevant and valuable information is shown first
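The "rank and filter" step above can be pictured with a toy scorer: count query-term overlap per result, drop results below a threshold, and sort the rest. The function and field names here are invented for illustration and do not reflect the platform's internal algorithm:

```python
def rank_results(query: str, results: list[dict], min_score: int = 1) -> list[dict]:
    """Score results by keyword overlap with the query, filter, and sort."""
    terms = set(query.lower().split())
    scored = []
    for r in results:
        score = sum(1 for t in terms if t in r["snippet"].lower())
        if score >= min_score:          # filter out low-value hits
            scored.append({**r, "score": score})
    return sorted(scored, key=lambda r: r["score"], reverse=True)

hits = rank_results(
    "quarterly revenue report",
    [
        {"url": "a", "snippet": "Annual weather report"},
        {"url": "b", "snippet": "Quarterly revenue grew 12% according to the report"},
        {"url": "c", "snippet": "Cooking recipes"},
    ],
)
# "b" matches three query terms, "a" matches one, "c" is filtered out
print([h["url"] for h in hits])  # ['b', 'a']
```

Real semantic search would replace the term-overlap score with embedding similarity, but the sort-and-filter shape stays the same.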
4. Visual Workflow (Workflow Studio)
- Quickly build complex AI business processes on a low-code drag-and-drop canvas
- Built-in nodes for conditional branching, APIs, large models, knowledge bases, code, MCP, and more, with support for end-to-end process debugging and performance analysis
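Conceptually, a workflow like the one built on the canvas is a set of nodes where each node transforms the data and names its successor; a conditional-branch node simply picks the successor from its input. A minimal sketch of that execution model (node names and shapes are invented, not the engine's real API):

```python
def run_workflow(nodes: dict, start: str, data: dict) -> dict:
    """Run nodes until one returns None as the next step."""
    current = start
    while current is not None:
        current, data = nodes[current](data)
    return data

def classify(data):
    # Conditional-branch node: choose the next node based on the input.
    branch = "long" if len(data["text"]) > 10 else "short"
    return branch, data

def long(data):
    return None, {**data, "label": "long-form"}

def short(data):
    return None, {**data, "label": "short-form"}

nodes = {"classify": classify, "long": long, "short": short}
result = run_workflow(nodes, "classify", {"text": "hello"})
print(result["label"])  # short-form
```

The visual editor spares users from writing such code directly; the sketch only shows why branching plus node chaining is enough to express complex processes.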
5. Enterprise Knowledge Base and RAG Pipeline
- Provides end-to-end knowledge management: knowledge base creation → document parsing → vectorization → retrieval → re-ranking; supports pdf/docx/txt/xlsx/csv/pptx documents as well as crawling and ingesting web resources
- Integrates multi-modal retrieval, cascading segmentation, and adaptive segmentation to significantly improve Q&A accuracy
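The segmentation stage of such a pipeline, in its simplest form, slices parsed documents into overlapping chunks so that retrieval does not lose context at chunk boundaries. A sketch under that assumption (parameter names are illustrative; the platform's cascading and adaptive strategies are more sophisticated):

```python
def chunk_text(text: str, size: int = 20, overlap: int = 5) -> list[str]:
    """Split text into fixed-size chunks that overlap by `overlap` characters."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("a" * 50, size=20, overlap=5)
# 50 chars with step 15 -> chunks starting at 0, 15, 30
print(len(chunks), all(len(c) <= 20 for c in chunks))  # 3 True
```

Each chunk would then be embedded and indexed; at query time the top-scoring chunks are retrieved and re-ranked before being handed to the model.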
6. AI Agent Development Framework (Agent Framework)
- Builds agents on the function calling paradigm, with support for tool extension, private knowledge base association, and multi-turn dialogue
- Supports online debugging
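The function-calling paradigm boils down to a loop: the model emits a tool name plus JSON arguments, and the agent dispatches to a registered function and feeds the result back. A minimal sketch of the dispatch half (the tool, its name, and the simulated model reply below are all made up for illustration):

```python
import json

TOOLS = {}

def tool(fn):
    """Register a Python function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real API call

def dispatch(model_reply: str) -> str:
    """Parse the model's tool-call JSON and invoke the named tool."""
    call = json.loads(model_reply)
    return TOOLS[call["name"]](**call["arguments"])

# Simulated model output requesting a tool call
reply = '{"name": "get_weather", "arguments": {"city": "Beijing"}}'
print(dispatch(reply))  # Sunny in Beijing
```

In the full loop the tool's return value is appended to the conversation and the model is called again, which is what enables multi-turn, tool-augmented dialogue.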
7. Backend as a Service (BaaS)
- Provides RESTful APIs for deep integration with existing enterprise systems (OA/CRM/ERP, etc.)
- Provides fine-grained permission control to ensure stable operation in production environments
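An enterprise system integrating over REST typically just needs to attach an API key to each request. A sketch of preparing such a call (the path and bearer-token scheme are assumptions for illustration; consult the platform's API reference for the real routes and auth scheme — no request is actually sent here):

```python
from urllib import request

def build_request(base_url: str, path: str, api_key: str) -> request.Request:
    """Prepare an authenticated JSON request to a hypothetical platform route."""
    req = request.Request(base_url.rstrip("/") + path)
    req.add_header("Authorization", f"Bearer {api_key}")
    req.add_header("Content-Type", "application/json")
    return req

req = build_request("http://localhost:8081", "/v1/agents", "sk-demo")
print(req.full_url, req.get_header("Authorization"))
```

Sending it would then be a single `urllib.request.urlopen(req)` call from whatever OA/CRM/ERP process needs the result.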
📊 Function Comparison
| Function | Wanwu | Dify.AI | FastGPT | RAGFlow | Coze (open-source version) |
| --- | --- | --- | --- | --- | --- |
| Model import | ✅ | ✅ | ❌ (built-in models) | ✅ | ❌ (built-in models) |
| RAG engine | ✅ | ✅ | ✅ | ✅ | ✅ |
| MCP | ✅ | ✅ | ✅ | ✅ (requires installing tools) | ❌ |
| Direct OCR import | ✅ | ❌ | ❌ | ❌ | ❌ |
| Search enhancement | ✅ | ✅ (requires installing tools) | ✅ | ✅ (requires installing tools) | ✅ |
| Agent | ✅ | ✅ | ✅ | ✅ | ✅ |
| Workflow | ✅ | ✅ | ✅ | ✅ | ✅ |
| Local deployment | ✅ | ✅ | ✅ | ✅ | ✅ |
| License friendly | ✅ | ❌ (commercial restrictions) | ❌ (commercial restrictions) | Not fully open source | ✅ |
| Multi-tenant | ✅ | ❌ (commercial restrictions) | ❌ (commercial restrictions) | ✅ | ✅ (tenants are not interconnected) |
As of August 1, 2025.
🎯 Typical Application Scenarios
- Intelligent Customer Service: delivers high-accuracy business consultation and ticket processing based on RAG + Agent
- Knowledge Management: builds an exclusive enterprise knowledge base with semantic search and intelligent summary generation
- Process Automation: uses the workflow engine for AI-assisted decision-making in business processes such as contract review and reimbursement approval
The platform has been successfully applied in industries such as finance, industry, and government, helping enterprises turn the theoretical value of LLM technology into real business benefits. We sincerely invite developers to join the open-source community and jointly advance the democratization of AI technology.
🚀 Quick Start
- The workflow module of the Wanwu AI Agent Platform is based on the following project; see its repository for details.
- Docker Installation (Recommended)

1. Before the first run

   1.1 Copy the environment variable file

   ```shell
   cp .env.bak .env
   ```

   1.2 Modify the WANWU_ARCH and WANWU_EXTERNAL_IP variables in the .env file to match your system

   ```shell
   # amd64 / arm64
   WANWU_ARCH=amd64
   # external IP (note: replace localhost with the machine's LAN or external IP, such as 192.168.0.xx; it cannot be localhost or 127.0.0.1)
   WANWU_EXTERNAL_IP=localhost
   ```

   1.3 Create the Docker network

   ```shell
   docker network create wanwu-net
   ```

2. Start the services (images are pulled automatically from Docker Hub on the first run)

   ```shell
   # For amd64 systems:
   docker compose --env-file .env --env-file .env.image.amd64 up -d
   # For arm64 systems:
   docker compose --env-file .env --env-file .env.image.arm64 up -d
   ```

3. Log in to the system: http://localhost:8081

   Default user: admin
   Default password: Wanwu123456

4. Stop the services

   ```shell
   # For amd64 systems:
   docker compose --env-file .env --env-file .env.image.amd64 down
   # For arm64 systems:
   docker compose --env-file .env --env-file .env.image.arm64 down
   ```
- Source Code Start (Development)

1. Following the Docker installation steps above, start all system services
2. Taking the backend bff-service as an example:

   2.1 Stop bff-service

   ```shell
   make -f Makefile.develop stop-bff
   ```

   2.2 Compile the bff-service executable

   ```shell
   # For amd64 systems:
   make build-bff-amd64
   # For arm64 systems:
   make build-bff-arm64
   ```

   2.3 Start bff-service

   ```shell
   make -f Makefile.develop run-bff
   ```
⬆️ Version Upgrade

1. Following the Docker installation steps above, stop all system services
2. Update to the latest version of the code

   2.1 In the wanwu repository directory, update the code

   ```shell
   # Switch to the main branch
   git checkout main
   # Pull the latest code
   git pull
   ```

   2.2 Re-copy the environment variable file (if any environment variables have changed, modify them again)

   ```shell
   # Back up the current .env file
   cp .env .env.old
   # Copy the .env file
   cp .env.bak .env
   ```

3. Following the Docker installation steps above, start all system services
📖 Using Wanwu
To help you get started with this project quickly, we strongly recommend first reading the Documentation Operation Manual. It provides interactive, structured operation guides where you can view operating instructions, interface documentation, and more, greatly lowering the learning curve. The detailed function list is as follows:
| Function | Detailed Description |
| --- | --- |
| Model Management | Supports importing LLM, Embedding, and Rerank models from model suppliers such as Unicom Yuanjing, OpenAI-API-compatible providers, Ollama, Tongyi Qianwen, and Volcano Engine. See Model Import Method (Detailed Version) |
| Knowledge Base | Document parsing (with OCR support), hit testing, and keyword management |
| Tool Square | 100+ curated industry MCP servers built in; also supports importing your own MCP services or custom tools and using them in workflows and agents |
| Safety Barrier | Users can maintain a sensitive-word list to control the safety of model outputs |
| Text Q&A | An exclusive knowledge consultant built on a private knowledge base, supporting knowledge base management, knowledge Q&A, knowledge summaries, personalized parameter configuration, safety barriers, search configuration, and more to improve the efficiency of knowledge management and learning. Text Q&A applications can be published publicly or privately |
| Workflow | Extends the capability boundary of agents. Composed of nodes with visual workflow editing, letting users orchestrate different node types into complex, stable business processes. Workflow applications can be published publicly or privately |
| Agent | Create agents for your usage scenarios and business needs, with model selection, prompt settings, online search, knowledge base selection, MCP, workflows, custom tools, and more. Agent applications can be published publicly or privately |
| Application Square | Lets users try out published applications, including text Q&A, workflows, and agents |
| Settings | The platform supports multi-tenancy, allowing users to manage organizations, roles, users, and basic platform configuration |
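As a conceptual illustration of the Safety Barrier function listed above, sensitive-word control amounts to screening model output against a user-maintained blocklist before it reaches the caller. This toy sketch only shows the idea; the real feature is configured in the platform UI, not written as code:

```python
def screen(text: str, blocklist: set[str], mask: str = "***") -> str:
    """Replace every occurrence of a blocked word with a mask string."""
    out = text
    for word in blocklist:
        out = out.replace(word, mask)
    return out

print(screen("the secret code is secret", {"secret"}))  # the *** code is ***
```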
📰 TO DO LIST
- Multi-modal model access
- Support custom MCP Server, which means that workflows, agents, or APIs that conform to the OpenAPI specification can be added to the MCP Server for release
- Workflow published as API
- Increase workflow node types
- Support import and export of workflows
- Knowledge base sharing
- Agent and model evaluation
- Knowledge base supports segmented content addition
- Agent monitoring statistics
- Model experience
- Prompt engineering
💬 Q & A
- [Q] On Linux, Elastic (elastic-wanwu) fails to start with the error: Memory limited without swap.
  [A] Stop the services, run `sudo sysctl -w vm.max_map_count=262144`, and then restart the services.
- [Q] On Windows, Agent (agent-wanwu) fails to start with the error: `bash: ./start_all.sh: /bin/bash^M: bad interpreter`.
  [A] Disable automatic CRLF line endings in Git by running `git config --global core.autocrlf false`, then stop the services, re-clone the wanwu repository, and restart the services.
- [Q] After the system services start normally, the mysql-wanwu-worker and elastic-wanwu-setup containers exit with status Exited (0).
  [A] This is normal. These two containers perform initialization tasks and exit automatically once they finish.
- [Q] How do I import a model?
  [A] Taking the import of a Unicom Yuanjing LLM as an example (the process is similar for OpenAI-API-compatible, Embedding, and Rerank models):
  1. The Open API endpoint of a Unicom Yuanjing MaaS Cloud LLM looks like: https://maas.ai-yuanjing.com/openapi/compatible-mode/v1/chat/completions
  2. The API Key the user applied for on Unicom Yuanjing MaaS Cloud looks like: sk-abc********************xyz
  3. Confirm that the endpoint and key can successfully reach the LLM, taking a request to yuanjing-70b-chat as an example:

  ```shell
  curl --location 'https://maas.ai-yuanjing.com/openapi/compatible-mode/v1/chat/completions' \
    --header 'Content-Type: application/json' \
    --header 'Accept: application/json' \
    --header 'Authorization: Bearer sk-abc********************xyz' \
    --data '{
      "model": "yuanjing-70b-chat",
      "messages": [{
        "role": "user",
        "content": "Hello"
      }]
    }'
  ```
4. Import the model:
4.1 [Model Name] must be the model that can be correctly requested in the curl command above; for example, yuanjing-70b-chat.
4.2 [API Key] must be the key that can be correctly requested in the curl command above; for example, sk-abc********************xyz (note: do not include the 'Bearer' prefix).
4.3 [Inference URL] must be the URL that can be correctly requested in the curl command above; for example, https://maas.ai-yuanjing.com/openapi/compatible-mode/v1 (note: do not include the /chat/completions suffix).
5. Importing an Embedding model is the same as importing an LLM as described above. Note that the inference URL should not include the /embeddings suffix.
6. Importing a Rerank model is the same as importing an LLM as described above. Note that the inference URL should not include the /rerank suffix.
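Steps 4.3, 5, and 6 all come down to the same rule: the inference URL is the verified endpoint with the operation-specific suffix stripped. A small convenience sketch of that rule (this helper is illustrative and not part of the platform):

```python
def inference_base_url(endpoint: str) -> str:
    """Strip /chat/completions, /embeddings, or /rerank from a verified endpoint."""
    for suffix in ("/chat/completions", "/embeddings", "/rerank"):
        if endpoint.endswith(suffix):
            return endpoint[: -len(suffix)]
    return endpoint

url = inference_base_url(
    "https://maas.ai-yuanjing.com/openapi/compatible-mode/v1/chat/completions"
)
print(url)  # https://maas.ai-yuanjing.com/openapi/compatible-mode/v1
```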
🙏 Acknowledgments
⚖️ License
The Yuanjing Wanwu AI Agent Platform is released under the Apache License 2.0.
QQ Group: 490071123
 |