SPEAR is an advanced AI Agent platform designed to support multiple runtime environments. It provides flexibility and scalability for running AI agent workloads in various configurations. SPEAR is currently in development, with new features and improvements underway.
## Features

| Features | Support | Status |
|----------|---------|--------|
| Runtime Support | Process | ✅ Supported |
| Runtime Support | Docker Container | ✅ Supported |
| Runtime Support | WebAssembly | ⏳ Work in Progress |
| Runtime Support | Kubernetes | ⏳ Work in Progress |
| Operating Modes | Local Mode | ✅ Supported |
| Operating Modes | Cluster Mode | ⏳ Work in Progress |
| Deployment | Auto Deployment | ⏳ Work in Progress |
| Agent Service | Planning | ⏳ Work in Progress |
| Agent Service | Memory | ⏳ Work in Progress |
| Agent Service | Tools | ⏳ Work in Progress |
- Runtime Support:
  - Process
  - Docker Container
  - Future support: WebAssembly and Kubernetes (K8s)
- Operating Modes:
  - Local Mode: runs a single AI agent workload on a local machine.
  - Cluster Mode: designed to run AI agent workloads across multiple clusters (not yet implemented).
- Deployment:
  - Auto Deployment: automatically generates configuration files from application code.
- Agent Service:
  - Planning: offers planning techniques that enhance agent capabilities.
  - Memory: provides memory services for managing an agent's knowledge.
  - Tools: ships built-in tools and lets users define their own custom tools.
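To make the custom-tool idea concrete, here is a minimal, hypothetical sketch of how a user-defined tool might be registered and dispatched. The `tool` decorator and `TOOLS` registry below are illustrative stand-ins, not SPEAR's actual API:

```python
# Hypothetical sketch of a user-defined tool; SPEAR's real tool API may differ.
from typing import Callable, Dict

# A minimal registry standing in for SPEAR's built-in tool mechanism.
TOOLS: Dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a function as an agent tool under the given name."""
    def decorator(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("shout")
def shout(text: str) -> str:
    """A trivial custom tool: upper-case its input and add emphasis."""
    return text.upper() + "!"

# An agent runtime would dispatch tool calls by name:
print(TOOLS["shout"]("hello spear"))  # HELLO SPEAR!
```

The registry pattern keeps built-in and user-defined tools behind one lookup interface, which is what allows an agent to select tools by name at run time.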
## Linux OS Installation

### Dependencies

SPEAR relies on several third-party software packages. To install these packages on Linux, run the following commands:
```shell
python -m pip install --upgrade pip
pip install build
apt install portaudio19-dev libx11-dev libxtst-dev flatbuffers-compiler
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
```
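After installation, a quick sanity check (illustrative; adjust the command list to your setup) confirms the key tools are on your `PATH`:

```shell
# Report whether each required command-line tool is installed and on PATH.
for cmd in pip docker flatc; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```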
### Build Instructions

To build SPEAR and its related components, run the following command:

```shell
make
```
This command will:
- Compile all required binaries.
- Build Docker images for the related AI Agent workloads.
### Usage

To run SPEAR in local mode, use the following commands:

```shell
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
export HUGGINGFACEHUB_API_TOKEN=<YOUR_HUGGINGFACEHUB_API_TOKEN>
export SPEAR_RPC_ADDR=<YOUR_LOCAL_SPEAR_RPC_ADDR>
bin/spearlet exec -n pyconversation
```
This command will:
- Start the SPEAR spearlet process in local mode.
- Run the AI agent workload pyconversation (workload ID 6, pyconversation-local).

Note that the OPENAI_API_KEY environment variable must be set to a valid OpenAI API key. Support for other LLM providers is planned.
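The exports above define the contract between the spearlet and a workload: configuration flows through environment variables. Purely as an illustration (the `load_config` helper is hypothetical; only the variable names come from this README), a Python workload might validate that contract like this:

```python
# Illustrative only: how a workload might pick up the configuration that the
# exports above place in its environment. The helper itself is hypothetical;
# the variable names (OPENAI_API_KEY, SPEAR_RPC_ADDR) come from this README.
import os

def load_config(env=os.environ):
    """Collect SPEAR-related settings, failing fast if the API key is absent."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY must be set before running the workload")
    return {
        "openai_api_key": key,
        "spear_rpc_addr": env.get("SPEAR_RPC_ADDR"),  # may be None if unset
    }
```

Failing fast on a missing API key surfaces misconfiguration at startup instead of mid-conversation, when an LLM call would otherwise error out.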
## macOS Installation

### Dependencies

PortAudio is required for the audio-processing component. To install PortAudio (and FlatBuffers) on macOS, run the following command:

```shell
brew install portaudio flatbuffers
```
### Build Instructions

To build SPEAR and its related components, run the following command:

```shell
make
```
This command will:
- Compile all required binaries.
- Build Docker images for the related AI Agent workloads.
### Usage

To run SPEAR in local mode, use the following commands:

```shell
# if you are using the official OpenAI API, set OPENAI_API_BASE=https://api.openai.com/v1
export OPENAI_API_BASE=<YOUR_OPENAI_API_BASE>
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
bin/spearlet exec -n pyconversation
```
This command will:
- Start the SPEAR spearlet process in local mode.
- Run the AI agent workload pyconversation (workload ID 6, pyconversation-local).

Note that the OPENAI_API_KEY environment variable must be set to a valid OpenAI API key. Support for other LLM providers is planned.
## Development Status

- Supported runtimes: Process, Docker Container
- Planned runtimes: WebAssembly, Kubernetes
## Future Plans
- Implementation of cluster mode to enable distributed AI agent workloads across multiple clusters.
- Expansion of runtime support to include WebAssembly and Kubernetes.
## Contributing
Contributions are welcome! Please open an issue or submit a pull request to discuss new features, bug fixes, or enhancements.
## License
This project is licensed under the Apache License 2.0.