🧠 Langchaingo + DuckDuckGo with Model Context Protocol (MCP)

This project demonstrates a zero-config application using Langchaingo and the Model Context Protocol (MCP) to answer natural language questions by performing real-time web search via DuckDuckGo — all orchestrated with Docker Compose.

Tip: ✨ No configuration needed; run it with a single command.

[Demo: Langchaingo DuckDuckGo Search]

🚀 Getting Started

Requirements

Docker Desktop, which provides both the local models (via Docker Model Runner) and the MCP gateway used by this demo.

Run the project
docker compose up

No setup, API keys, or additional configuration required.

Test the project
go test -v ./...

This command runs all the tests in the project, using Testcontainers Go to spin up the containers the tests need:

  1. Docker Model Runner: a socat container that forwards the Model Runner API to the test process, letting it talk to the local LLM models provided by Docker Desktop.
  2. Docker MCP Gateway: Docker's MCP gateway container, which gives the test process access to the MCP servers and tools provided by Docker Desktop (in this case, DuckDuckGo).

No port conflicts happen, because Testcontainers Go automatically exposes each container's known ports on a random, free port on the host. Therefore, you can run the tests as many times as you want, even without stopping the Docker Compose application.
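
As an illustration of that mechanism, here is a minimal, self-contained Testcontainers Go sketch; the nginx image and port are stand-ins, not the containers this project actually starts:

package demo_test

import (
    "context"
    "testing"

    "github.com/testcontainers/testcontainers-go"
)

func TestRandomHostPort(t *testing.T) {
    ctx := context.Background()

    // Start a throwaway container exposing port 80.
    ctr, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: testcontainers.ContainerRequest{
            Image:        "nginx:alpine",
            ExposedPorts: []string{"80/tcp"},
        },
        Started: true,
    })
    if err != nil {
        t.Fatal(err)
    }
    t.Cleanup(func() { _ = ctr.Terminate(ctx) })

    // Port 80 is published on a random free host port, so repeated
    // or parallel runs never collide with the Compose application.
    hostPort, err := ctr.MappedPort(ctx, "80/tcp")
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("container reachable on localhost:%s", hostPort.Port())
}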

All containers started by Testcontainers Go are automatically cleaned up after the tests finish, so you don't need to worry about cleaning them up manually.

String comparison tests

This test checks whether the answer is correct by comparing it to a reference answer. As you can imagine, given the non-deterministic nature of the LLM, such an exact comparison is not very robust.
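
A sketch of the shape of such a test; askModel stands in for the project's actual chat call, and the question and reference answer are invented, so the real TestChat_stringComparison may look different:

package demo_test

import (
    "context"
    "strings"
    "testing"
)

// askModel is a stand-in for the project's chat call; the real
// implementation would invoke the Langchaingo ChatClient.
func askModel(ctx context.Context, question string) (string, error) {
    return "Yes, Langchaingo supports the Model Context Protocol.", nil
}

func TestStringComparison(t *testing.T) {
    got, err := askModel(context.Background(), "Does Langchaingo support MCP?")
    if err != nil {
        t.Fatal(err)
    }
    // Exact comparison is brittle: the LLM may phrase the same fact
    // differently on every run.
    want := "Yes, Langchaingo supports the Model Context Protocol."
    if !strings.EqualFold(strings.TrimSpace(got), want) {
        t.Errorf("got %q, want %q", got, want)
    }
}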

Run this test with:

go test -v -run TestChat_stringComparison ./...

Cosine similarity tests

This test is more robust: it obtains embeddings for the reference answer and the model's answer, computes the cosine similarity between the two vectors, and passes if the result exceeds a threshold defined by the team.
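
The similarity computation itself is only a few lines of Go; here is a sketch of the standard formula, not necessarily the project's exact helper:

package demo

import "math"

// cosineSimilarity returns dot(a, b) / (|a| * |b|) for two
// equal-length embedding vectors; 1.0 means identical direction.
func cosineSimilarity(a, b []float32) float64 {
    var dot, normA, normB float64
    for i := range a {
        dot += float64(a[i]) * float64(b[i])
        normA += float64(a[i]) * float64(a[i])
        normB += float64(b[i]) * float64(b[i])
    }
    return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}

The test would then embed both answers (for example with a langchaingo embeddings.Embedder's EmbedQuery) and assert that the similarity exceeds the chosen threshold.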

Run this test with:

go test -v -run TestChat_embeddings ./...

RAG tests

This test checks the answer using the RAG (Retrieval-Augmented Generation) technique. It creates a Weaviate store holding the content that serves as a reference, uses the vector database's built-in similarity search to retrieve the documents most relevant to the question, and then includes those documents in the prompt the LLM uses to answer.
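
A sketch of the retrieval step using langchaingo's generic vector store interface, which the Weaviate store implements; the prompt wording and document count are invented:

package demo

import (
    "context"
    "strings"

    "github.com/tmc/langchaingo/vectorstores"
)

// buildRAGPrompt retrieves the documents most similar to the question
// and folds them into the prompt sent to the LLM.
func buildRAGPrompt(ctx context.Context, store vectorstores.VectorStore, question string) (string, error) {
    // Let the vector database rank stored documents by similarity.
    docs, err := store.SimilaritySearch(ctx, question, 3)
    if err != nil {
        return "", err
    }
    var sb strings.Builder
    sb.WriteString("Answer using only the following context:\n\n")
    for _, d := range docs {
        sb.WriteString(d.PageContent)
        sb.WriteString("\n---\n")
    }
    sb.WriteString("\nQuestion: " + question)
    return sb.String(), nil
}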

Run this test with:

go test -v -run TestChat_rag ./...

Evaluator tests

This test uses the LLM-as-a-judge pattern to evaluate the accuracy of the answer. It creates an evaluator backed by a second LLM, possibly a different, more specialised model, and drives it with a strict system message and a user message that force it to return a JSON object with the following fields:

  • "provided_answer": the answer to the question
  • "is_correct": true if the answer is correct, false otherwise
  • "reasoning": the reasoning behind the answer

The response should be a valid JSON object.
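
A sketch of how a test might decode and assert that verdict in Go; the helper and variable names are illustrative:

package demo_test

import (
    "encoding/json"
    "testing"
)

// verdict mirrors the JSON object the evaluator LLM must return.
type verdict struct {
    ProvidedAnswer string `json:"provided_answer"`
    IsCorrect      bool   `json:"is_correct"`
    Reasoning      string `json:"reasoning"`
}

// assertVerdict fails the test if the judge's output is not valid
// JSON or the judge considered the answer incorrect.
func assertVerdict(t *testing.T, judgeOutput string) {
    t.Helper()
    var v verdict
    if err := json.Unmarshal([]byte(judgeOutput), &v); err != nil {
        t.Fatalf("judge did not return valid JSON: %v", err)
    }
    if !v.IsCorrect {
        t.Errorf("judge rejected %q: %s", v.ProvidedAnswer, v.Reasoning)
    }
}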

Run this test with:

go test -v -run TestChat_usingEvaluator ./...

🧠 Inference Options

By default, this project uses Docker Model Runner to handle LLM inference locally — no internet connection or external API key is required.
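
Docker Model Runner exposes an OpenAI-compatible API, so langchaingo can reach it through its openai client. Here is a sketch; the endpoint URL and model name are assumptions that depend on your Docker Desktop settings, not values taken from this project:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/tmc/langchaingo/llms"
    "github.com/tmc/langchaingo/llms/openai"
)

func main() {
    // Point the OpenAI-compatible client at the local Model Runner
    // endpoint instead of api.openai.com. No real API key is needed.
    llm, err := openai.New(
        openai.WithBaseURL("http://localhost:12434/engines/v1"),
        openai.WithModel("ai/llama3.2"),
        openai.WithToken("unused"),
    )
    if err != nil {
        log.Fatal(err)
    }
    answer, err := llms.GenerateFromSinglePrompt(context.Background(), llm, "Say hello.")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(answer)
}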

If you’d prefer to use OpenAI instead:

  1. Create a secret.openai-api-key file with your OpenAI API key:

    sk-...
    
  2. Restart the project with the OpenAI configuration:

    docker compose down -v
    docker compose -f compose.yaml -f compose.openai.yaml up
    

❓ What Can It Do?

Ask natural language questions and let Langchaingo + DuckDuckGo Search provide intelligent, real-time answers:

  • “Does Langchaingo support the Model Context Protocol?”
  • “What is the Brave Search API?”
  • “Give me examples of Langchaingo integrations.”

The application uses:

  • An MCP-compatible gateway to route queries to DuckDuckGo Search
  • Langchaingo’s LLM client to embed results into answers
  • An MCP client to call tools, using the Model Context Protocol's Go SDK.
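
The bridge between the two can be pictured as a langchaingo tools.Tool whose Call method forwards to the MCP client. A sketch; the mcpCaller interface, tool name, and argument key are invented stand-ins for the Go SDK's actual client session:

package demo

import "context"

// mcpCaller abstracts the MCP client session provided by the
// Model Context Protocol's Go SDK (simplified here).
type mcpCaller interface {
    CallTool(ctx context.Context, name string, args map[string]any) (string, error)
}

// duckDuckGoTool adapts an MCP tool to langchaingo's tools.Tool
// interface (Name, Description, Call).
type duckDuckGoTool struct {
    client mcpCaller
}

func (d duckDuckGoTool) Name() string { return "duckduckgo_search" }

func (d duckDuckGoTool) Description() string {
    return "Searches the web through the DuckDuckGo MCP server."
}

func (d duckDuckGoTool) Call(ctx context.Context, input string) (string, error) {
    // Route the model's tool invocation through the MCP gateway.
    return d.client.CallTool(ctx, "search", map[string]any{"query": input})
}

Implementing langchaingo's tools.Tool interface is what lets the LLM pipeline invoke the MCP-backed search like any other tool.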

To customize the question asked to the agent, edit the QUESTION environment variable in compose.yaml.

🧱 Project Structure

File/Folder       Purpose
compose.yaml      Launches the DuckDuckGo MCP gateway and the app
Dockerfile        Builds the Go container
main.go           Configures the ChatClient with MCP and runs it
tool_duckduck.go  Implements the DuckDuckGo tool

🔧 Architecture Overview


flowchart TD
    A[($QUESTION)] --> B[Go App]
    B --> C[Langchaingo ChatClient]
    C -->|uses| M[MCP Client]
    M -->|uses| D[MCP Tool Callback]
    D -->|queries| E[Docker MCP Gateway]
    E -->|calls| F[DuckDuckGo Search API]
    F --> E --> D --> C
    C -->|LLM| H[(Docker Model Runner)]
    H --> C
    C --> G[Final Answer]

  • The application loads a question via the QUESTION environment variable.
  • MCP is used as a tool in the LLM pipeline.
  • The response is enriched with real-time DuckDuckGo Search results.

📎 Credits
