Documentation
Overview
This example demonstrates a multi-step LLM pipeline with Go-side gating, a shape that pops up constantly in production agent flows:
- generate — ask the model for an initial draft
- evaluate — ask the model to score the draft 1-10
- gate (Go) — parse the score; decide whether to refine
- refine — if needed, ask the model to tighten the draft
All four calls share one opencode session, so later prompts see the earlier turns as conversation history. That keeps token usage low and ensures the model is evaluating and refining the same draft it wrote a moment ago.
go run ./examples/pipeline
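The control flow above can be sketched as follows. This is a minimal, self-contained sketch: the `session` type and its stubbed `prompt` method stand in for the real opencode session (not reproduced here), while `parseScore` shows the Go-side gate that parses the model's 1-10 score and decides whether to run the refine step.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// session stands in for a shared opencode session: every prompt is
// appended to the same history, so later calls see earlier turns.
// The canned replies below are stubs so the sketch runs standalone.
type session struct{ history []string }

func (s *session) prompt(msg string) string {
	s.history = append(s.history, msg)
	switch {
	case strings.HasPrefix(msg, "Draft:"):
		return "An initial draft."
	case strings.HasPrefix(msg, "Score"):
		return "6"
	default:
		return "A tighter draft."
	}
}

// parseScore extracts the leading integer from the model's reply,
// tolerating trailing text such as "7/10" or "7. Decent work".
func parseScore(reply string) (int, error) {
	t := strings.TrimSpace(reply)
	i := 0
	for i < len(t) && t[i] >= '0' && t[i] <= '9' {
		i++
	}
	if i == 0 {
		return 0, fmt.Errorf("no leading number in %q", reply)
	}
	return strconv.Atoi(t[:i])
}

func main() {
	s := &session{}

	draft := s.prompt("Draft: write a short product blurb") // generate
	reply := s.prompt("Score the draft 1-10, number first") // evaluate

	score, err := parseScore(reply) // gate: plain Go, not the model
	if err != nil {
		panic(err)
	}
	if score < 8 {
		draft = s.prompt("Refine the draft; tighten the prose") // refine
	}
	fmt.Printf("score=%d draft=%q\n", score, draft)
}
```

The gate lives in Go rather than in a prompt so the threshold, the error handling, and the decision to skip refinement are deterministic and testable.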