Documentation ¶
Index ¶
- func DailyPrompt(observations []string, date, guidelines string, factPaths ...string) string
- func DiscussionFactsPrompt(title, summary, transcript, guidelines, annotations string) string
- func MonthlyPrompt(weeklySummaries []string, month, guidelines string) string
- func WeeklyPrompt(dailySummaries []string, weekID, guidelines string) string
- func WriteGuidelines(sb *strings.Builder, guidelines, stage string)
- type Backend
- type Claude
Functions ¶
func DailyPrompt ¶
DailyPrompt builds a prompt for distilling observations into a daily memory summary. If guidelines is non-empty, it is prepended as team-specific distillation preferences. Optional factPaths are relative file paths to fact JSONL files (discussion, github, etc.) that the AI coworker should read and incorporate into the summary.
func DiscussionFactsPrompt ¶
DiscussionFactsPrompt builds a prompt for extracting structured facts from a discussion. The LLM extracts decisions, learnings, open questions, action items, and key context as JSONL output (one JSON object per line). When annotations is non-empty, server-extracted annotations are included as ground truth.
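The JSONL contract (one JSON object per line) makes the LLM's output easy to consume with a streaming decoder. A sketch of a consumer, assuming a simplified fact schema — the `type`/`text` fields are illustrative, not the package's actual format:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// fact is an assumed, simplified schema for one extracted fact.
type fact struct {
	Type string `json:"type"` // e.g. "decision", "learning", "open_question"
	Text string `json:"text"`
}

// parseFacts decodes JSONL: a stream of top-level JSON objects, one per line.
func parseFacts(jsonl string) ([]fact, error) {
	dec := json.NewDecoder(strings.NewReader(jsonl))
	var facts []fact
	for {
		var f fact
		if err := dec.Decode(&f); err == io.EOF {
			break
		} else if err != nil {
			return nil, err
		}
		facts = append(facts, f)
	}
	return facts, nil
}

func main() {
	out := `{"type":"decision","text":"use JSONL for facts"}
{"type":"learning","text":"one object per line is easy to append"}`
	facts, err := parseFacts(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(facts), facts[0].Type)
}
```

Appending new facts is then a plain file append, with no need to rewrite an enclosing JSON array.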
func MonthlyPrompt ¶
MonthlyPrompt builds a prompt for synthesizing weekly summaries into a monthly memory.
func WeeklyPrompt ¶
WeeklyPrompt builds a prompt for synthesizing daily summaries into a weekly memory.
func WriteGuidelines ¶ added in v0.6.0
WriteGuidelines wraps team guidelines in a <team-guidelines> tag and tells the LLM which pipeline stage it's in. The stage identifier lets the guidelines reference optional per-stage override files via progressive disclosure (e.g., "if extracting discussions, read EXTRACT-discussions.md"). The LLM MAY use the Read tool to access files in memory/guidance/.
Types ¶
type Backend ¶
type Backend interface {
	// Name returns the backend identifier (e.g., "claude").
	Name() string
	// Available checks if the CLI exists in PATH.
	Available() bool
	// Run sends a prompt and returns the text output.
	Run(ctx context.Context, prompt string) (string, error)
}
Backend represents an AI agent CLI that can process prompts.
type Claude ¶
type Claude struct {
	// Timeout for the claude process. Zero means no timeout (uses ctx).
	Timeout time.Duration

	// WorkDir sets the working directory for the claude process.
	// When set, relative file paths in prompts resolve from this directory.
	WorkDir string

	// Model overrides the default model (e.g., "sonnet" or "claude-sonnet-4-6").
	// Empty string uses the claude CLI default.
	Model string
}
Claude implements Backend using the claude CLI in pipe mode.