llm

package
v1.40.1
Published: Feb 20, 2026 License: Apache-2.0 Imports: 11 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func BuildSQLPrompt

func BuildSQLPrompt(
	tables []TableInfo,
	now time.Time,
	columnHints string,
	extraContext string,
) string

BuildSQLPrompt creates a system prompt that instructs the LLM to translate a natural-language question into a single SELECT statement. The prompt includes the current date, the full schema as DDL, and few-shot examples. If extraContext is non-empty, it's appended at the end.

func BuildSummaryPrompt

func BuildSummaryPrompt(
	question, sql, resultsTable string,
	now time.Time,
	extraContext string,
) string

BuildSummaryPrompt creates a system prompt for the second stage: turning SQL results into a concise natural-language answer. If extraContext is non-empty, it's appended at the end.

func BuildSystemPrompt

func BuildSystemPrompt(
	tables []TableInfo,
	dataSummary string,
	now time.Time,
	extraContext string,
) string

BuildSystemPrompt assembles the old single-stage system prompt, used as a fallback when the two-stage pipeline fails. If extraContext is non-empty, it's appended at the end.

func ExtractSQL

func ExtractSQL(raw string) string

ExtractSQL pulls the SQL statement from the LLM's response, handling both bare SQL and fenced code blocks. Returns the trimmed SQL string.
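The fence-stripping behavior can be sketched as follows. extractSQL below is an illustrative stand-in under assumed semantics, not the package's implementation (the fence marker is built from escapes to keep the example free of literal backticks):

```go
package main

import (
	"fmt"
	"strings"
)

// fence is the Markdown code-fence marker, written with escapes.
const fence = "\x60\x60\x60"

// extractSQL is an illustrative stand-in for ExtractSQL: it strips an
// optional fenced code block and trims surrounding whitespace.
func extractSQL(raw string) string {
	s := strings.TrimSpace(raw)
	if strings.HasPrefix(s, fence) {
		if i := strings.Index(s, "\n"); i >= 0 {
			s = s[i+1:] // drop the opening fence line (with optional language tag)
		}
		if j := strings.LastIndex(s, fence); j >= 0 {
			s = s[:j] // drop the closing fence
		}
	}
	return strings.TrimSpace(s)
}

func main() {
	raw := fence + "sql\nSELECT id FROM orders;\n" + fence
	fmt.Println(extractSQL(raw))
	fmt.Println(extractSQL("  SELECT 1  "))
}
```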

func FormatResultsTable

func FormatResultsTable(columns []string, rows [][]string) string

FormatResultsTable renders query results as a pipe-delimited text table, compact enough for an LLM context window.
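A minimal sketch of the pipe-delimited format, assuming one header line followed by one line per row (formatResultsTable here is an illustrative stand-in, not the package's code):

```go
package main

import (
	"fmt"
	"strings"
)

// formatResultsTable is an illustrative stand-in for FormatResultsTable:
// the header and each row become single pipe-delimited lines.
func formatResultsTable(columns []string, rows [][]string) string {
	var b strings.Builder
	b.WriteString(strings.Join(columns, " | ") + "\n")
	for _, row := range rows {
		b.WriteString(strings.Join(row, " | ") + "\n")
	}
	return b.String()
}

func main() {
	out := formatResultsTable(
		[]string{"region", "total"},
		[][]string{{"east", "1200"}, {"west", "950"}},
	)
	fmt.Print(out)
}
```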

func FormatSQL

func FormatSQL(sql string, maxWidth int) string

FormatSQL pretty-prints a SQL SELECT statement for human reading. It tokenizes the input, uppercases keywords, aligns clauses, indents continuation lines, and wraps SELECT column lists one-per-line when there are multiple columns. maxWidth controls soft line wrapping; pass 0 to disable wrapping.

Types

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client talks to an OpenAI-compatible API endpoint (Ollama, llama.cpp, LM Studio, etc.) for local LLM inference.

func NewClient

func NewClient(baseURL, model string, timeout time.Duration) *Client

NewClient creates an LLM client targeting the given OpenAI-compatible endpoint and model. The timeout controls quick operations like ping and model listing.

func (*Client) BaseURL

func (c *Client) BaseURL() string

BaseURL returns the configured base URL.

func (*Client) ChatComplete

func (c *Client) ChatComplete(
	ctx context.Context,
	messages []Message,
) (string, error)

ChatComplete sends a non-streaming chat completion request and returns the full response content. Used for structured output like SQL generation where the caller needs the complete result before proceeding.

func (*Client) ChatStream

func (c *Client) ChatStream(
	ctx context.Context,
	messages []Message,
) (<-chan StreamChunk, error)

ChatStream sends a chat completion request and returns a channel that emits streamed response chunks. The channel closes when the response completes or the context is cancelled. Callers must drain the channel.

func (*Client) ListModels

func (c *Client) ListModels(ctx context.Context) ([]string, error)

ListModels fetches the available model IDs from the inference server.

func (*Client) Model

func (c *Client) Model() string

Model returns the configured model name.

func (*Client) Ping

func (c *Client) Ping(ctx context.Context) error

Ping checks whether the API is reachable and the configured model is available. Returns a user-friendly error if not.

func (*Client) PullModel

func (c *Client) PullModel(ctx context.Context, model string) (*PullScanner, error)

PullModel initiates a model pull via the Ollama native API. The base URL is assumed to be the OpenAI-compatible endpoint (e.g. "http://localhost:11434/v1"); this method strips "/v1" to reach the Ollama native API at "/api/pull".

func (*Client) SetModel

func (c *Client) SetModel(model string)

SetModel switches the active model. The caller is responsible for verifying the model exists (e.g. via ListModels).

func (*Client) Timeout added in v1.32.0

func (c *Client) Timeout() time.Duration

Timeout returns the configured timeout for quick operations.

type ColumnInfo

type ColumnInfo struct {
	Name    string
	Type    string
	NotNull bool
	PK      bool
}

ColumnInfo describes a single column in a table.

type Message

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

Message represents a single turn in the conversation.

type PullChunk

type PullChunk struct {
	Status    string `json:"status"`
	Digest    string `json:"digest"`
	Total     int64  `json:"total"`
	Completed int64  `json:"completed"`
	Error     string `json:"error"` // Ollama streams errors in this field
}

PullChunk is a single progress update from the Ollama pull API.

type PullScanner

type PullScanner struct {
	// contains filtered or unexported fields
}

PullScanner wraps the streaming response from the Ollama pull API.

func (*PullScanner) Next

func (ps *PullScanner) Next() (*PullChunk, error)

Next returns the next progress chunk, or nil at EOF.

type StreamChunk

type StreamChunk struct {
	Content string
	Done    bool
	Err     error
}

StreamChunk is a single piece of a streaming chat response.

type TableInfo

type TableInfo struct {
	Name    string
	Columns []ColumnInfo
}

TableInfo describes a database table for context injection.
