processing

package
v0.0.22
Published: Jan 2, 2025 License: Apache-2.0 Imports: 7 Imported by: 0

Documentation

Overview


Package processing provides request processing and response formatting for LLM interactions. It handles template-based request transformation, LLM communication, and response formatting.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Message

type Message struct {
	Role    string `json:"role"`    // Role of the message sender (e.g., "user", "assistant")
	Content string `json:"content"` // The actual message content
}

Message represents a single message in a conversation. This follows the standard chat format used by most LLM providers, where each message has a role (e.g., "user", "assistant", "system") and content (the actual message text).

type Processor

type Processor struct {
	// contains filtered or unexported fields
}

Processor handles request processing and response formatting for LLM interactions. It uses Go templates to transform incoming requests into LLM-compatible formats, communicates with the LLM, and formats the responses according to configuration.

Key features:
  - Template-based request transformation
  - Configurable response formatting
  - Support for both simple and chat completions
  - System prompt management

The Processor is designed to be reusable across different request types while maintaining consistent formatting and error handling.

func NewProcessor

func NewProcessor(cfg *config.ProcessingConfig, llm gollm.LLM) (*Processor, error)

NewProcessor creates a new processor instance with the given configuration and LLM. It validates the configuration and pre-compiles all templates for efficiency.

Parameters:
  - cfg: Processing configuration including templates and formatting options
  - llm: LLM instance to use for text generation

Returns:
  - A new Processor instance and nil error if successful
  - nil and an error if the configuration is invalid or template compilation fails

The processor will fail fast if any templates are invalid, preventing runtime errors.

func (*Processor) ProcessRequest

func (p *Processor) ProcessRequest(ctx context.Context, req *Request) (*Response, error)

ProcessRequest handles the end-to-end processing of a request:
  1. Validates the request
  2. Selects and executes the appropriate template
  3. Creates an LLM prompt with system context
  4. Sends the request to the LLM
  5. Formats the response according to configuration

Parameters:
  - ctx: Context for the request, used for cancellation and timeouts
  - req: The request to process, containing type and input data

Returns:
  - Formatted response and nil error if successful
  - nil and an error if any step fails

The processor will use the "default" template if no matching template is found for the request type.

func (*Processor) SetDefaultPrompt

func (p *Processor) SetDefaultPrompt(prompt string)

SetDefaultPrompt sets the system prompt to be used for all requests. This prompt provides context and instructions to the LLM.

type Request

type Request struct {
	Type     string    `json:"type"`               // Type of request, used to select a template (e.g., "default", "chat")
	Input    string    `json:"input"`              // Used for simple completion requests
	Messages []Message `json:"messages,omitempty"` // Used for chat completion requests
	// FunctionDescription is used for function-calling requests
	FunctionDescription string `json:"function_description,omitempty"`
}

Request represents an incoming request to the LLM service. It supports two main types of requests:
  1. Simple completion: uses the Input field with a default template
  2. Chat completion: uses the Messages field with a chat template

The Type field determines which template is used to format the request. This allows for flexible request handling while maintaining a consistent interface with the LLM.

type Response

type Response struct {
	Content string `json:"content"`         // The processed response content
	Error   string `json:"error,omitempty"` // Any error information
}

Response represents the processed output from the LLM. It contains the formatted content after applying any configured transformations (e.g., JSON cleaning, whitespace trimming, length limits).

Future extensions might include:
  - Metadata about the processing (e.g., truncation info)
  - Multiple response formats (e.g., text, structured data)
  - Usage statistics (tokens, processing time)
