Documentation
Overview
Package prompt provides shared types and utilities for working with .prompt.yml files.
Index
Constants
This section is empty.
Variables
This section is empty.
Functions
func GetAzureChatMessageRole
func GetAzureChatMessageRole(role string) (azuremodels.ChatMessageRole, error)
GetAzureChatMessageRole converts a role string to an azuremodels.ChatMessageRole.
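The real conversion lives in the azuremodels package, which is not shown here. A minimal sketch of what such a role mapping might look like, with a local ChatMessageRole type standing in for azuremodels.ChatMessageRole:

```go
package main

import (
	"fmt"
	"strings"
)

// ChatMessageRole stands in for azuremodels.ChatMessageRole;
// the real type and its constants live in the azuremodels package.
type ChatMessageRole string

const (
	RoleSystem    ChatMessageRole = "system"
	RoleUser      ChatMessageRole = "user"
	RoleAssistant ChatMessageRole = "assistant"
)

// getRole sketches what GetAzureChatMessageRole might do: map a
// case-insensitive role string onto a known role, or return an error.
func getRole(role string) (ChatMessageRole, error) {
	switch strings.ToLower(role) {
	case "system":
		return RoleSystem, nil
	case "user":
		return RoleUser, nil
	case "assistant":
		return RoleAssistant, nil
	default:
		return "", fmt.Errorf("unknown role: %q", role)
	}
}

func main() {
	r, err := getRole("User")
	fmt.Println(r, err) // prints "user <nil>"
}
```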
func TemplateString
TemplateString templates a string with the given data using simple {{variable}} replacement.
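A sketch of the {{variable}} replacement described above, assuming string-valued data (the real TemplateString's exact signature is not shown on this page):

```go
package main

import (
	"fmt"
	"strings"
)

// templateString sketches simple {{variable}} replacement: every
// occurrence of {{key}} in the template is replaced with data[key].
func templateString(tmpl string, data map[string]string) string {
	for key, value := range data {
		tmpl = strings.ReplaceAll(tmpl, "{{"+key+"}}", value)
	}
	return tmpl
}

func main() {
	out := templateString("Hello {{name}}, welcome to {{place}}!",
		map[string]string{"name": "Ada", "place": "the CLI"})
	fmt.Println(out) // prints "Hello Ada, welcome to the CLI!"
}
```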
Types
type Evaluator
type Evaluator struct {
	Name   string           `yaml:"name"`
	String *StringEvaluator `yaml:"string,omitempty"`
	LLM    *LLMEvaluator    `yaml:"llm,omitempty"`
	Uses   string           `yaml:"uses,omitempty"`
}
Evaluator represents an evaluation method (only used by the eval command).
type File
type File struct {
	Name            string          `yaml:"name"`
	Description     string          `yaml:"description"`
	Model           string          `yaml:"model"`
	ModelParameters ModelParameters `yaml:"modelParameters"`
	ResponseFormat  *string         `yaml:"responseFormat,omitempty"`
	JsonSchema      *JsonSchema     `yaml:"jsonSchema,omitempty"`
	Messages        []Message       `yaml:"messages"`
	// TestData and Evaluators are only used by eval command
	TestData   []map[string]interface{} `yaml:"testData,omitempty"`
	Evaluators []Evaluator              `yaml:"evaluators,omitempty"`
}
File represents the structure of a .prompt.yml file.
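A sketch of a .prompt.yml file that would populate the fields above. The values, and the field names inside messages, are invented for illustration; they are not taken from this page:

```yaml
# Hypothetical .prompt.yml matching the File struct above.
name: summarizer
description: Summarize the given text in one sentence
model: openai/gpt-4o
modelParameters:
  maxTokens: 200
  temperature: 0.2
  topP: 1.0
messages:
  - role: system
    content: You are a concise summarizer.
  - role: user
    content: "Summarize: {{input}}"
# testData and evaluators are only used by the eval command.
testData:
  - input: "A long passage of text"
```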
func LoadFromFile
LoadFromFile loads and parses a prompt file from the given path.
func (*File) BuildChatCompletionOptions
func (f *File) BuildChatCompletionOptions(messages []azuremodels.ChatMessage) azuremodels.ChatCompletionOptions
BuildChatCompletionOptions creates a ChatCompletionOptions with the file's model and parameters.
type JsonSchema added in v0.0.20
JsonSchema represents a JSON schema for structured responses.
func (*JsonSchema) UnmarshalYAML added in v0.0.22
func (js *JsonSchema) UnmarshalYAML(node *yaml.Node) error
UnmarshalYAML implements custom YAML unmarshaling for JsonSchema. Only the JSON string format is supported.
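The JSON-string-only constraint means the YAML value for jsonSchema is a scalar string that is then parsed as JSON. A sketch of that parsing step, with a generic map standing in for the real JsonSchema fields (which are not shown on this page) and a plain string standing in for the *yaml.Node:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// JsonSchema stands in for prompt.JsonSchema; its real fields
// are not shown here, so a generic map is used.
type JsonSchema map[string]interface{}

// unmarshalSchemaString mimics the documented behavior: the YAML
// value must be a JSON string, which is parsed with encoding/json.
func unmarshalSchemaString(raw string) (JsonSchema, error) {
	var js JsonSchema
	if err := json.Unmarshal([]byte(raw), &js); err != nil {
		return nil, fmt.Errorf("jsonSchema must be a JSON string: %w", err)
	}
	return js, nil
}

func main() {
	js, err := unmarshalSchemaString(`{"type": "object", "properties": {"answer": {"type": "string"}}}`)
	fmt.Println(js["type"], err) // prints "object <nil>"
}
```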
type LLMEvaluator
type LLMEvaluator struct {
	ModelID      string   `yaml:"modelId"`
	Prompt       string   `yaml:"prompt"`
	Choices      []Choice `yaml:"choices"`
	SystemPrompt string   `yaml:"systemPrompt,omitempty"`
}
LLMEvaluator represents LLM-based evaluation.
type ModelParameters
type ModelParameters struct {
	MaxTokens   *int     `yaml:"maxTokens"`
	Temperature *float64 `yaml:"temperature"`
	TopP        *float64 `yaml:"topP"`
}
ModelParameters represents model configuration parameters.
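The pointer fields let the YAML layer distinguish a parameter that was never set (nil) from one explicitly set to zero, which matters when deciding whether to send the parameter to the model at all. A minimal illustration of that pattern:

```go
package main

import "fmt"

// ModelParameters mirrors the struct above: pointer fields
// distinguish "not set in YAML" (nil) from an explicit zero value.
type ModelParameters struct {
	MaxTokens   *int
	Temperature *float64
	TopP        *float64
}

func main() {
	temp := 0.0 // an explicitly requested temperature of 0
	params := ModelParameters{Temperature: &temp}

	if params.MaxTokens == nil {
		fmt.Println("maxTokens: not set") // printed: field was left nil
	}
	if params.Temperature != nil {
		fmt.Printf("temperature: %v\n", *params.Temperature) // prints "temperature: 0"
	}
}
```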