Documentation ¶
Overview ¶
Package sampling provides types and options for LLM sampling via the MCP protocol.
Index ¶
- func ApplyOptions(opts []MessageOption) (*messageConfig, error)
- type Message
- type MessageOption
- func WithCostPriority(priority float64) MessageOption
- func WithIncludeContext(context string) MessageOption
- func WithIntelligencePriority(priority float64) MessageOption
- func WithMaxTokens(maxTokens int) MessageOption
- func WithMetadata(metadata any) MessageOption
- func WithModelHints(hints ...string) MessageOption
- func WithModelPreferences(prefs *mcp.ModelPreferences) MessageOption
- func WithSpeedPriority(priority float64) MessageOption
- func WithStopSequences(sequences ...string) MessageOption
- func WithSystemPrompt(prompt string) MessageOption
- func WithTemperature(temp float64) MessageOption
- type MessageResult
- type TextContent
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ApplyOptions ¶
func ApplyOptions(opts []MessageOption) (*messageConfig, error)
ApplyOptions applies all the given options to create a messageConfig.
Types ¶
type MessageOption ¶
type MessageOption func(*messageConfig) error
MessageOption is a functional option for configuring CreateMessage requests.
func WithCostPriority ¶
func WithCostPriority(priority float64) MessageOption
WithCostPriority sets how much to prioritize cost when selecting a model. Value must be between 0.0 (cost not important) and 1.0 (cost most important).
func WithIncludeContext ¶
func WithIncludeContext(context string) MessageOption
WithIncludeContext requests to include context from one or more MCP servers. Valid values are "none", "thisServer", or "allServers". The client may ignore this request.
func WithIntelligencePriority ¶
func WithIntelligencePriority(priority float64) MessageOption
WithIntelligencePriority sets how much to prioritize intelligence when selecting a model. Value must be between 0.0 (intelligence not important) and 1.0 (intelligence most important).
func WithMaxTokens ¶
func WithMaxTokens(maxTokens int) MessageOption
WithMaxTokens sets the maximum number of tokens to generate. If not specified, defaults to 1000.
func WithMetadata ¶
func WithMetadata(metadata any) MessageOption
WithMetadata sets provider-specific metadata to pass through to the LLM. The format of this metadata is provider-specific.
func WithModelHints ¶
func WithModelHints(hints ...string) MessageOption
WithModelHints provides hints for model selection. Hints are treated as substrings of model names. For example, "claude-3-5-sonnet" matches "claude-3-5-sonnet-20241022".
func WithModelPreferences ¶
func WithModelPreferences(prefs *mcp.ModelPreferences) MessageOption
WithModelPreferences sets the complete model preferences for model selection. Use this for full control, or use the individual priority functions below.
func WithSpeedPriority ¶
func WithSpeedPriority(priority float64) MessageOption
WithSpeedPriority sets how much to prioritize speed (latency) when selecting a model. Value must be between 0.0 (speed not important) and 1.0 (speed most important).
func WithStopSequences ¶
func WithStopSequences(sequences ...string) MessageOption
WithStopSequences sets sequences where the LLM should stop generating.
func WithSystemPrompt ¶
func WithSystemPrompt(prompt string) MessageOption
WithSystemPrompt sets an optional system prompt for the LLM. The client may modify or omit this prompt.
func WithTemperature ¶
func WithTemperature(temp float64) MessageOption
WithTemperature sets the sampling temperature. Valid range is 0.0 to 2.0. Higher values make output more random.
type MessageResult ¶
type MessageResult struct {
	Role    string
	Content TextContent
}
MessageResult represents the result of an LLM sampling request.
type TextContent ¶
type TextContent struct {
	Text string
}
TextContent represents text content in a message.