conversation

package
v1.17.3
Published: Mar 27, 2026 License: Apache-2.0 Imports: 13 Imported by: 18

README

Conversation

Conversations provide a common way to converse with different LLM providers.

Documentation

Overview

Copyright 2024 The Dapr Authors Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


Copyright 2025 The Dapr Authors Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


Index

Constants

View Source
const (
	DefaultOpenAIModel      = "gpt-5-nano"   // Enable GPT-5 (Preview) for all clients
	DefaultAzureOpenAIModel = "gpt-4.1-nano" // Default Azure OpenAI model
	DefaultAnthropicModel   = "claude-sonnet-4-20250514"
	DefaultGoogleAIModel    = "gemini-2.5-flash-lite"
	DefaultMistralModel     = "open-mistral-7b"
	DefaultHuggingFaceModel = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
	DefaultOllamaModel      = "llama3.2:latest"
)

Exported default model constants for consumers of the conversation package. These are used as fallbacks when env vars and metadata are not set.

Variables

This section is empty.

Functions

func BuildHTTPClient added in v1.17.0

func BuildHTTPClient() *http.Client

BuildHTTPClient creates an HTTP client with its timeout set to 0 so that context deadlines govern cancellation instead. The context deadline is respected via http.NewRequestWithContext within Langchain. This allows resiliency policy timeouts from the runtime to propagate through the HTTP client to the LLM provider.
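A local sketch (not the package's code) of why the zero timeout matters: with Timeout set to 0, the only deadline in play is the one carried by the request's context.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// buildHTTPClient mirrors the documented behavior: a zero Timeout means
// the client itself never cancels a request; the per-request context
// deadline (set upstream by a resiliency policy) does that instead.
func buildHTTPClient() *http.Client {
	return &http.Client{Timeout: 0}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// The deadline travels with the request, as it would inside Langchain
	// via http.NewRequestWithContext.
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com", nil)
	if err != nil {
		panic(err)
	}
	_, hasDeadline := req.Context().Deadline()
	fmt.Println(buildHTTPClient().Timeout == 0, hasDeadline)
}
```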

func BuildOpenAIClientOptions added in v1.17.0

func BuildOpenAIClientOptions(model, key, endpoint string) []openai.Option

BuildOpenAIClientOptions is a helper used by conversation components that rely on the OpenAI client under the hood. The HTTP client timeout is taken from the resiliency policy configuration.
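The real helper returns a []openai.Option from langchaingo; the local sketch below only illustrates the functional-options pattern such a helper follows (the config fields and with* names here are illustrative, not the real API).

```go
package main

import "fmt"

// config and option sketch the shape of a client option: a function
// that mutates client configuration.
type config struct{ model, token, baseURL string }

type option func(*config)

func withModel(m string) option   { return func(c *config) { c.model = m } }
func withToken(t string) option   { return func(c *config) { c.token = t } }
func withBaseURL(u string) option { return func(c *config) { c.baseURL = u } }

// buildClientOptions sketches what BuildOpenAIClientOptions plausibly
// does: fold model, key, and an optional endpoint into an options slice.
func buildClientOptions(model, key, endpoint string) []option {
	opts := []option{withModel(model), withToken(key)}
	if endpoint != "" {
		opts = append(opts, withBaseURL(endpoint))
	}
	return opts
}

func main() {
	var c config
	for _, o := range buildClientOptions("gpt-5-nano", "sk-test", "http://localhost:11434") {
		o(&c)
	}
	fmt.Println(c.model, c.baseURL)
}
```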

func CacheResponses added in v1.17.0

func CacheResponses(ctx context.Context, ttl *time.Duration, model llms.Model) (llms.Model, error)

CacheResponses creates a response cache with a configured TTL. This caches the final LLM responses (outputs) based on the input messages and call options. When the same prompt with the same options is requested, the cached response is returned without making an API call to the LLM provider, reducing latency and cost.

func GetAnthropicModel added in v1.16.1

func GetAnthropicModel(metadataValue string) string

func GetAzureOpenAIModel added in v1.16.1

func GetAzureOpenAIModel(metadataValue string) string

func GetGoogleAIModel added in v1.16.1

func GetGoogleAIModel(metadataValue string) string

func GetHuggingFaceModel added in v1.16.1

func GetHuggingFaceModel(metadataValue string) string

func GetMistralModel added in v1.16.1

func GetMistralModel(metadataValue string) string

func GetOllamaModel added in v1.16.1

func GetOllamaModel(metadataValue string) string

func GetOpenAIModel added in v1.16.1

func GetOpenAIModel(metadataValue string) string

Usage note for the model getters: pass metadataValue from your metadata file or struct, or "" if it is not set.

Types

type Choice added in v1.16.0

type Choice struct {
	FinishReason string  `json:"finishReason"`
	Index        int64   `json:"index"`
	Message      Message `json:"message"`
}

type CompletionTokensDetails added in v1.17.0

type CompletionTokensDetails struct {
	AcceptedPredictionTokens uint64 `json:"acceptedPredictionTokens"`
	AudioTokens              uint64 `json:"audioTokens"`
	ReasoningTokens          uint64 `json:"reasoningTokens"`
	RejectedPredictionTokens uint64 `json:"rejectedPredictionTokens"`
}

CompletionTokensDetails provides a breakdown of completion tokens

type Conversation

type Conversation interface {
	metadata.ComponentWithMetadata

	Init(ctx context.Context, meta Metadata) error

	Converse(ctx context.Context, req *Request) (*Response, error)

	io.Closer
}

type LangchainMetadata

type LangchainMetadata struct {
	Key              string         `json:"key" mapstructure:"key"`
	Model            string         `json:"model" mapstructure:"model"`
	ResponseCacheTTL *time.Duration `json:"responseCacheTTL,omitempty" mapstructure:"responseCacheTTL" mapstructurealiases:"cacheTTL"`
	Endpoint         string         `json:"endpoint" mapstructure:"endpoint"`
}

LangchainMetadata is a common metadata structure for langchain supported implementations.

type Message added in v1.16.0

type Message struct {
	Content         string           `json:"content,omitempty"`
	ToolCallRequest *[]llms.ToolCall `json:"toolCallRequest,omitempty"`
}

Message represents the content of a choice, which can be either a text message or a tool call.

type Metadata

type Metadata struct {
	metadata.Base `json:",inline"`
}

Metadata represents a set of conversation specific properties.

type PromptTokensDetails added in v1.17.0

type PromptTokensDetails struct {
	AudioTokens  uint64 `json:"audioTokens"`
	CachedTokens uint64 `json:"cachedTokens"`
}

PromptTokensDetails provides a breakdown of prompt tokens

type Request added in v1.16.0

type Request struct {
	// Message can be user input prompt/instructions and/or tool call responses.
	Message     *[]llms.MessageContent
	Tools       *[]llms.Tool
	ToolChoice  *string
	Temperature float64 `json:"temperature"`

	// Metadata fields that are separate from the actual component metadata fields
	// that get passed to the LLM through the conversation.
	// https://github.com/openai/openai-go/blob/main/chatcompletion.go#L3010
	Metadata                   map[string]string `json:"metadata"`
	ResponseFormatAsJSONSchema map[string]any    `json:"responseFormatAsJsonSchema"`
	PromptCacheRetention       *time.Duration    `json:"promptCacheRetention,omitempty"`

	// TODO: rm these in future PR as they are not used
	Parameters          map[string]*anypb.Any `json:"parameters"`
	ConversationContext string                `json:"conversationContext"`
}

type Response added in v1.16.0

type Response struct {
	Outputs             []Result `json:"outputs"`
	Model               string   `json:"model"`
	ConversationContext string   `json:"conversationContext,omitempty"`
	Usage               *Usage   `json:"usage,omitempty"`
}

type Result added in v1.16.0

type Result struct {
	StopReason string   `json:"stopReason"`
	Choices    []Choice `json:"choices,omitempty"`
}

type Usage added in v1.17.0

type Usage struct {
	CompletionTokens        uint64                   `json:"completionTokens"`
	PromptTokens            uint64                   `json:"promptTokens"`
	TotalTokens             uint64                   `json:"totalTokens"`
	CompletionTokensDetails *CompletionTokensDetails `json:"completionTokensDetails,omitempty"`
	PromptTokensDetails     *PromptTokensDetails     `json:"promptTokensDetails,omitempty"`
}

Usage represents token usage statistics for a completion request
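Because the struct carries explicit JSON tags, a serialized usage payload can be decoded with the standard library alone. A local sketch mirroring the tag names above (the parseUsage helper is illustrative, not part of the package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// usage mirrors the JSON field names on the Usage struct above.
type usage struct {
	CompletionTokens uint64 `json:"completionTokens"`
	PromptTokens     uint64 `json:"promptTokens"`
	TotalTokens      uint64 `json:"totalTokens"`
}

func parseUsage(raw []byte) (usage, error) {
	var u usage
	err := json.Unmarshal(raw, &u)
	return u, err
}

func main() {
	u, err := parseUsage([]byte(`{"completionTokens":42,"promptTokens":100,"totalTokens":142}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(u.TotalTokens == u.PromptTokens+u.CompletionTokens)
}
```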

Directories

Path Synopsis
aws
