Documentation ¶
Overview ¶
Package openinference provides OpenInference semantic conventions for OpenTelemetry tracing.
Index ¶
- Constants
- func ChoiceTextAttribute(index int) string
- func EmbeddingTextAttribute(index int) string
- func EmbeddingVectorAttribute(index int) string
- func InputMessageAttribute(index int, suffix string) string
- func InputMessageContentAttribute(messageIndex, contentIndex int, suffix string) string
- func OutputMessageAttribute(index int, suffix string) string
- func OutputMessageToolCallAttribute(messageIndex, toolCallIndex int, suffix string) string
- func PromptTextAttribute(index int) string
- func RequireAttributesEqual(t *testing.T, expected, actual []attribute.KeyValue)
- func RequireEventsEqual(t *testing.T, expected, actual []trace.Event)
- type TraceConfig
Constants ¶
const (
    // EnvHideLLMInvocationParameters is the environment variable for TraceConfig.HideLLMInvocationParameters.
    EnvHideLLMInvocationParameters = "OPENINFERENCE_HIDE_LLM_INVOCATION_PARAMETERS"

    // EnvHideInputs is the environment variable for TraceConfig.HideInputs.
    EnvHideInputs = "OPENINFERENCE_HIDE_INPUTS"

    // EnvHideOutputs is the environment variable for TraceConfig.HideOutputs.
    EnvHideOutputs = "OPENINFERENCE_HIDE_OUTPUTS"

    // EnvHideInputMessages is the environment variable for TraceConfig.HideInputMessages.
    EnvHideInputMessages = "OPENINFERENCE_HIDE_INPUT_MESSAGES"

    // EnvHideOutputMessages is the environment variable for TraceConfig.HideOutputMessages.
    EnvHideOutputMessages = "OPENINFERENCE_HIDE_OUTPUT_MESSAGES"

    // EnvHideInputImages is the environment variable for TraceConfig.HideInputImages.
    EnvHideInputImages = "OPENINFERENCE_HIDE_INPUT_IMAGES"

    // EnvHideInputText is the environment variable for TraceConfig.HideInputText.
    EnvHideInputText = "OPENINFERENCE_HIDE_INPUT_TEXT"

    // EnvHideOutputText is the environment variable for TraceConfig.HideOutputText.
    EnvHideOutputText = "OPENINFERENCE_HIDE_OUTPUT_TEXT"

    // EnvHideEmbeddingsText is the environment variable for TraceConfig.HideEmbeddingsText.
    EnvHideEmbeddingsText = "OPENINFERENCE_HIDE_EMBEDDINGS_TEXT"

    // EnvHideEmbeddingsVectors is the environment variable for TraceConfig.HideEmbeddingsVectors.
    EnvHideEmbeddingsVectors = "OPENINFERENCE_HIDE_EMBEDDINGS_VECTORS"

    // EnvBase64ImageMaxLength is the environment variable for TraceConfig.Base64ImageMaxLength.
    EnvBase64ImageMaxLength = "OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH"

    // EnvHidePrompts is the environment variable for TraceConfig.HidePrompts.
    // Hides LLM prompts (completions API).
    // See: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md
    EnvHidePrompts = "OPENINFERENCE_HIDE_PROMPTS"

    // EnvHideChoices is the environment variable for TraceConfig.HideChoices.
    // Hides LLM choices (completions API outputs).
    // See: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md
    EnvHideChoices = "OPENINFERENCE_HIDE_CHOICES"
)
Environment variable names for trace configuration following Python OpenInference conventions. These environment variables control the privacy and observability settings for OpenInference tracing. See: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md
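For example, a caller can opt into redaction before instrumentation starts. A minimal sketch (the import path is assumed here, since it is not shown on this page):

    package main

    import (
        "fmt"
        "os"

        "example.com/yourmodule/openinference" // hypothetical import path for this package
    )

    func main() {
        // Redact input.value and all input messages on recorded spans.
        _ = os.Setenv(openinference.EnvHideInputs, "true")

        cfg := openinference.NewTraceConfigFromEnv()
        fmt.Println(cfg.HideInputs) // true (assuming boolean values are parsed as "true"/"false")
    }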
const (
    // SpanKind identifies the type of operation (required for all OpenInference spans).
    SpanKind = "openinference.span.kind"

    // SpanKindLLM indicates a Large Language Model operation.
    SpanKindLLM = "LLM"

    // SpanKindEmbedding indicates an Embedding operation.
    SpanKindEmbedding = "EMBEDDING"
)
OpenInference Span Kind constants.
These constants define the type of operation represented by a span. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md
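For example, marking a span as an LLM operation. A fragment, assuming span is an OpenTelemetry trace.Span and attribute is go.opentelemetry.io/otel/attribute:

    // Required on every OpenInference span.
    span.SetAttributes(attribute.String(openinference.SpanKind, openinference.SpanKindLLM))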
const (
    // LLMSystem identifies the AI system/product (e.g., "openai").
    LLMSystem = "llm.system"

    // LLMModelName specifies the model name (e.g., "gpt-4", "gpt-3.5-turbo").
    LLMModelName = "llm.model_name"

    // LLMInvocationParameters contains the invocation parameters as JSON string.
    LLMInvocationParameters = "llm.invocation_parameters"
)
LLM Operation constants.
These constants define attributes for Large Language Model operations, following OpenInference semantic conventions. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md#llm-spans
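On the same LLM span, the system, model, and parameters might be recorded like so (fragment; same assumptions as the fragment above, values illustrative):

    span.SetAttributes(
        attribute.String(openinference.LLMSystem, openinference.LLMSystemOpenAI),
        attribute.String(openinference.LLMModelName, "gpt-4"),
        // Invocation parameters are recorded as a single JSON string.
        attribute.String(openinference.LLMInvocationParameters, `{"temperature":0.2,"max_tokens":256}`),
    )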
const (
    // InputValue contains the input data as a string (typically JSON).
    InputValue = "input.value"

    // InputMimeType specifies the MIME type of the input data.
    InputMimeType = "input.mime_type"

    // OutputValue contains the output data as a string (typically JSON).
    OutputValue = "output.value"

    // OutputMimeType specifies the MIME type of the output data.
    OutputMimeType = "output.mime_type"

    // MimeTypeJSON for JSON content.
    MimeTypeJSON = "application/json"
)
Input/Output constants.
These constants define attributes for capturing input and output data. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md#inputoutput
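For example (fragment; same assumptions as the earlier fragments):

    span.SetAttributes(
        attribute.String(openinference.InputValue, `{"prompt":"Say hello"}`),
        attribute.String(openinference.InputMimeType, openinference.MimeTypeJSON),
        attribute.String(openinference.OutputValue, `{"text":"Hello!"}`),
        attribute.String(openinference.OutputMimeType, openinference.MimeTypeJSON),
    )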
const (
    // LLMPrompts prefix for prompt attributes in completions API.
    // Usage: llm.prompts.{index}.prompt.text
    // Prompts provided to a completions API, indexed starting from 0.
    LLMPrompts = "llm.prompts"

    // LLMChoices prefix for completion choice attributes in completions API.
    // Usage: llm.choices.{index}.completion.text
    // Text choices returned from a completions API, indexed starting from 0.
    LLMChoices = "llm.choices"
)
Completions API constants (Legacy Text Completion).
These constants define attributes for the legacy completions API, distinct from chat completions. They use an indexed attribute format with a discriminated union structure for future expansion. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md#completions-api-legacy-text-completion
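The PromptTextAttribute and ChoiceTextAttribute helpers documented below build the indexed keys, for example (fragment):

    // llm.prompts.0.prompt.text
    span.SetAttributes(attribute.String(openinference.PromptTextAttribute(0), "Say hello"))
    // llm.choices.0.completion.text
    span.SetAttributes(attribute.String(openinference.ChoiceTextAttribute(0), "Hello!"))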
const (
    // LLMInputMessages prefix for input message attributes.
    // Usage: llm.input_messages.{index}.message.role, llm.input_messages.{index}.message.content.
    LLMInputMessages = "llm.input_messages"

    // LLMOutputMessages prefix for output message attributes.
    // Usage: llm.output_messages.{index}.message.role, llm.output_messages.{index}.message.content.
    LLMOutputMessages = "llm.output_messages"

    // MessageRole suffix for message role (e.g., "user", "assistant", "system").
    MessageRole = "message.role"

    // MessageContent suffix for message content.
    MessageContent = "message.content"
)
LLM Message constants.
These constants define attributes for LLM input and output messages using a flattened attribute format. Messages are indexed starting from 0. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md#llm-spans
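The InputMessageAttribute helper documented below builds the flattened keys, for example (fragment):

    // llm.input_messages.0.message.role and llm.input_messages.0.message.content
    span.SetAttributes(
        attribute.String(openinference.InputMessageAttribute(0, openinference.MessageRole), "user"),
        attribute.String(openinference.InputMessageAttribute(0, openinference.MessageContent), "What is OpenInference?"),
    )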
const (
    // LLMTokenCountPrompt contains the number of tokens in the prompt.
    LLMTokenCountPrompt = "llm.token_count.prompt" // #nosec G101

    // LLMTokenCountCompletion contains the number of tokens in the completion.
    LLMTokenCountCompletion = "llm.token_count.completion" // #nosec G101

    // LLMTokenCountTotal contains the total number of tokens.
    LLMTokenCountTotal = "llm.token_count.total" // #nosec G101
)
Token Count constants.
These constants define attributes for token usage tracking. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md#llm-spans
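For example (fragment; counts illustrative, same assumptions as the earlier fragments):

    span.SetAttributes(
        attribute.Int(openinference.LLMTokenCountPrompt, 42),
        attribute.Int(openinference.LLMTokenCountCompletion, 7),
        attribute.Int(openinference.LLMTokenCountTotal, 49),
    )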
const (
    // LLMTools contains the list of available tools as JSON.
    // Format: llm.tools.{index}.tool.json_schema.
    LLMTools = "llm.tools"

    // MessageToolCalls prefix for tool calls in messages.
    // Format: message.tool_calls.{index}.tool_call.{attribute}.
    MessageToolCalls = "message.tool_calls"

    // ToolCallID suffix for tool call ID.
    ToolCallID = "tool_call.id"

    // ToolCallFunctionName suffix for function name in a tool call.
    ToolCallFunctionName = "tool_call.function.name"

    // ToolCallFunctionArguments suffix for function arguments as JSON string.
    ToolCallFunctionArguments = "tool_call.function.arguments"
)
Tool Call constants.
These constants define attributes for function/tool calling in LLM operations. Used when LLM responses include tool calls. Reference: Python OpenAI instrumentation (not in core spec).
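The OutputMessageToolCallAttribute helper documented below builds the doubly indexed keys, for example (fragment; values illustrative):

    // llm.output_messages.0.message.tool_calls.0.tool_call.function.name and .arguments
    span.SetAttributes(
        attribute.String(openinference.OutputMessageToolCallAttribute(0, 0, openinference.ToolCallFunctionName), "get_weather"),
        attribute.String(openinference.OutputMessageToolCallAttribute(0, 0, openinference.ToolCallFunctionArguments), `{"city":"Tokyo"}`),
    )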
const (
    // LLMTokenCountPromptCacheHit represents the number of prompt tokens successfully
    // retrieved from cache (cache hits). This enables tracking of cache efficiency
    // and cost savings from cached prompts.
    LLMTokenCountPromptCacheHit = "llm.token_count.prompt_details.cache_read" // #nosec G101

    // LLMTokenCountPromptAudio represents the number of audio tokens in the prompt.
    // Used for multimodal models that support audio input.
    LLMTokenCountPromptAudio = "llm.token_count.prompt_details.audio" // #nosec G101

    // LLMTokenCountCompletionReasoning represents the number of tokens used for
    // reasoning or chain-of-thought processes in the completion. This helps track
    // the computational cost of complex reasoning tasks.
    LLMTokenCountCompletionReasoning = "llm.token_count.completion_details.reasoning" // #nosec G101

    // LLMTokenCountCompletionAudio represents the number of audio tokens in the
    // completion. Used for models that generate audio output.
    LLMTokenCountCompletionAudio = "llm.token_count.completion_details.audio" // #nosec G101
)
Extended Token Count constants.
These constants define additional token count attributes for detailed usage tracking. They provide granular information about token consumption for cost analysis and performance monitoring. Reference: OpenInference specification and Python OpenAI instrumentation.
const (
    // EmbeddingModelName specifies the name of the embedding model.
    // Example: "text-embedding-3-small"
    EmbeddingModelName = "embedding.model_name"

    // EmbeddingInvocationParameters contains the invocation parameters as JSON string.
    // This includes parameters sent to the model excluding input.
    // Example: {"model": "text-embedding-3-small", "encoding_format": "float"}
    EmbeddingInvocationParameters = "embedding.invocation_parameters"

    // EmbeddingEmbeddings is the prefix for embedding data attributes in batch operations.
    // Forms the base for nested attributes like embedding.embeddings.{index}.embedding.text
    // and embedding.embeddings.{index}.embedding.vector.
    EmbeddingEmbeddings = "embedding.embeddings"
)
Embedding Operation constants.
These constants define attributes for Embedding operations. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/embedding_spans.md
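The EmbeddingTextAttribute and EmbeddingVectorAttribute helpers documented below build the indexed keys, for example (fragment; recording the vector as a float slice is one plausible encoding):

    // embedding.embeddings.0.embedding.text
    span.SetAttributes(attribute.String(openinference.EmbeddingTextAttribute(0), "hello world"))
    // embedding.embeddings.0.embedding.vector
    span.SetAttributes(attribute.Float64Slice(openinference.EmbeddingVectorAttribute(0), []float64{0.1, 0.2, 0.3}))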
const (
    // LLMSystemOpenAI for OpenAI systems.
    LLMSystemOpenAI = "openai"
)
LLMSystem Values.
const RedactedValue = "__REDACTED__"
RedactedValue is the value used when content is hidden for privacy.
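A redaction sketch (the helper is hypothetical, not part of this package's API):

    // redactIf returns RedactedValue instead of value when hide is set.
    func redactIf(hide bool, value string) string {
        if hide {
            return openinference.RedactedValue
        }
        return value
    }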
Variables ¶
This section is empty.
Functions ¶
func ChoiceTextAttribute ¶ added in v0.4.0
func ChoiceTextAttribute(index int) string
ChoiceTextAttribute creates an attribute key for completion choice text. Format: llm.choices.{index}.completion.text
Example: llm.choices.0.completion.text, llm.choices.1.completion.text
The nested structure (.completion.text) uses a discriminated union pattern that mirrors the prompt structure, allowing for future expansion with additional choice metadata.
func EmbeddingTextAttribute ¶ added in v0.4.0
func EmbeddingTextAttribute(index int) string
EmbeddingTextAttribute creates an attribute key for embedding input text. Format: embedding.embeddings.{index}.embedding.text
Text attributes are populated ONLY when the input is already text (strings). These attributes are recorded during the request phase to ensure availability even on errors.
Token IDs (pre-tokenized integer arrays) are NOT decoded to text because:
- Cross-provider incompatibility: Same token IDs represent different text across tokenizers (OpenAI uses cl100k_base, Ollama uses BERT/WordPiece/etc.)
- Runtime impossibility: OpenAI-compatible APIs may serve any model with unknown tokenizers
- Heavy dependencies: Supporting all tokenizers would require libraries beyond tiktoken
func EmbeddingVectorAttribute ¶ added in v0.4.0
func EmbeddingVectorAttribute(index int) string
EmbeddingVectorAttribute creates an attribute key for embedding output vector. Format: embedding.embeddings.{index}.embedding.vector
Vector attributes MUST contain float arrays, regardless of the API response format:
- Float response format: Store vectors directly as float arrays
- Base64 response format: MUST decode base64-encoded strings to float arrays before recording (base64 encoding is ~25% more compact in transmission but must be decoded for consistency). Example: "AACAPwAAAEA=" → [1.0, 2.0]
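A decoding sketch using only the Go standard library, assuming the API returns little-endian IEEE-754 float32 values (as OpenAI's base64 encoding_format does):

    import (
        "encoding/base64"
        "encoding/binary"
        "fmt"
        "math"
    )

    // decodeBase64Vector converts a base64-encoded embedding into float32 values.
    // decodeBase64Vector("AACAPwAAAEA=") returns []float32{1.0, 2.0}.
    func decodeBase64Vector(s string) ([]float32, error) {
        raw, err := base64.StdEncoding.DecodeString(s)
        if err != nil {
            return nil, err
        }
        if len(raw)%4 != 0 {
            return nil, fmt.Errorf("embedding byte length %d is not a multiple of 4", len(raw))
        }
        out := make([]float32, len(raw)/4)
        for i := range out {
            // Each 4-byte little-endian word is one IEEE-754 float32.
            out[i] = math.Float32frombits(binary.LittleEndian.Uint32(raw[i*4:]))
        }
        return out, nil
    }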
func InputMessageAttribute ¶
func InputMessageAttribute(index int, suffix string) string
InputMessageAttribute creates an attribute key for input messages.
func InputMessageContentAttribute ¶
func InputMessageContentAttribute(messageIndex, contentIndex int, suffix string) string
InputMessageContentAttribute creates an attribute key for input message content.
func OutputMessageAttribute ¶
func OutputMessageAttribute(index int, suffix string) string
OutputMessageAttribute creates an attribute key for output messages.
func OutputMessageToolCallAttribute ¶
func OutputMessageToolCallAttribute(messageIndex, toolCallIndex int, suffix string) string
OutputMessageToolCallAttribute creates an attribute key for a tool call.
func PromptTextAttribute ¶ added in v0.4.0
func PromptTextAttribute(index int) string
PromptTextAttribute creates an attribute key for prompt text in completions API. Format: llm.prompts.{index}.prompt.text
Example: llm.prompts.0.prompt.text, llm.prompts.1.prompt.text
The nested structure (.prompt.text) uses a discriminated union pattern that mirrors llm.input_messages and llm.output_messages, allowing for future expansion.
func RequireAttributesEqual ¶
func RequireAttributesEqual(t *testing.T, expected, actual []attribute.KeyValue)
RequireAttributesEqual asserts that the expected and actual attributes are equal, compensating for Go not guaranteeing a stable JSON field marshaling order.
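func RequireEventsEqual ¶
func RequireEventsEqual(t *testing.T, expected, actual []trace.Event)
RequireEventsEqual asserts that the expected and actual span events are equal.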
Types ¶
type TraceConfig ¶
type TraceConfig struct {
    // HideLLMInvocationParameters controls whether LLM invocation parameters are hidden.
    // This is independent of HideInputs.
    HideLLMInvocationParameters bool

    // HideInputs controls whether input values and messages are hidden.
    // When true, hides both input.value and all input messages.
    HideInputs bool

    // HideOutputs controls whether output values and messages are hidden.
    // When true, hides both output.value and all output messages.
    HideOutputs bool

    // HideInputMessages controls whether all input messages are hidden.
    // Input messages are hidden if either HideInputs OR HideInputMessages is true.
    HideInputMessages bool

    // HideOutputMessages controls whether all output messages are hidden.
    // Output messages are hidden if either HideOutputs OR HideOutputMessages is true.
    HideOutputMessages bool

    // HideInputImages controls whether images from input messages are hidden.
    // Only applies when input messages are not already hidden.
    HideInputImages bool

    // HideInputText controls whether text from input messages is hidden.
    // Only applies when input messages are not already hidden.
    HideInputText bool

    // HideOutputText controls whether text from output messages is hidden.
    // Only applies when output messages are not already hidden.
    HideOutputText bool

    // HideEmbeddingsText controls whether embedding text is hidden.
    // Maps to OPENINFERENCE_HIDE_EMBEDDINGS_TEXT environment variable.
    // When true, embedding.embeddings.N.embedding.text attributes contain "__REDACTED__".
    HideEmbeddingsText bool

    // HideEmbeddingsVectors controls whether embedding vectors are hidden.
    // Maps to OPENINFERENCE_HIDE_EMBEDDINGS_VECTORS environment variable.
    // When true, embedding.embeddings.N.embedding.vector attributes contain "__REDACTED__".
    HideEmbeddingsVectors bool

    // Base64ImageMaxLength limits the characters of a base64 encoding of an image.
    Base64ImageMaxLength int

    // HidePrompts controls whether LLM prompts are hidden.
    // Maps to OPENINFERENCE_HIDE_PROMPTS environment variable.
    // Only applies to completions API (not chat completions).
    // When true, llm.prompts.N.prompt.text attributes contain "__REDACTED__".
    HidePrompts bool

    // HideChoices controls whether LLM choices are hidden.
    // Maps to OPENINFERENCE_HIDE_CHOICES environment variable.
    // Only applies to completions API outputs (not chat completions).
    // When true, llm.choices.N.completion.text attributes contain "__REDACTED__".
    HideChoices bool
}
TraceConfig helps you modify the observability level of your tracing. For instance, you may want to keep sensitive information from being logged for security reasons, or you may want to limit the size of base64-encoded images to reduce payloads.
Use NewTraceConfig to create this from defaults or NewTraceConfigFromEnv to prioritize environment variables.
This implementation follows the OpenInference configuration specification: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md
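For example (sketch; field behavior as documented above):

    cfg := openinference.NewTraceConfig()
    cfg.HideEmbeddingsVectors = true // record "__REDACTED__" instead of raw vectors
    cfg.Base64ImageMaxLength = 32000 // cap recorded base64 image data at 32,000 characters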
func NewTraceConfig ¶
func NewTraceConfig() *TraceConfig
NewTraceConfig creates a new TraceConfig with default values.
See: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md
func NewTraceConfigFromEnv ¶
func NewTraceConfigFromEnv() *TraceConfig
NewTraceConfigFromEnv creates a new TraceConfig with values from environment variables or their corresponding defaults.
See: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md