Documentation ¶
Overview ¶
Package openinference provides OpenInference semantic conventions for OpenTelemetry tracing.
Index ¶
- Constants
- func InputMessageAttribute(index int, suffix string) string
- func InputMessageContentAttribute(messageIndex, contentIndex int, suffix string) string
- func OutputMessageAttribute(index int, suffix string) string
- func OutputMessageToolCallAttribute(messageIndex, toolCallIndex int, suffix string) string
- func RequireAttributesEqual(t *testing.T, expected, actual []attribute.KeyValue)
- func RequireEventsEqual(t *testing.T, expected, actual []trace.Event)
- type TraceConfig
Constants ¶
const (
	// EnvHideLLMInvocationParameters is the environment variable for TraceConfig.HideLLMInvocationParameters.
	EnvHideLLMInvocationParameters = "OPENINFERENCE_HIDE_LLM_INVOCATION_PARAMETERS"
	// EnvHideInputs is the environment variable for TraceConfig.HideInputs.
	EnvHideInputs = "OPENINFERENCE_HIDE_INPUTS"
	// EnvHideOutputs is the environment variable for TraceConfig.HideOutputs.
	EnvHideOutputs = "OPENINFERENCE_HIDE_OUTPUTS"
	// EnvHideInputMessages is the environment variable for TraceConfig.HideInputMessages.
	EnvHideInputMessages = "OPENINFERENCE_HIDE_INPUT_MESSAGES"
	// EnvHideOutputMessages is the environment variable for TraceConfig.HideOutputMessages.
	EnvHideOutputMessages = "OPENINFERENCE_HIDE_OUTPUT_MESSAGES"
	// EnvHideInputImages is the environment variable for TraceConfig.HideInputImages.
	EnvHideInputImages = "OPENINFERENCE_HIDE_INPUT_IMAGES"
	// EnvHideInputText is the environment variable for TraceConfig.HideInputText.
	EnvHideInputText = "OPENINFERENCE_HIDE_INPUT_TEXT"
	// EnvHideOutputText is the environment variable for TraceConfig.HideOutputText.
	EnvHideOutputText = "OPENINFERENCE_HIDE_OUTPUT_TEXT"
	// EnvHideEmbeddingVectors is the environment variable for TraceConfig.HideEmbeddingVectors.
	EnvHideEmbeddingVectors = "OPENINFERENCE_HIDE_EMBEDDING_VECTORS"
	// EnvBase64ImageMaxLength is the environment variable for TraceConfig.Base64ImageMaxLength.
	EnvBase64ImageMaxLength = "OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH"
	// EnvHidePrompts is the environment variable for TraceConfig.HidePrompts.
	EnvHidePrompts = "OPENINFERENCE_HIDE_PROMPTS"
)
Environment variable names for trace configuration following Python OpenInference conventions. These environment variables control the privacy and observability settings for OpenInference tracing. See: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md
const (
	// SpanKind identifies the type of operation (required for all OpenInference spans).
	SpanKind = "openinference.span.kind"
	// SpanKindLLM indicates a Large Language Model operation.
	SpanKindLLM = "LLM"
)
OpenInference Span Kind constants.
These constants define the type of operation represented by a span. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md
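In practice the span kind is attached as an ordinary string attribute at span start (with the OpenTelemetry API, via attribute.String(SpanKind, SpanKindLLM)). A dependency-free sketch using plain strings in place of attribute.KeyValue:

```go
package main

import "fmt"

const (
	SpanKind    = "openinference.span.kind"
	SpanKindLLM = "LLM"
)

// llmSpanAttributes returns the attributes an OpenInference LLM span
// carries. With the OpenTelemetry SDK these would be attribute.KeyValue
// values; a string map keeps this sketch free of external dependencies.
func llmSpanAttributes(model string) map[string]string {
	return map[string]string{
		SpanKind:         SpanKindLLM,
		"llm.model_name": model,
	}
}

func main() {
	attrs := llmSpanAttributes("gpt-4")
	fmt.Println(attrs[SpanKind], attrs["llm.model_name"]) // LLM gpt-4
}
```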
const (
	// LLMSystem identifies the AI system/product (e.g., "openai").
	LLMSystem = "llm.system"
	// LLMModelName specifies the model name (e.g., "gpt-4", "gpt-3.5-turbo").
	LLMModelName = "llm.model_name"
	// LLMInvocationParameters contains the invocation parameters as a JSON string.
	LLMInvocationParameters = "llm.invocation_parameters"
)
LLM Operation constants.
These constants define attributes for Large Language Model operations following OpenInference semantic conventions. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md#llm-spans
const (
	// InputValue contains the input data as a string (typically JSON).
	InputValue = "input.value"
	// InputMimeType specifies the MIME type of the input data.
	InputMimeType = "input.mime_type"
	// OutputValue contains the output data as a string (typically JSON).
	OutputValue = "output.value"
	// OutputMimeType specifies the MIME type of the output data.
	OutputMimeType = "output.mime_type"
	// MimeTypeJSON for JSON content.
	MimeTypeJSON = "application/json"
)
Input/Output constants.
These constants define attributes for capturing input and output data. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md#inputoutput
const (
	// LLMInputMessages prefix for input message attributes.
	// Usage: llm.input_messages.{index}.message.role, llm.input_messages.{index}.message.content.
	LLMInputMessages = "llm.input_messages"
	// LLMOutputMessages prefix for output message attributes.
	// Usage: llm.output_messages.{index}.message.role, llm.output_messages.{index}.message.content.
	LLMOutputMessages = "llm.output_messages"
	// MessageRole suffix for message role (e.g., "user", "assistant", "system").
	MessageRole = "message.role"
	// MessageContent suffix for message content.
	MessageContent = "message.content"
)
LLM Message constants.
These constants define attributes for LLM input and output messages using a flattened attribute format. Messages are indexed starting from 0. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md#llm-spans
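The flattened key layout can be sketched as follows; messageAttr is an illustrative helper rather than the package's API, with the joining rule (prefix, index, suffix separated by dots) taken from the usage notes above:

```go
package main

import "fmt"

// messageAttr builds a flattened message attribute key such as
// "llm.input_messages.0.message.role" by joining prefix, index, and
// suffix with dots, following the documented key layout.
func messageAttr(prefix string, index int, suffix string) string {
	return fmt.Sprintf("%s.%d.%s", prefix, index, suffix)
}

func main() {
	fmt.Println(messageAttr("llm.input_messages", 0, "message.role"))     // llm.input_messages.0.message.role
	fmt.Println(messageAttr("llm.output_messages", 1, "message.content")) // llm.output_messages.1.message.content
}
```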
const (
	// LLMTokenCountPrompt contains the number of tokens in the prompt.
	LLMTokenCountPrompt = "llm.token_count.prompt" // #nosec G101
	// LLMTokenCountCompletion contains the number of tokens in the completion.
	LLMTokenCountCompletion = "llm.token_count.completion" // #nosec G101
	// LLMTokenCountTotal contains the total number of tokens.
	LLMTokenCountTotal = "llm.token_count.total" // #nosec G101
)
Token Count constants.
These constants define attributes for token usage tracking. Reference: https://github.com/Arize-ai/openinference/blob/main/spec/semantic_conventions.md#llm-spans
const (
	// LLMTools contains the list of available tools as JSON.
	// Format: llm.tools.{index}.tool.json_schema.
	LLMTools = "llm.tools"
	// MessageToolCalls prefix for tool calls in messages.
	// Format: message.tool_calls.{index}.tool_call.{attribute}.
	MessageToolCalls = "message.tool_calls"
	// ToolCallID suffix for tool call ID.
	ToolCallID = "tool_call.id"
	// ToolCallFunctionName suffix for function name in a tool call.
	ToolCallFunctionName = "tool_call.function.name"
	// ToolCallFunctionArguments suffix for function arguments as JSON string.
	ToolCallFunctionArguments = "tool_call.function.arguments"
)
Tool Call constants.
These constants define attributes for function/tool calling in LLM operations. Used when LLM responses include tool calls. Reference: Python OpenAI instrumentation (not in core spec).
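Tool-call keys nest under an output message, combining the message prefix with the MessageToolCalls prefix and a tool-call suffix. The composition below is a sketch inferred from the documented formats; toolCallAttr is a hypothetical helper, not the package's API:

```go
package main

import "fmt"

// toolCallAttr composes a flattened tool-call key such as
// "llm.output_messages.0.message.tool_calls.0.tool_call.function.name"
// from the message index, the tool-call index, and a tool-call suffix.
func toolCallAttr(messageIndex, toolCallIndex int, suffix string) string {
	return fmt.Sprintf("llm.output_messages.%d.message.tool_calls.%d.%s",
		messageIndex, toolCallIndex, suffix)
}

func main() {
	fmt.Println(toolCallAttr(0, 0, "tool_call.function.name"))
	fmt.Println(toolCallAttr(0, 1, "tool_call.function.arguments"))
}
```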
const (
	// LLMTokenCountPromptCacheHit represents the number of prompt tokens successfully
	// retrieved from cache (cache hits). This enables tracking of cache efficiency
	// and cost savings from cached prompts.
	LLMTokenCountPromptCacheHit = "llm.token_count.prompt_details.cache_read" // #nosec G101
	// LLMTokenCountPromptAudio represents the number of audio tokens in the prompt.
	// Used for multimodal models that support audio input.
	LLMTokenCountPromptAudio = "llm.token_count.prompt_details.audio" // #nosec G101
	// LLMTokenCountCompletionReasoning represents the number of tokens used for
	// reasoning or chain-of-thought processes in the completion. This helps track
	// the computational cost of complex reasoning tasks.
	LLMTokenCountCompletionReasoning = "llm.token_count.completion_details.reasoning" // #nosec G101
	// LLMTokenCountCompletionAudio represents the number of audio tokens in the
	// completion. Used for models that generate audio output.
	LLMTokenCountCompletionAudio = "llm.token_count.completion_details.audio" // #nosec G101
)
Extended Token Count constants.
These constants define additional token count attributes for detailed usage tracking. They provide granular information about token consumption for cost analysis and performance monitoring. Reference: OpenInference specification and Python OpenAI instrumentation.
const (
// LLMSystemOpenAI for OpenAI systems.
LLMSystemOpenAI = "openai"
)
LLMSystem Values.
const RedactedValue = "__REDACTED__"
RedactedValue is the value used when content is hidden for privacy.
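When a hide flag is active, the original content is replaced wholesale with this sentinel rather than masked character by character. A minimal sketch of that substitution (maybeRedact is a hypothetical helper, not part of the package):

```go
package main

import "fmt"

const RedactedValue = "__REDACTED__"

// maybeRedact replaces the value with the RedactedValue sentinel when
// hiding is enabled, and passes it through unchanged otherwise.
func maybeRedact(hide bool, value string) string {
	if hide {
		return RedactedValue
	}
	return value
}

func main() {
	fmt.Println(maybeRedact(true, `{"prompt":"secret"}`))  // __REDACTED__
	fmt.Println(maybeRedact(false, `{"prompt":"public"}`)) // {"prompt":"public"}
}
```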
Variables ¶
This section is empty.
Functions ¶
func InputMessageAttribute ¶
func InputMessageAttribute(index int, suffix string) string
InputMessageAttribute creates an attribute key for input messages.
func InputMessageContentAttribute ¶
func InputMessageContentAttribute(messageIndex, contentIndex int, suffix string) string
InputMessageContentAttribute creates an attribute key for input message content.
func OutputMessageAttribute ¶
func OutputMessageAttribute(index int, suffix string) string
OutputMessageAttribute creates an attribute key for output messages.
func OutputMessageToolCallAttribute ¶
func OutputMessageToolCallAttribute(messageIndex, toolCallIndex int, suffix string) string
OutputMessageToolCallAttribute creates an attribute key for a tool call.
func RequireAttributesEqual ¶
func RequireAttributesEqual(t *testing.T, expected, actual []attribute.KeyValue)
RequireAttributesEqual compensates for Go not having a reliable JSON field marshaling order.
func RequireEventsEqual ¶
func RequireEventsEqual(t *testing.T, expected, actual []trace.Event)
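The order-insensitive JSON comparison RequireAttributesEqual needs can be sketched as follows. This is not the helper's actual implementation (which works on attribute.KeyValue slices); it only illustrates why normalizing JSON before comparing is necessary:

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// jsonEqual reports whether two JSON documents are equal regardless of
// field order, by unmarshaling both into generic values and comparing
// the results structurally.
func jsonEqual(a, b string) bool {
	var va, vb any
	if json.Unmarshal([]byte(a), &va) != nil || json.Unmarshal([]byte(b), &vb) != nil {
		return false
	}
	return reflect.DeepEqual(va, vb)
}

func main() {
	// Same object, different field order: byte comparison fails, structural passes.
	fmt.Println(jsonEqual(`{"x":1,"y":2}`, `{"y":2,"x":1}`)) // true
	fmt.Println(jsonEqual(`{"x":1}`, `{"x":2}`))             // false
}
```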
Types ¶
type TraceConfig ¶
type TraceConfig struct {
// HideLLMInvocationParameters controls whether LLM invocation parameters are hidden.
// This is independent of HideInputs.
HideLLMInvocationParameters bool
// HideInputs controls whether input values and messages are hidden.
// When true, hides both input.value and all input messages.
HideInputs bool
// HideOutputs controls whether output values and messages are hidden.
// When true, hides both output.value and all output messages.
HideOutputs bool
// HideInputMessages controls whether all input messages are hidden.
// Input messages are hidden if either HideInputs OR HideInputMessages is true.
HideInputMessages bool
// HideOutputMessages controls whether all output messages are hidden.
// Output messages are hidden if either HideOutputs OR HideOutputMessages is true.
HideOutputMessages bool
// HideInputImages controls whether images from input messages are hidden.
// Only applies when input messages are not already hidden.
HideInputImages bool
// HideInputText controls whether text from input messages is hidden.
// Only applies when input messages are not already hidden.
HideInputText bool
// HideOutputText controls whether text from output messages is hidden.
// Only applies when output messages are not already hidden.
HideOutputText bool
// HideEmbeddingVectors controls whether embedding vectors are hidden.
HideEmbeddingVectors bool
// Base64ImageMaxLength limits the characters of a base64 encoding of an image.
Base64ImageMaxLength int
// HidePrompts controls whether LLM prompts are hidden.
HidePrompts bool
}
TraceConfig helps you modify the observability level of your tracing. For instance, you may want to keep sensitive information from being logged for security reasons, or you may want to limit the size of base64-encoded images to reduce payloads.
Use NewTraceConfig to create this from defaults or NewTraceConfigFromEnv to prioritize environment variables.
This implementation follows the OpenInference configuration specification: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md
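The documented interaction between the field pairs (input messages are hidden if either HideInputs or HideInputMessages is set, and likewise for outputs) can be sketched with a trimmed-down mirror of the struct; this is illustrative, not the package's internal logic:

```go
package main

import "fmt"

// TraceConfig mirrors only the fields used in this sketch.
type TraceConfig struct {
	HideInputs        bool
	HideInputMessages bool
}

// hideInputMessages applies the documented rule: input messages are
// hidden when either HideInputs OR HideInputMessages is true.
func hideInputMessages(c *TraceConfig) bool {
	return c.HideInputs || c.HideInputMessages
}

func main() {
	fmt.Println(hideInputMessages(&TraceConfig{HideInputs: true}))        // true
	fmt.Println(hideInputMessages(&TraceConfig{HideInputMessages: true})) // true
	fmt.Println(hideInputMessages(&TraceConfig{}))                        // false
}
```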
func NewTraceConfig ¶
func NewTraceConfig() *TraceConfig
NewTraceConfig creates a new TraceConfig with default values.
See: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md
func NewTraceConfigFromEnv ¶
func NewTraceConfigFromEnv() *TraceConfig
NewTraceConfigFromEnv creates a new TraceConfig with values from environment variables or their corresponding defaults.
See: https://github.com/Arize-ai/openinference/blob/main/spec/configuration.md