openai

package v0.17.0
Published: Jan 30, 2025 License: Apache-2.0 Imports: 10 Imported by: 0

Documentation
Overview

The openai package provides objects that conform to the OpenAI API specification. It can be used with OpenAI, as well as other services that conform to the OpenAI API.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AssistantMessage

type AssistantMessage[T string | []AssistantMessageContentPart] struct {

	// The content of the message.
	Content T

	// An optional name for the participant.
	// Provides the model information to differentiate between participants of the same role.
	Name string

	// The refusal message generated by the model, if any.
	Refusal string

	// The tool calls generated by the model, such as function calls.
	ToolCalls []ToolCall

	// Data about a previous audio response from the model.
	Audio *AudioRef
}

An assistant message object, representing a message previously generated by the model.

func NewAssistantMessage

func NewAssistantMessage[T string | []AssistantMessageContentPart](content T) *AssistantMessage[T]

Creates a new assistant message object.

func NewAssistantMessageFromCompletionMessage added in v0.16.6

func NewAssistantMessageFromCompletionMessage(cm *CompletionMessage) *AssistantMessage[string]

Creates a new assistant message object from a completion message, so it can be used in a conversation.

func NewAssistantMessageFromParts added in v0.16.6

func NewAssistantMessageFromParts(parts ...AssistantMessageContentPart) *AssistantMessage[[]AssistantMessageContentPart]

Creates a new assistant message object from multiple content parts.

func (AssistantMessage[T]) MarshalJSON added in v0.16.6

func (m AssistantMessage[T]) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the AssistantMessage object.

func (*AssistantMessage[T]) Role added in v0.16.6

func (m *AssistantMessage[T]) Role() string

The role of the author of this message, in this case "assistant".

type AssistantMessageContentPart added in v0.16.6

type AssistantMessageContentPart interface {
	ContentPart
	// contains filtered or unexported methods
}

An interface for an assistant message content part.

type Audio added in v0.16.6

type Audio struct {

	// The raw audio data.
	Data []byte `json:"data"`

	// The format of the audio data, such as "wav" or "mp3".
	// The format must be a valid audio format supported by the model.
	Format string `json:"format"`
}

An audio object, used to represent audio in a content part.

type AudioContentPart added in v0.16.6

type AudioContentPart struct {
	// The audio information.
	Audio Audio
}

An audio content part.

func NewAudioContentPartFromData added in v0.16.6

func NewAudioContentPartFromData(data []byte, format string) *AudioContentPart

Creates a new audio content part from raw audio data. The model must support audio input for this to work. The format parameter must be a valid audio format supported by the model, such as "wav" or "mp3".

func (AudioContentPart) MarshalJSON added in v0.16.6

func (m AudioContentPart) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the AudioContentPart object.

func (*AudioContentPart) Type added in v0.16.6

func (*AudioContentPart) Type() string

The type of this content part, in this case "input_audio".

type AudioOutput added in v0.16.6

type AudioOutput struct {

	// Unique identifier for this audio response.
	Id string `json:"id"`

	// The time at which this audio content will no longer be accessible on the server for use in multi-turn conversations.
	ExpiresAt time.Time `json:"expires_at"`

	// The raw audio data, in the format specified in the request.
	Data []byte `json:"data"`

	// Transcript of the audio generated by the model.
	Transcript string `json:"transcript"`
}

An audio output object generated by the model when using audio modality.

func (*AudioOutput) UnmarshalJSON added in v0.16.6

func (a *AudioOutput) UnmarshalJSON(data []byte) error

Implements the json.Unmarshaler interface to deserialize the AudioOutput object.

type AudioParameters added in v0.16.6

type AudioParameters struct {

	// The voice the model should use for audio output, such as "ash" or "ballad".
	// See the model's documentation for a list of all supported voices.
	Voice string `json:"voice"`

	// The format of the audio data, such as "wav" or "mp3".
	// See the model's documentation for a list of all supported formats.
	Format string `json:"format"`
}

Parameters for audio output. Required when audio output is requested in the modalities field.

func NewAudioParameters added in v0.16.6

func NewAudioParameters(voice, format string) *AudioParameters

Creates a new audio output parameters object.

  • The voice parameter is the voice the model should use for audio output, such as "ash" or "ballad".
  • The format parameter is the format of the audio data, such as "wav" or "mp3".

See the model's documentation for a list of all supported voices and formats.

type AudioRef added in v0.16.6

type AudioRef struct {

	// Unique identifier for a previous audio response from the model.
	Id string `json:"id"`
}

Represents a reference to a previous audio response from the model.

type ChatModel

type ChatModel struct {
	// contains filtered or unexported fields
}

Provides input and output types that conform to the OpenAI Chat API, as described in the API Reference docs.

func (*ChatModel) CreateInput

func (m *ChatModel) CreateInput(messages ...RequestMessage) (*ChatModelInput, error)

Creates an input object for the OpenAI Chat API.

type ChatModelInput

type ChatModelInput struct {

	// The name of the model to use for the chat.
	//
	// Must be the exact string expected by the model provider.
	// For example, "gpt-3.5-turbo".
	Model string `json:"model"`

	// The list of messages to send to the chat model.
	Messages []RequestMessage `json:"messages"`

	// Output types that you want the model to generate.
	// Text modality is implied if no modalities are specified.
	Modalities []Modality `json:"modalities,omitempty"`

	// Parameters for audio output.
	// Required when audio output is requested in the modalities field.
	Audio *AudioParameters `json:"audio,omitempty"`

	// Number between -2.0 and 2.0.
	//
	// Positive values penalize new tokens based on their existing frequency in the text so far,
	// decreasing the model's likelihood to repeat the same line verbatim.
	FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`

	// Modifies the likelihood of specified tokens appearing in the completion.
	//
	// Accepts an object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100.
	// Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model,
	// but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban
	// or exclusive selection of the relevant token.
	LogitBias map[string]float64 `json:"logit_bias,omitempty"`

	// Whether to return log probabilities of the output tokens or not.
	//
	// If true, returns the log probabilities of each output token returned in the content of the message.
	Logprobs bool `json:"logprobs,omitempty"`

	// An integer between 0 and 20 specifying the number of most likely tokens to return at each token position,
	// each with an associated log probability. [Logprobs] must be set to true if this parameter is used.
	TopLogprobs int `json:"top_logprobs,omitempty"`

	// The maximum number of tokens to generate in the chat completion.
	//
	// The default (0) is equivalent to 4096.
	//
	// Deprecated: Use the MaxCompletionTokens parameter instead, unless the model specifically requires passing "max_tokens".
	MaxTokens int `json:"max_tokens,omitempty"`

	// The maximum number of tokens to generate in the chat completion.
	//
	// The default (0) is equivalent to 4096.
	MaxCompletionTokens int `json:"max_completion_tokens,omitempty"`

	// The number of completions to generate for each prompt.
	//
	// The default (0) is equivalent to 1.
	N int `json:"n,omitempty"`

	// Number between -2.0 and 2.0.
	//
	// Positive values penalize new tokens based on whether they appear in the text so far,
	// increasing the model's likelihood to talk about new topics.
	PresencePenalty float64 `json:"presence_penalty,omitempty"`

	// Specifies the requested format for the response.
	//  - ResponseFormatText requests a plain text string.
	//  - ResponseFormatJson requests a JSON object.
	//  - ResponseFormatJsonSchema requests a JSON object that conforms to the provided JSON schema.
	//
	// The default is ResponseFormatText.
	ResponseFormat ResponseFormat `json:"response_format"`

	// If specified, the model will make a best effort to sample deterministically,
	// such that repeated requests with the same seed and parameters should return
	// the same result.
	//
	// Determinism is not guaranteed, and you should use the SystemFingerprint response
	// parameter to monitor changes in the backend.
	//
	// The default (0) is equivalent to a random seed.
	Seed int `json:"seed,omitempty"`

	// Specifies the latency tier to use for processing the request.
	// This is relevant for customers subscribed to the scale tier service of the model hosting platform.
	//
	// - If set to 'ServiceTierAuto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
	// - If set to 'ServiceTierAuto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
	// - If set to 'ServiceTierDefault', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
	// - When not set, the default behavior is 'ServiceTierAuto'.
	//
	// When this parameter is set, the response `serviceTier` property will indicate the service tier utilized.
	ServiceTier ServiceTier `json:"service_tier,omitempty"`

	// Up to 4 sequences where the API will stop generating further tokens.
	Stop []string `json:"stop,omitempty"`

	// A number between 0.0 and 2.0 that controls the sampling temperature.
	//
	// Higher values like 0.8 will make the output more random, while lower values
	// like 0.2 will make it more focused and deterministic.
	//
	// We generally recommend altering this or TopP but not both.
	//
	// The default value is 1.0.
	Temperature float64 `json:"temperature"`

	// An alternative to sampling with temperature, called nucleus sampling, where the model
	// considers the results of the tokens with TopP probability mass.
	//
	// For example, 0.1 means only the tokens comprising the top 10% probability mass are considered.
	//
	// We generally recommend altering this or Temperature but not both.
	//
	// The default value is 1.0.
	TopP float64 `json:"top_p"`

	// A list of tools the model may call. Currently, only functions are supported as a tool.
	// Use this to provide a list of functions the model may generate JSON inputs for.
	// A max of 128 functions are supported.
	Tools []Tool `json:"tools,omitempty"`

	// Controls which (if any) tool is called by the model.
	//  - ToolChoiceNone means the model will not call any tool and instead generates a message.
	//  - ToolChoiceAuto means the model can pick between generating a message or calling one or more tools.
	//  - ToolChoiceRequired means the model must call one or more tools.
	//  - ToolChoiceFunction(name) forces the model to call a specific tool.
	//
	// The default is ToolChoiceAuto when tools are present, and ToolChoiceNone otherwise.
	ToolChoice ToolChoice `json:"tool_choice,omitempty"`

	// Whether to enable parallel function calling during tool use.
	//
	// The default is true.
	ParallelToolCalls bool `json:"parallel_tool_calls"`

	// The user ID to associate with the request, as described in the [documentation].
	// If not specified, the request will be anonymous.
	//
	// [documentation]: https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids
	User string `json:"user,omitempty"`
}

The input object for the OpenAI Chat API.

func (*ChatModelInput) MarshalJSON

func (mi *ChatModelInput) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the ChatModelInput object.

func (*ChatModelInput) RequestAudioOutput added in v0.16.6

func (ci *ChatModelInput) RequestAudioOutput(voice, format string)

Requests audio modality and sets the audio parameters for the input object.

  • The voice parameter is the voice the model should use for audio output, such as "ash" or "ballad".
  • The format parameter is the format of the audio data, such as "wav" or "mp3".

See the model's documentation for a list of all supported voices and formats.

type ChatModelOutput

type ChatModelOutput struct {

	// A unique identifier for the chat completion.
	Id string `json:"id"`

	// The name of the output object type returned by the API.
	// This will always be "chat.completion".
	Object string `json:"object"`

	// A list of chat completion choices. Can be more than one if n is greater than 1 in the input options.
	Choices []Choice `json:"choices"`

	// The timestamp of when the chat completion was created.
	Created time.Time `json:"created"`

	// The name of the model used to generate the chat.
	// In most cases, this will match the requested model field in the input.
	Model string `json:"model"`

	// The service tier used for processing the request.
	//
	// This field is only included if the ServiceTier parameter is specified in the request.
	ServiceTier ServiceTier `json:"service_tier"`

	// This fingerprint represents the OpenAI backend configuration that the model runs with.
	//
	// Can be used in conjunction with the Seed request parameter to understand when backend changes
	// have been made that might impact determinism.
	SystemFingerprint string `json:"system_fingerprint"`

	// The usage statistics for the request.
	Usage Usage `json:"usage"`
}

The output object for the OpenAI Chat API.

func (*ChatModelOutput) UnmarshalJSON added in v0.16.6

func (o *ChatModelOutput) UnmarshalJSON(data []byte) error

Implements the json.Unmarshaler interface to deserialize the ChatModelOutput object.

type Choice

type Choice struct {

	// The reason the model stopped generating tokens.
	//
	// Possible values are:
	//  - "stop" if the model hit a natural stop point or a provided stop sequence
	//  - "length" if the maximum number of tokens specified in the request was reached
	//  - "content_filter" if content was omitted due to a flag from content filters
	//  - "tool_calls" if the model called a tool
	FinishReason string `json:"finish_reason"`

	// The index of the choice in the list of choices.
	Index int `json:"index"`

	// A message generated by the model.
	Message CompletionMessage `json:"message"`

	// Log probability information for the choice.
	Logprobs Logprobs `json:"logprobs"`
}

A completion choice object returned in the response.

type CompletionMessage

type CompletionMessage struct {

	// The content of the message.
	Content string `json:"content"`

	// The refusal message generated by the model, if any.
	Refusal string `json:"refusal,omitempty"`

	// The tool calls generated by the model, such as function calls.
	ToolCalls []ToolCall `json:"tool_calls,omitempty"`

	// The audio output generated by the model, if any.
	// Used only when audio output is requested in the modalities field, and when the model and host support audio output.
	Audio *AudioOutput `json:"audio,omitempty"`
}

A chat completion message generated by the model.

Note that a completion message is not a valid request message. To use a completion message in a chat, convert it to an assistant message with the ToAssistantMessage method.

func (*CompletionMessage) Role added in v0.16.6

func (m *CompletionMessage) Role() string

The role of the author of this message, in this case "assistant".

func (*CompletionMessage) ToAssistantMessage added in v0.16.6

func (m *CompletionMessage) ToAssistantMessage() *AssistantMessage[string]

Converts the completion message to an assistant message, so it can be used in a conversation.

type ContentPart added in v0.16.6

type ContentPart interface {

	// The type of the content part.
	Type() string
	// contains filtered or unexported methods
}

An interface for a message content part.

type DeveloperContentPart added in v0.16.6

type DeveloperContentPart = SystemContentPart

An interface for a developer message content part.

type DeveloperMessage added in v0.16.6

type DeveloperMessage[T string | []DeveloperContentPart] struct {

	// The content of the message.
	Content T

	// An optional name for the participant.
	// Provides the model information to differentiate between participants of the same role.
	Name string
}

A developer message. Developer messages are used to provide setup instructions to the model.

Note that system and developer messages are identical in functionality, but the "system" role was renamed to "developer" in the OpenAI Chat API. Certain models may require one or the other, so use the type that matches the model's requirements.

func NewDeveloperMessage added in v0.16.6

func NewDeveloperMessage[T string | []DeveloperContentPart](content T) *DeveloperMessage[T]

Creates a new developer message object.

Note that system and developer messages are identical in functionality, but the "system" role was renamed to "developer" in the OpenAI Chat API. Certain models may require one or the other, so use the type that matches the model's requirements.

func NewDeveloperMessageFromParts added in v0.16.6

func NewDeveloperMessageFromParts(parts ...DeveloperContentPart) *DeveloperMessage[[]DeveloperContentPart]

Creates a new developer message object from multiple content parts.

Note that system and developer messages are identical in functionality, but the "system" role was renamed to "developer" in the OpenAI Chat API. Certain models may require one or the other, so use the type that matches the model's requirements.

func (DeveloperMessage[T]) MarshalJSON added in v0.16.6

func (m DeveloperMessage[T]) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the DeveloperMessage object.

func (*DeveloperMessage[T]) Role added in v0.16.6

func (m *DeveloperMessage[T]) Role() string

The role of the author of this message, in this case "developer".

type Embedding

type Embedding struct {

	// The name of the output object type returned by the API.
	// This will always be "embedding".
	Object string `json:"object"`

	// The index of the input text that corresponds to this embedding.
	// Used when requesting embeddings for multiple texts.
	Index int `json:"index"`

	// The vector embedding of the input text.
	Embedding []float32 `json:"embedding"`
}

The output vector embeddings data.

type EmbeddingsModel

type EmbeddingsModel struct {
	// contains filtered or unexported fields
}

Provides input and output types that conform to the OpenAI Embeddings API, as described in the API Reference docs.

func (*EmbeddingsModel) CreateInput

func (m *EmbeddingsModel) CreateInput(content any) (*EmbeddingsModelInput, error)

Creates an input object for the OpenAI Embeddings API.

The content parameter can be any of:

  • A string representing the text to vectorize.
  • A slice of strings representing multiple texts to vectorize.
  • A slice of integers representing pre-tokenized text to vectorize.
  • A slice of slices of integers representing multiple pre-tokenized texts to vectorize.

NOTE: The input content must not exceed the maximum token limit of the model.

type EmbeddingsModelInput

type EmbeddingsModelInput struct {

	// The name of the model to use to generate the embeddings.
	//
	// Must be the exact string expected by the model provider.
	// For example, "text-embedding-3-small".
	Model string `json:"model"`

	// The input content to vectorize.
	Input any `json:"input"`

	// The format for the output embeddings.
	// The default ("") is equivalent to [EncodingFormatFloat], which is currently the only supported format.
	EncodingFormat EncodingFormat `json:"encoding_format,omitempty"`

	// The maximum number of dimensions for the output embeddings.
	// The default (0) indicates that the model's default number of dimensions will be used.
	Dimensions int `json:"dimensions,omitempty"`

	// The user ID to associate with the request, as described in the [documentation].
	// If not specified, the request will be anonymous.
	//
	// [documentation]: https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids
	User string `json:"user,omitempty"`
}

The input object for the OpenAI Embeddings API.

type EmbeddingsModelOutput

type EmbeddingsModelOutput struct {

	// The name of the output object type returned by the API.
	// This will always be "list".
	Object string `json:"object"`

	// The name of the model used to generate the embeddings.
	// In most cases, this will match the requested model field in the input.
	Model string `json:"model"`

	// The usage statistics for the request.
	Usage Usage `json:"usage"`

	// The output vector embeddings data.
	Data []Embedding `json:"data"`
}

The output object for the OpenAI Embeddings API.

type EncodingFormat

type EncodingFormat string

The encoding format for the output embeddings.

const (
	// The output embeddings are encoded as an array of floating-point numbers.
	EncodingFormatFloat EncodingFormat = "float"

	// The output embeddings are encoded as a base64-encoded string,
	// containing a binary representation of an array of floating-point numbers.
	//
	// NOTE: This format is not currently supported.
	EncodingFormatBase64 EncodingFormat = "base64"
)

type FunctionCall

type FunctionCall struct {

	// The name of the function to call.
	Name string `json:"name"`

	// The arguments to call the function with, as generated by the model in JSON format.
	Arguments string `json:"arguments"`
}

A function call object that the model may generate.

type FunctionDefinition

type FunctionDefinition struct {
	// The name of the function to be called.
	//
	// Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
	Name string `json:"name"`

	// An optional description of what the function does, used by the model to choose when and how to call the function.
	Description string `json:"description,omitempty"`

	// Whether to enable strict schema adherence when generating the function call.
	// If set to true, the model will follow the exact schema defined in the parameters field.
	//
	// The default is false.
	//
	// See https://platform.openai.com/docs/guides/function-calling
	//
	// NOTE:
	// In order to guarantee strict schema adherence, disable parallel function calls
	// by setting ParallelToolCalls to false.
	//
	// See https://platform.openai.com/docs/guides/function-calling/parallel-function-calling-and-structured-outputs
	Strict bool `json:"strict,omitempty"`

	// The parameters the function accepts, described as a JSON Schema object.
	//
	// See https://platform.openai.com/docs/guides/function-calling
	Parameters utils.RawJsonString `json:"parameters,omitempty"`
}

The definition of a function that can be called by the model.

type Image added in v0.16.6

type Image struct {

	// The URL of the image.
	Url string `json:"url"`

	// An optional detail string for the image.
	// Can be set to "low", "high", or "auto".
	// The default is "auto".
	Detail string `json:"detail,omitempty"`
}

An image object, used to represent an image in a content part.

type ImageContentPart added in v0.16.6

type ImageContentPart struct {
	// The image information.
	Image Image
}

An image content part.

func NewImageContentPartFromData added in v0.16.6

func NewImageContentPartFromData(data []byte, contentType string, detail ...string) *ImageContentPart

Creates a new image content part from raw image data. The model must support image input for this to work. The contentType parameter must be a valid image MIME type supported by the model, such as "image/jpeg" or "image/png". The detail parameter is optional and can be set to "low", "high", or "auto".

func NewImageContentPartFromUrl added in v0.16.6

func NewImageContentPartFromUrl(url string, detail ...string) *ImageContentPart

Creates a new image content part from a URL. The model must support image input for this to work. The URL will be sent directly to the model. The detail parameter is optional and can be set to "low", "high", or "auto".

func (ImageContentPart) MarshalJSON added in v0.16.6

func (m ImageContentPart) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the ImageContentPart object.

func (*ImageContentPart) Type added in v0.16.6

func (*ImageContentPart) Type() string

The type of this content part, in this case "image_url".

type Logprobs

type Logprobs struct {
	// A list of message content tokens with log probability information.
	Content []LogprobsContent `json:"content"`
}

Log probability information for a choice.

type LogprobsContent

type LogprobsContent struct {
	LogprobsContentObject

	// List of the most likely tokens and their log probability, at this token position.
	// In rare cases, there may be fewer than the number of requested TopLogprobs returned.
	TopLogprobs []LogprobsContentObject `json:"top_logprobs"`
}

Log probability information for a message content token.

type LogprobsContentObject

type LogprobsContentObject struct {
	// The token.
	Token string `json:"token"`

	// The log probability of this token, if it is within the top 20 most likely tokens.
	// Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
	Logprob float64 `json:"logprob"`

	// A list of integers representing the UTF-8 bytes representation of the token.
	//
	// Useful in instances where characters are represented by multiple tokens and their byte
	// representations must be combined to generate the correct text representation.
	// Can be nil if there is no bytes representation for the token.
	Bytes []byte `json:"bytes"`
}

Log probability information for the most likely tokens at a given position.

type Modality added in v0.16.6

type Modality string

A type that represents the modality of the chat.

const (
	// Text modality requests the model to respond with text.
	// This is the default if no other modality is requested.
	ModalityText Modality = "text"

	// Audio modality requests the model to respond with spoken audio.
	// The model and host must support audio output for this to work.
	// Most models that support audio require both text and audio modalities to be specified,
	// but the text will come as a transcript in the audio response.
	ModalityAudio Modality = "audio"
)

type RefusalContentPart added in v0.16.6

type RefusalContentPart struct {
	// The refusal message generated by the model.
	Refusal string
}

A refusal content part.

func NewRefusalContentPart added in v0.16.6

func NewRefusalContentPart(refusal string) *RefusalContentPart

Creates a new refusal content part.

func (RefusalContentPart) MarshalJSON added in v0.16.6

func (m RefusalContentPart) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the RefusalContentPart object.

func (*RefusalContentPart) Type added in v0.16.6

func (*RefusalContentPart) Type() string

The type of this content part, in this case "refusal".

type RequestMessage added in v0.16.6

type RequestMessage interface {
	json.Marshaler

	// The role of the author of this message.
	Role() string
}

An interface to any request message.

type ResponseFormat

type ResponseFormat struct {

	// The type of response format.
	Type string `json:"type"`

	// The JSON schema to use for the response format.
	JsonSchema utils.RawJsonString `json:"json_schema,omitempty"`
}

An object specifying the format that the model must output.

var (
	// Instructs the model to output the response as a plain text string.
	// This is the default response format.
	ResponseFormatText ResponseFormat = ResponseFormat{Type: "text"}

	// Instructs the model to output the response as a JSON object.
	//  - You must also instruct the model to produce JSON yourself via a system or user message.
	//  - Additionally, if you need an array you must ask for an object that wraps the array,
	//    because the model will not reliably produce arrays directly (i.e., there is no "json_array" option).
	ResponseFormatJson ResponseFormat = ResponseFormat{Type: "json_object"}

	// Enables Structured Outputs which guarantees the model will match your supplied JSON schema.
	//
	// See https://platform.openai.com/docs/guides/structured-outputs
	ResponseFormatJsonSchema = func(jsonSchema string) ResponseFormat {
		return ResponseFormat{Type: "json_schema", JsonSchema: utils.RawJsonString(jsonSchema)}
	}
)

type ServiceTier

type ServiceTier string

The OpenAI service tier used to process the request.

const (
	// The OpenAI system will utilize scale tier credits until they are exhausted.
	ServiceTierAuto ServiceTier = "auto"

	// The request will be processed using the default OpenAI service tier with a lower
	// uptime SLA and no latency guarantee.
	ServiceTierDefault ServiceTier = "default"
)

type SystemContentPart added in v0.16.6

type SystemContentPart interface {
	ContentPart
	// contains filtered or unexported methods
}

An interface for a system message content part.

type SystemMessage

type SystemMessage[T string | []SystemContentPart] struct {
	// The content of the message.
	Content T

	// An optional name for the participant.
	// Provides the model information to differentiate between participants of the same role.
	Name string
}

A system message. System messages are used to provide setup instructions to the model.

Note that system and developer messages are identical in functionality, but the "system" role was renamed to "developer" in the OpenAI Chat API. Certain models may require one or the other, so use the type that matches the model's requirements.

func NewSystemMessage

func NewSystemMessage[T string | []SystemContentPart](content T) *SystemMessage[T]

Creates a new system message object.

Note that system and developer messages are identical in functionality, but the "system" role was renamed to "developer" in the OpenAI Chat API. Certain models may require one or the other, so use the type that matches the model's requirements.

func NewSystemMessageFromParts added in v0.16.6

func NewSystemMessageFromParts(parts ...SystemContentPart) *SystemMessage[[]SystemContentPart]

Creates a new system message object from multiple content parts.

Note that system and developer messages are identical in functionality, but the "system" role was renamed to "developer" in the OpenAI Chat API. Certain models may require one or the other, so use the type that matches the model's requirements.

func (SystemMessage[T]) MarshalJSON added in v0.16.6

func (m SystemMessage[T]) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the SystemMessage object.

func (*SystemMessage[T]) Role added in v0.16.6

func (m *SystemMessage[T]) Role() string

The role of the author of this message, in this case "system".

type TextContentPart added in v0.16.6

type TextContentPart struct {
	// The text string.
	Text string
}

A text content part.

func NewTextContentPart added in v0.16.6

func NewTextContentPart(text string) *TextContentPart

Creates a new text content part.

func (TextContentPart) MarshalJSON added in v0.16.6

func (m TextContentPart) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the TextContentPart object.

func (*TextContentPart) Type added in v0.16.6

func (*TextContentPart) Type() string

The type of this content part, in this case "text".

type Tool

type Tool struct {

	// The type of the tool. Currently, only `"function"` is supported.
	Type string `json:"type"`

	// The definition of the function.
	Function FunctionDefinition `json:"function"`
	// contains filtered or unexported fields
}

A tool object that the model may call.

func NewToolForFunction added in v0.16.6

func NewToolForFunction(name, description string) Tool

Creates a new tool object for a function.

func (Tool) WithParameter added in v0.16.6

func (t Tool) WithParameter(name, jsonSchemaType, description string) Tool

Adds a parameter to the function used by the tool. Note that the type must be a valid JSON Schema type, not a Go type. For example, use "integer", not "int32".

func (Tool) WithParametersSchema added in v0.16.6

func (t Tool) WithParametersSchema(jsonSchema string) Tool

Sets the JSON Schema for the parameters of the function used by the tool. Use this for defining complex parameters. Prefer WithParameter for adding simple parameters, which will generate the schema for you automatically.

type ToolCall

type ToolCall struct {

	// The ID of the tool call.
	Id string `json:"id"`

	// The type of the tool. Currently, only `"function"` is supported.
	Type string `json:"type"`

	// The function that the model called.
	Function FunctionCall `json:"function"`
}

A tool call object that the model may generate.

type ToolChoice

type ToolChoice struct {

	// The type of tool to call.
	Type string `json:"type"`

	// The function to call.
	Function struct {
		// The name of the function to call.
		Name string `json:"name"`
	} `json:"function,omitempty"`
}

An object specifying which tool the model should call.

var (
	// Directs the model not to call any tool and instead generate a message.
	ToolChoiceNone ToolChoice = ToolChoice{Type: "none"}

	// Directs the model to pick between generating a message or calling one or more tools.
	ToolChoiceAuto ToolChoice = ToolChoice{Type: "auto"}

	// Requires the model to call one or more tools.
	ToolChoiceRequired ToolChoice = ToolChoice{Type: "required"}

	// Forces the model to call a specific tool.
	ToolChoiceFunction = func(name string) ToolChoice {
		c := ToolChoice{Type: "function"}
		c.Function.Name = name
		return c
	}
)

func (ToolChoice) MarshalJSON

func (tc ToolChoice) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the ToolChoice object.

type ToolMessage

type ToolMessage[T string | []ToolMessageContentPart] struct {

	// The content of the message.
	Content T

	// The tool call that this message is responding to.
	ToolCallId string
}

A tool message object.

func NewToolMessage

func NewToolMessage(content any, toolCallId string) *ToolMessage[string]

Creates a new tool message object. If the content is a string, it will be passed through unaltered. If the content is an error object, its error message will be used as the content. Otherwise, the object will be JSON serialized and sent as a JSON string. This function will panic if the object cannot be serialized to JSON.

func NewToolMessageFromParts added in v0.16.6

func NewToolMessageFromParts(toolCallId string, parts ...ToolMessageContentPart) *ToolMessage[[]ToolMessageContentPart]

Creates a new tool message object from multiple content parts.

func (ToolMessage[T]) MarshalJSON added in v0.16.6

func (m ToolMessage[T]) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the ToolMessage object.

func (*ToolMessage[T]) Role added in v0.16.6

func (m *ToolMessage[T]) Role() string

The role of the author of this message, in this case "tool".

type ToolMessageContentPart added in v0.16.6

type ToolMessageContentPart interface {
	ContentPart
	// contains filtered or unexported methods
}

An interface for a tool content part.

type Usage

type Usage struct {

	// The number of completion tokens used.
	CompletionTokens int `json:"completion_tokens"`

	// The number of prompt tokens used.
	PromptTokens int `json:"prompt_tokens"`

	// The total number of tokens used.
	TotalTokens int `json:"total_tokens"`
}

The usage statistics returned by the OpenAI API.

type UserMessage

type UserMessage[T string | []UserMessageContentPart] struct {

	// The content of the message.
	Content T

	// An optional name for the participant.
	// Provides the model information to differentiate between participants of the same role.
	Name string
}

A user message object.

func NewUserMessage

func NewUserMessage[T string | []UserMessageContentPart](content T) *UserMessage[T]

Creates a new user message object.

func NewUserMessageFromParts added in v0.16.6

func NewUserMessageFromParts(parts ...UserMessageContentPart) *UserMessage[[]UserMessageContentPart]

Creates a new user message object from multiple content parts.

func (UserMessage[T]) MarshalJSON added in v0.16.6

func (m UserMessage[T]) MarshalJSON() ([]byte, error)

Implements the json.Marshaler interface to serialize the UserMessage object.

func (*UserMessage[T]) Role added in v0.16.6

func (m *UserMessage[T]) Role() string

The role of the author of this message, in this case "user".

type UserMessageContentPart added in v0.16.6

type UserMessageContentPart interface {
	ContentPart
	// contains filtered or unexported methods
}

An interface for a user message content part.
