Documentation
Index
Constants
const (
	// The user messages help instruct the assistant. They can be generated by
	// the end users of an application, or set by a developer as an
	// instruction.
	UserRole aiRole = "user"

	// The system message helps set the behavior of the assistant. E.g. the
	// assistant can be instructed with "You are a helpful assistant".
	SystemRole aiRole = "system"

	// The assistant messages help store prior responses. They can also be
	// written by a developer to help give examples of desired behavior.
	AssistantRole aiRole = "assistant"
)
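As a sketch of how these roles typically combine into a conversation. Note the local `message` struct below is an illustrative assumption: the package's actual Message type is not shown on this page and may have different fields.

```go
package main

import "fmt"

// message mirrors the shape a chat message typically has. This is an
// assumption for illustration; the package's real Message type may differ.
type message struct {
	Role    string // "system", "user", or "assistant"
	Content string
}

func main() {
	// A typical conversation: a system message sets behavior, user messages
	// carry instructions, and assistant messages store prior responses.
	conversation := []message{
		{Role: "system", Content: "You are a helpful assistant."},
		{Role: "user", Content: "What is the capital of France?"},
		{Role: "assistant", Content: "The capital of France is Paris."},
		{Role: "user", Content: "And of Italy?"},
	}
	for _, m := range conversation {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}
```

Including a prior assistant turn, as in the third element, is how a developer can seed the model with examples of desired behavior.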
const (
	// GPT3_5Turbo - The most capable GPT-3.5 model, optimized for chat at
	// 1/10th the cost of text-davinci-003. Will be updated with the latest
	// model iteration.
	// gpt-3.5-turbo is recommended over the other GPT-3.5 models due to its
	// (lowest) cost.
	GPT3_5Turbo aiModel = "gpt-3.5-turbo"
)
Variables
This section is empty.
Functions
This section is empty.
Types
type ChatCompletionResponse
type Client
type Client struct {
// contains filtered or unexported fields
}
Client is the OpenAI API client.
func (*Client) ChatCompletionRequest
func (c *Client) ChatCompletionRequest(ctx context.Context, messages []Message, model aiModel, temperature float32) (ChatCompletionResponse, error)
ChatCompletionRequest performs a request against the OpenAI chat completion API.
temperature controls how deterministic the model is when generating a response. It must be a value between 0 and 1 (inclusive). Lower temperatures make completions more accurate and deterministic; higher temperatures make them more diverse. See more about temperature here: https://platform.openai.com/docs/quickstart/adjust-your-settings
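The 0 to 1 bound can be checked before a request is made. The helper below is a minimal, self-contained sketch of such validation; it is hypothetical and not part of this package, which may or may not validate temperature itself.

```go
package main

import (
	"errors"
	"fmt"
)

// validateTemperature is a hypothetical helper enforcing the documented
// constraint that temperature must lie in [0, 1] inclusive.
func validateTemperature(t float32) error {
	if t < 0 || t > 1 {
		return errors.New("temperature must be between 0 and 1 (inclusive)")
	}
	return nil
}

func main() {
	// A low value such as 0.2 favors deterministic completions.
	fmt.Println(validateTemperature(0.2))
	// Values outside [0, 1] are rejected.
	fmt.Println(validateTemperature(1.5))
}
```

A caller would run this check on the temperature argument before invoking ChatCompletionRequest, turning an API-side rejection into an immediate local error.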