openai

package
v0.0.0-...-80055c2
Published: May 15, 2023 License: MIT Imports: 6 Imported by: 0

Documentation


Constants

const (
	// The user messages help instruct the assistant. They can be generated by
	// the end users of an application, or set by a developer as an
	// instruction.
	UserRole aiRole = "user"

	// The system message helps set the behavior of the assistant. E.g. the
	// assistant can be instructed with "You are a helpful assistant".
	SystemRole aiRole = "system"

	// The assistant messages help store prior responses. They can also be
	// written by a developer to help give examples of desired behavior.
	AssistantRole aiRole = "assistant"
)
const (
	// GPT3_5Turbo - The most capable GPT-3.5 model and optimized for chat at
	// 1/10th the cost of text-davinci-003. Will be updated with the latest
	// model iteration.
	// gpt-3.5-turbo is recommended over the other GPT-3.5 models because it
	// has the lowest cost.
	GPT3_5Turbo aiModel = "gpt-3.5-turbo"
)

Variables

This section is empty.

Functions

This section is empty.

Types

type ChatCompletionResponse

type ChatCompletionResponse struct {
	Created time.Time
	Model   aiModel

	// Cost is the cost for the request in dollars.
	Cost float64

	Messages []string
}

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client is the OpenAI API client.

func NewClient

func NewClient(httpClient Doer, apiKey string) *Client

NewClient creates a new OpenAI API client.
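
A minimal usage sketch, assuming the API key is stored in an OPENAI_API_KEY environment variable; the import path below is a placeholder because the full module path is not shown on this page:

package main

import (
	"net/http"
	"os"

	openai "example.com/openai" // placeholder: substitute the real module path
)

func main() {
	// *http.Client satisfies the Doer interface expected by NewClient.
	client := openai.NewClient(http.DefaultClient, os.Getenv("OPENAI_API_KEY"))
	_ = client
}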

func (*Client) ChatCompletionRequest

func (c *Client) ChatCompletionRequest(ctx context.Context, messages []Message, model aiModel, temperature float32) (ChatCompletionResponse, error)

ChatCompletionRequest makes a request to the OpenAI chat completion API.

temperature controls how deterministic the model is when generating a response. It must be a value between 0 and 1 (inclusive). A lower temperature makes completions more focused and deterministic, while a higher temperature makes them more diverse. See more about temperature here: https://platform.openai.com/docs/quickstart/adjust-your-settings
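
A hedged sketch of a chat completion call, continuing from the client created above; the prompt text and the 0.2 temperature are illustrative, and the context, fmt, and openai imports are assumed:

func ask(ctx context.Context, client *openai.Client) error {
	messages := []openai.Message{
		{Role: openai.SystemRole, Content: "You are a helpful assistant."},
		{Role: openai.UserRole, Content: "Explain Go interfaces in one sentence."},
	}

	// 0.2 keeps completions close to deterministic; valid values are 0 to 1 inclusive.
	resp, err := client.ChatCompletionRequest(ctx, messages, openai.GPT3_5Turbo, 0.2)
	if err != nil {
		return err
	}

	// The response carries the generated messages plus the model, creation time,
	// and the request cost in dollars.
	for _, m := range resp.Messages {
		fmt.Println(m)
	}
	fmt.Printf("model=%s cost=$%.6f created=%s\n", resp.Model, resp.Cost, resp.Created)
	return nil
}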

type Coster

type Coster interface {
	// Cost returns the cost in dollars of the request based on the total
	// tokens used.
	Cost(totalTokens int) float64
}

Coster is an interface that models can implement to calculate the cost of a request based on the total tokens used.
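
A minimal sketch of a type satisfying Coster; the flat per-token price is illustrative and not taken from this package:

// flatRateModel is a hypothetical model that charges a fixed price per token.
type flatRateModel struct {
	pricePerToken float64 // dollars per token
}

// Cost implements Coster by multiplying the total tokens by the per-token price.
func (m flatRateModel) Cost(totalTokens int) float64 {
	return float64(totalTokens) * m.pricePerToken
}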

type Doer

type Doer interface {
	Do(req *http.Request) (*http.Response, error)
}
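
Doer abstracts the HTTP client used by Client; *http.Client satisfies it. A sketch of a custom Doer that sets a header before delegating to an underlying client (the type and field names are illustrative):

// headerDoer is a hypothetical Doer that adds a fixed header to every request.
type headerDoer struct {
	next   *http.Client
	header string
	value  string
}

// Do sets the header and delegates to the wrapped *http.Client.
func (d headerDoer) Do(req *http.Request) (*http.Response, error) {
	req.Header.Set(d.header, d.value)
	return d.next.Do(req)
}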

type Message

type Message struct {
	// Role is the role of the message.
	Role aiRole `json:"role"`
	// Content is the message to send to the OpenAI model.
	Content string `json:"content"`
}

Message is used to interact with the OpenAI model.
