jpf

package module
v0.8.2
Published: Oct 14, 2025 License: MIT Imports: 22 Imported by: 2

README


jpf - A Lightweight Framework for AI-Powered Applications

jpf is a Go library for building lightweight AI-powered applications. It provides essential building blocks and robust LLM interaction interfaces, enabling you to craft custom solutions without the bloat.

jpf is aimed at using AI as a tool, not as a chatbot (this is not to say you cannot use it to build a chatbot, however no framework is provided for this yet). It focuses on adding AI features locally rather than relying too heavily on external APIs - this makes the package particularly flexible when switching models or providers.

Features

  • Retry and Feedback Handling: Resilient mechanisms for retrying tasks and incorporating feedback into interactions.
  • Customizable Models: Seamlessly integrate LLMs, including reasoning chains and hybrid models.
  • Token Usage Tracking: Stay informed of API token consumption for cost-effective development.
  • Easy-to-use Caching: Reduce the calls made to models by composing a caching layer onto an existing model.
  • Out-of-the-box Logging: Simply add logging to your models, helping you track down issues.

Installation

Install jpf in your Go project via:

go get github.com/JoshPattman/jpf

License

This project is licensed under the MIT License. See the LICENSE file for details.

Contributing

Contributions are welcome! Open an issue or submit a pull request on GitHub.

FAQ

  • Will streaming (token-by-token) ever be supported?
    • No. This framework is designed to be more of a back-end tool, and character streaming would add extra complexity that most applications of this package would not benefit from (in my opinion).
  • Are there any pre-built formatters / parsers?
    • There are a few built-in implementations; however, the aim of this package is to provide the framework, not the functionality.
    • If you have ideas for useful functions, feel free to raise them in an issue, and if enough arise, I can make a new repo for them.
  • Where are the agents?
    • This package tries to simplify single calls to LLMs, which is a level below what agents do.
    • I have plans to build an agent framework on top of this package, but I would like to build a strong foundation first.
  • Why does this not support MCP tools on the OpenAI API / Tool calling / Other advanced API feature?
    • The aim of this package is to leave the advanced stuff, like tool use, to you to figure out. IMO this allows you to do cooler, more flexible things (like a tree of agents).
    • Also, tool calls / MCP tools lock you in to one API or another to a degree, more so than just using the chat completions endpoint.
    • I might consider adding them in the future, but for now I think that implementing your own tool calling is best.
    • As a rule of thumb, I will add API features that fiddle with the log probs (e.g. structured output, temperature, top p, ...), but I will not add something if a model could not achieve the same result with perfect prompting.
  • I want to change my model's temperature/structured output/output tokens/... after I have built it!
    • The intention is to give functions that need to use an LLM a builder function instead of a built model object. This way, you can call the builder multiple times with different parameters.
    • IMO, a model only exposing its Respond function is cleanest and simplest.

Author

Developed by Josh Pattman. Learn more at GitHub.

Core Concepts

  • jpf aims to separate the various components of building a robust interaction with an LLM for three main reasons:
    • Reusability: Build up a set of components you find useful, and write less repeated code.
    • Flexibility: Write code in a way that easily allows you to extend the LLM's capabilities - for example, you can add a cache to an LLM without changing a single line of business logic.
    • Testability: Each component being an atomic piece of logic allows you to unit test and mock each and every piece of logic in isolation.
  • Below are the core components you will need to understand to write code with jpf:
Model
  • Models are the core component of jpf - they wrap an LLM with some additional logic in a consistent interface.
// Model defines an interface to an LLM.
type Model interface {
	// Responds to a set of input messages.
	Respond([]Message) (ModelResponse, error)
}

type ModelResponse struct {
	// Extra messages that are not the final response,
	// but were used to build up the final response.
	// For example, reasoning messages.
	AuxilliaryMessages []Message
	// The primary response to the user's query.
	// Usually the only response that matters.
	PrimaryMessage Message
	// The usage of making this call.
	// This may be the sum of multiple LLM calls.
	Usage Usage
}

// Message defines a text message to/from an LLM.
type Message struct {
	Role    Role
	Content string
	Images  []ImageAttachment
}
  • Models are built using composition - you can produce a very powerful model by stacking up multiple less powerful models together.
    • The power of this approach is that you can abstract a lot of the complexity away from your client code, allowing it to focus primarily on business logic.
// All model constructors in jpf return the Model interface,
// so we can re-use our variable as we build it up.
var model jpf.Model

// Choose, based on a boolean variable, whether to use Gemini or OpenAI.
// If using Gemini, we will scale the temperature down a bit (NOT useful - just for demonstration).
if useGemini {
    model = jpf.NewGeminiModel(apiKey, modelName, jpf.WithTemperature{X: temperature*0.8})
} else {
    model = jpf.NewOpenAIModel(apiKey, modelName, jpf.WithTemperature{X: temperature})
}

// Add retrying on API fails to the model.
// This will retry calling the child model multiple times upon an error.
if retries > 0 {
    model = jpf.NewRetryModel(model, retries, jpf.WithDelay{X: time.Second})
}

// Add cache to the model.
// This will skip calling out to the model if the same messages are requested a second time.
if cache != nil {
    model = jpf.NewCachedModel(model, cache)
}

// We now have a model that may be Gemini or OpenAI, with retrying and caching.
// However, the client code does not need to know about any of this - to it we are still just calling a model!
Message Encoder
  • A MessageEncoder provides an interface to take a specific typed object and produce some messages for the LLM.
    • It does not actually make a call to the Model, and it does not decode the response.
// MessageEncoder encodes a structured piece of data into a set of messages for an LLM.
type MessageEncoder[T any] interface {
	BuildInputMessages(T) ([]Message, error)
}
  • For more complex tasks you may choose to implement this yourself; however, there are some useful encoders built in (or you can use a combination of both):
// NewRawStringMessageEncoder creates a MessageEncoder that encodes a system prompt and user input as raw string messages.
func NewRawStringMessageEncoder(systemPrompt string) MessageEncoder[string] {...}

// NewTemplateMessageEncoder creates a MessageEncoder that uses Go's text/template for formatting messages.
// It accepts templates for both system and user messages, allowing dynamic content insertion.
func NewTemplateMessageEncoder[T any](systemTemplate, userTemplate string) MessageEncoder[T] {...}

// Create a new message encoder that appends the results of running each message encoder sequentially.
// Useful, for example, to have a templating system / user message encoder, and a custom agent history message encoder after.
func NewSequentialMessageEncoder[T any](msgEncs ...MessageEncoder[T]) MessageEncoder[T] {...}
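  • As a minimal sketch (the input type and prompt text below are illustrative, not part of jpf), a template encoder for a hypothetical summarisation task might look like this:
// Hypothetical input type for the example - not part of jpf.
type summariseInput struct {
	Document string
}

// Encode a fixed system prompt plus a templated user message.
enc := jpf.NewTemplateMessageEncoder[summariseInput](
	"You are a concise summariser.",
	"Summarise the following document:\n{{.Document}}",
)

msgs, err := enc.BuildInputMessages(summariseInput{Document: "..."})
if err != nil {
	// handle the encoding error
}
// msgs is the []jpf.Message that would be sent to a Model (usually via a MapFunc).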
Response Decoder
  • A ResponseDecoder parses the output of an LLM into structured data.
  • As with message encoders, they do not make any LLM calls.
// ResponseDecoder converts an LLM response into a structured piece of data.
// When the LLM response is invalid, it should return ErrInvalidResponse (or an error joined on that).
type ResponseDecoder[T any] interface {
	ParseResponseText(string) (T, error)
}
  • You may choose to implement your own response decoder; however, in my experience a JSON object is usually a sufficient output format.
  • When an error in the response format is detected, the response decoder must return an error that wraps ErrInvalidResponse somewhere in its chain (this is explained in the Map Func section).
  • There are some pre-defined response decoders included with jpf:
// NewRawStringResponseDecoder creates a ResponseDecoder that returns the response as a raw string without modification.
func NewRawStringResponseDecoder() ResponseDecoder[string] {...}

// NewJsonResponseDecoder creates a ResponseDecoder that tries to parse a json object from the response.
// It can ONLY parse json objects with an OBJECT as top level (i.e. it cannot parse a list directly).
func NewJsonResponseDecoder[T any]() ResponseDecoder[T] {...}

// Wrap an existing response decoder with one that takes only the part of interest of the response into account.
// The part of interest is determined by the substring function.
// If an error is detected when getting the substring, ErrInvalidResponse is raised.
func NewSubstringResponseDecoder[T any](decoder ResponseDecoder[T], substring func(string) (string, error)) ResponseDecoder[T] {...}

// Creates a response decoder that wraps the provided one,
// but then performs an extra validation step on the parsed response.
// If an error is found during validation, the error is wrapped with ErrInvalidResponse and returned.
func NewValidatingResponseDecoder[T any](decoder ResponseDecoder[T], validate func(T) error) ResponseDecoder[T] {...}
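  • For example, a sketch (the output struct and validation rule here are illustrative only) of composing the JSON decoder with an extra validation step:
// Hypothetical output type for the example - not part of jpf.
type summary struct {
	Title  string   `json:"title"`
	Points []string `json:"points"`
}

// Parse a JSON object from the response, then validate it.
// A validation failure is wrapped with ErrInvalidResponse, so a
// feedback map func can send the problem back to the LLM.
dec := jpf.NewValidatingResponseDecoder(
	jpf.NewJsonResponseDecoder[summary](),
	func(s summary) error {
		if len(s.Points) == 0 {
			return errors.New("expected at least one point")
		}
		return nil
	},
)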
Map Func
  • A MapFunc is a collection of a MessageEncoder, ResponseDecoder, Model, and some additional logic.
  • Your business logic should only ever interact with LLMs through a Map Func.
  • It is a very generic interface, but it is intended to only ever be used for LLM-based functionality.
// MapFunc transforms input of type T into output of type U using an LLM.
// It handles the encoding of input, interaction with the LLM, and decoding of output.
type MapFunc[T, U any] interface {
	Call(T) (U, Usage, error)
}
  • It is not really expected that users will implement their own Map Funcs, but that is absolutely possible.
  • jpf ships with a few built-in Map Funcs:
// NewOneShotMapFunc creates a MapFunc that first runs the encoder, then the model, finally parsing the response with the decoder.
func NewOneShotMapFunc[T, U any](enc MessageEncoder[T], pars ResponseDecoder[U], model Model) MapFunc[T, U] {...}

// NewFeedbackMapFunc creates a MapFunc that first runs the encoder, then the model, finally parsing the response with the decoder.
// However, it adds feedback to the conversation when errors are detected.
// It will only add to the conversation if the error returned from the parser is an ErrInvalidResponse (using errors.Is).
func NewFeedbackMapFunc[T, U any](
	enc MessageEncoder[T],
	pars ResponseDecoder[U],
	fed FeedbackGenerator,
	model Model,
	feedbackRole Role,
	maxRetries int,
) MapFunc[T, U] {...}

// Creates a map func that first tries to ask the first model,
// and if that produces an invalid format will try to ask the next models
// until a valid format is found.
// This is useful, for example, to try a second time with a model that overwrites the cache.
func NewModelFallbackOneShotMapFunc[T, U any](
	enc MessageEncoder[T],
	dec ResponseDecoder[U],
	models ...Model,
) MapFunc[T, U] {...}
  • Notice in the above, we have introduced a second place for retries to occur - this is intentional.
    • API-level errors should be retried at the Model level - these are errors that are not the fault of the LLM.
    • LLM response errors should be retried at the MapFunc level - these are errors where the LLM has responded with an invalid response, and we would like to tell it what it did wrong and ask again.
  • However, if you choose not to use these higher-level retries, you can simply use the one-shot map func.
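  • As a sketch, re-using the illustrative encoder and decoder from above, wiring everything into a one-shot map func looks like this:
// Compose the encoder, decoder and model into a single typed function.
summarise := jpf.NewOneShotMapFunc(enc, dec, model)

// Business logic now only deals in typed input and output.
out, usage, err := summarise.Call(summariseInput{Document: "..."})
if err != nil {
	// handle the error
}
fmt.Println(out.Title, usage.InputTokens)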

Documentation

Index

Constants

This section is empty.

Variables

var (
	ErrInvalidResponse = errors.New("llm produced an invalid response")
)

Functions

func HashMessages added in v0.6.0

func HashMessages(salt string, inputs []Message) string

func SubstringAfter added in v0.8.0

func SubstringAfter(split string) func(string) (string, error)

Create a new substringer that returns the last block of text after `split`. It will never error.
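As a sketch (the marker string is illustrative), this pairs naturally with NewSubstringResponseDecoder to decode only the text after a final answer marker:

// Decode only the text that follows the last "FINAL ANSWER:" marker.
dec := jpf.NewSubstringResponseDecoder(
	jpf.NewRawStringResponseDecoder(),
	jpf.SubstringAfter("FINAL ANSWER:"),
)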

func TransformByPrefix added in v0.8.0

func TransformByPrefix(prefix string) func(string) string

Types

type CachedModelOpt added in v0.8.2

type CachedModelOpt interface {
	// contains filtered or unexported methods
}

type ConcurrentLimiter added in v0.7.0

type ConcurrentLimiter chan struct{}

func NewMaxConcurrentLimiter added in v0.7.0

func NewMaxConcurrentLimiter(n int) ConcurrentLimiter

NewMaxConcurrentLimiter creates a ConcurrentLimiter that allows up to n concurrent operations. The limiter is implemented as a buffered channel with capacity n.

func NewOneConcurrentLimiter added in v0.7.0

func NewOneConcurrentLimiter() ConcurrentLimiter

NewOneConcurrentLimiter creates a ConcurrentLimiter that allows only one operation at a time. This is a convenience function equivalent to NewMaxConcurrentLimiter(1).

type FakeReasoningModelOpt added in v0.8.0

type FakeReasoningModelOpt interface {
	// contains filtered or unexported methods
}

type FeedbackGenerator added in v0.6.0

type FeedbackGenerator interface {
	FormatFeedback(Message, error) string
}

FeedbackGenerator takes an error and converts it to a piece of text feedback to send to the LLM.

func NewRawMessageFeedbackGenerator added in v0.6.0

func NewRawMessageFeedbackGenerator() FeedbackGenerator

NewRawMessageFeedbackGenerator creates a FeedbackGenerator that formats feedback by returning the error message as a string.

type GeminiModelOpt added in v0.8.0

type GeminiModelOpt interface {
	// contains filtered or unexported methods
}

type ImageAttachment added in v0.7.0

type ImageAttachment struct {
	Source image.Image
}

func (*ImageAttachment) ToBase64Encoded added in v0.7.0

func (i *ImageAttachment) ToBase64Encoded(useCompression bool) (string, error)

type MapFunc added in v0.6.0

type MapFunc[T, U any] interface {
	Call(T) (U, Usage, error)
}

MapFunc transforms input of type T into output of type U using an LLM. It handles the encoding of input, interaction with the LLM, and decoding of output.

func NewFeedbackMapFunc added in v0.6.0

func NewFeedbackMapFunc[T, U any](
	enc MessageEncoder[T],
	pars ResponseDecoder[U],
	fed FeedbackGenerator,
	model Model,
	feedbackRole Role,
	maxRetries int,
) MapFunc[T, U]

NewFeedbackMapFunc creates a MapFunc that first runs the encoder, then the model, finally parsing the response with the decoder. However, it adds feedback to the conversation when errors are detected. It will only add to the conversation if the error returned from the parser is an ErrInvalidResponse (using errors.Is).
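A sketch of typical usage (the role and retry count are illustrative), assuming enc, dec and model have already been built:

mf := jpf.NewFeedbackMapFunc(
	enc,                                  // MessageEncoder[T]
	dec,                                  // ResponseDecoder[U]
	jpf.NewRawMessageFeedbackGenerator(), // feed the decode error back verbatim
	model,
	jpf.UserRole, // role used for the feedback messages
	3,            // at most 3 corrective retries
)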

func NewModelFallbackOneShotMapFunc added in v0.8.1

func NewModelFallbackOneShotMapFunc[T, U any](
	enc MessageEncoder[T],
	dec ResponseDecoder[U],
	models ...Model,
) MapFunc[T, U]

Creates a map func that first tries to ask the first model, and if that produces an invalid format will try to ask the next models until a valid format is found. This is useful, for example, to try a second time with a model that overwrites the cache.

func NewOneShotMapFunc added in v0.6.0

func NewOneShotMapFunc[T, U any](
	enc MessageEncoder[T],
	dec ResponseDecoder[U],
	model Model,
) MapFunc[T, U]

NewOneShotMapFunc creates a MapFunc that first runs the encoder, then the model, finally parsing the response with the decoder.

type Message

type Message struct {
	Role    Role
	Content string
	Images  []ImageAttachment
}

Message defines a text message to/from an LLM.

type MessageEncoder added in v0.6.0

type MessageEncoder[T any] interface {
	BuildInputMessages(T) ([]Message, error)
}

MessageEncoder encodes a structured piece of data into a set of messages for an LLM.

func NewRawStringMessageEncoder added in v0.6.0

func NewRawStringMessageEncoder(systemPrompt string) MessageEncoder[string]

NewRawStringMessageEncoder creates a MessageEncoder that encodes a system prompt and user input as raw string messages.

func NewSequentialMessageEncoder added in v0.8.0

func NewSequentialMessageEncoder[T any](msgEncs ...MessageEncoder[T]) MessageEncoder[T]

Create a new message encoder that appends the results of running each message encoder sequentially. Useful, for example, to have a templating system / user message encoder, and a custom agent history message encoder after.

func NewTemplateMessageEncoder added in v0.7.0

func NewTemplateMessageEncoder[T any](systemTemplate, userTemplate string) MessageEncoder[T]

NewTemplateMessageEncoder creates a MessageEncoder that uses Go's text/template for formatting messages. It accepts templates for both system and user messages, allowing dynamic content insertion. The data parameter to BuildInputMessages should be a struct or map with fields accessible to the template. If either systemTemplate or userTemplate is an empty string, that message will be skipped.

type Model

type Model interface {
	// Responds to a set of input messages.
	Respond([]Message) (ModelResponse, error)
}

Model defines an interface to an LLM.

func NewCachedModel added in v0.7.0

func NewCachedModel(model Model, cache ModelResponseCache, opts ...CachedModelOpt) Model

NewCachedModel wraps a Model with response caching functionality. It stores responses in the provided ModelResponseCache implementation, returning cached results for identical input messages and salts to avoid redundant model calls.
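For example, a minimal sketch of adding an in-memory cache in front of an existing model:

// Identical input messages will now be answered from the cache
// instead of calling the underlying model again.
model = jpf.NewCachedModel(model, jpf.NewInMemoryCache())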

func NewConcurrentLimitedModel added in v0.7.0

func NewConcurrentLimitedModel(model Model, limiter ConcurrentLimiter) Model

NewConcurrentLimitedModel wraps a Model with concurrency control. It ensures that only a limited number of concurrent calls can be made to the underlying model, using the provided ConcurrentLimiter to manage access.
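As a sketch (the limit of 4 is illustrative; extra callers presumably wait for a free slot):

// Allow at most 4 concurrent calls to the wrapped model.
limiter := jpf.NewMaxConcurrentLimiter(4)
model = jpf.NewConcurrentLimitedModel(model, limiter)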

func NewFakeReasoningModel

func NewFakeReasoningModel(reasoner Model, answerer Model, opts ...FakeReasoningModelOpt) Model

NewFakeReasoningModel creates a model that uses two underlying models to simulate reasoning. It first calls the reasoner model to generate reasoning about the input messages, then passes that reasoning along with the original messages to the answerer model. The reasoning is included as a ReasoningRole message in the auxiliary messages output. Optional parameters allow customization of the reasoning prompt.

func NewGeminiModel added in v0.8.0

func NewGeminiModel(key, modelName string, opts ...GeminiModelOpt) Model

NewGeminiModel creates a Model that uses the Google Gemini API. It requires an API key and model name, with optional configuration via variadic options.

func NewLoggingModel added in v0.7.0

func NewLoggingModel(model Model, logger ModelLogger) Model

NewLoggingModel wraps a Model with logging functionality. It logs all interactions with the model using the provided ModelLogger. Each model call is logged with input messages, output messages, usage statistics, and timing information.
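As a sketch, this can be wired to the standard library's log/slog package, since slog.Info matches the func(string, ...any) shape expected by NewSlogModelLogger (false disables the spammy per-message logging):

// Log each model call via slog, without dumping full message contents.
model = jpf.NewLoggingModel(model, jpf.NewSlogModelLogger(slog.Info, false))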

func NewOpenAIModel added in v0.7.0

func NewOpenAIModel(key, modelName string, opts ...OpenAIModelOpt) Model

NewOpenAIModel creates a Model that uses the OpenAI API. It requires an API key and model name, with optional configuration via variadic options.

func NewRetryChainModel added in v0.8.2

func NewRetryChainModel(models []Model) Model

NewRetryChainModel creates a Model that tries a list of models in order, returning the result from the first one that doesn't fail. If all models fail, it returns a joined error containing all the errors.

func NewRetryModel

func NewRetryModel(model Model, maxRetries int, opts ...RetryModelOpt) Model

NewRetryModel wraps a Model with retry functionality. If the underlying model returns an error, this wrapper will retry the operation up to a configurable number of times with an optional delay between retries.

func NewUsageCountingModel added in v0.7.0

func NewUsageCountingModel(model Model, counter *UsageCounter) Model

NewUsageCountingModel wraps a Model with token usage tracking functionality. It aggregates token usage statistics in the provided UsageCounter, which allows monitoring total token consumption across multiple model calls.
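A minimal sketch of tracking total token consumption across calls:

counter := jpf.NewUsageCounter()
model = jpf.NewUsageCountingModel(model, counter)

// ... make some calls through model ...

total := counter.Get()
fmt.Printf("tokens in/out: %d/%d\n", total.InputTokens, total.OutputTokens)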

type ModelLogger added in v0.7.0

type ModelLogger interface {
	ModelLog(ModelLoggingInfo) error
}

ModelLogger specifies a method of logging a call to a model.

func NewJsonModelLogger added in v0.7.0

func NewJsonModelLogger(to io.Writer) ModelLogger

NewJsonModelLogger creates a ModelLogger that outputs logs in JSON format. The logs are written to the provided io.Writer, with each log entry being a JSON object containing the model interaction details.

func NewSlogModelLogger added in v0.8.0

func NewSlogModelLogger(logFunc func(string, ...any), logMessages bool) ModelLogger

Logs calls made to the model to a slog-style logging function. Can optionally log the model messages too (this is very spammy).

type ModelLoggingInfo added in v0.6.0

type ModelLoggingInfo struct {
	Messages             []Message
	ResponseAuxMessages  []Message
	ResponseFinalMessage Message
	Usage                Usage
	Err                  error
	Duration             time.Duration
}

ModelLoggingInfo contains all information about a model interaction to be logged. It includes input messages, output messages, usage statistics, and any error that occurred.

type ModelResponse added in v0.8.0

type ModelResponse struct {
	// Extra messages that are not the final response,
	// but were used to build up the final response.
	// For example, reasoning messages.
	AuxilliaryMessages []Message
	// The primary response to the user's query.
	// Usually the only response that matters.
	PrimaryMessage Message
	// The usage of making this call.
	// This may be the sum of multiple LLM calls.
	Usage Usage
}

func (ModelResponse) IncludingUsage added in v0.8.0

func (r ModelResponse) IncludingUsage(u Usage) ModelResponse

Utility to include another usage object in this response object

func (ModelResponse) OnlyUsage added in v0.8.0

func (r ModelResponse) OnlyUsage() ModelResponse

Utility to allow you to return the usage but 0 value messages when an error occurs.

type ModelResponseCache added in v0.6.0

type ModelResponseCache interface {
	GetCachedResponse(salt string, inputs []Message) (bool, []Message, Message, error)
	SetCachedResponse(salt string, inputs []Message, aux []Message, out Message) error
}

func NewInMemoryCache added in v0.6.0

func NewInMemoryCache() ModelResponseCache

NewInMemoryCache creates an in-memory implementation of ModelResponseCache. It stores model responses in memory using a hash of the input messages as a key.

func NewSQLCache added in v0.7.3

func NewSQLCache(db *sql.DB) (ModelResponseCache, error)

type OpenAIModelOpt added in v0.8.0

type OpenAIModelOpt interface {
	// contains filtered or unexported methods
}

type ReasoningEffort

type ReasoningEffort uint8

ReasoningEffort defines how hard a reasoning model should think.

const (
	LowReasoning ReasoningEffort = iota
	MediumReasoning
	HighReasoning
)

type ResponseDecoder added in v0.6.0

type ResponseDecoder[T any] interface {
	ParseResponseText(string) (T, error)
}

ResponseDecoder converts an LLM response into a structured piece of data. When the LLM response is invalid, it should return ErrInvalidResponse (or an error joined on that).

func NewJsonResponseDecoder added in v0.7.0

func NewJsonResponseDecoder[T any]() ResponseDecoder[T]

NewJsonResponseDecoder creates a ResponseDecoder that tries to parse a json object from the response. It can ONLY parse json objects with an OBJECT as top level (i.e. it cannot parse a list directly).

func NewRawStringResponseDecoder added in v0.6.0

func NewRawStringResponseDecoder() ResponseDecoder[string]

NewRawStringResponseDecoder creates a ResponseDecoder that returns the response as a raw string without modification.

func NewSubstringResponseDecoder added in v0.8.0

func NewSubstringResponseDecoder[T any](decoder ResponseDecoder[T], substring func(string) (string, error)) ResponseDecoder[T]

Wrap an existing response decoder with one that takes only the part of interest of the response into account. The part of interest is determined by the substring function. If an error is detected when getting the substring, ErrInvalidResponse is raised.

func NewValidatingResponseDecoder added in v0.8.0

func NewValidatingResponseDecoder[T any](decoder ResponseDecoder[T], validate func(T) error) ResponseDecoder[T]

Creates a response decoder that wraps the provided one, but then performs an extra validation step on the parsed response. If an error is found during validation, the error is wrapped with ErrInvalidResponse and returned.

type RetryModelOpt added in v0.8.0

type RetryModelOpt interface {
	// contains filtered or unexported methods
}

type Role

type Role uint8

Role is an enum specifying a role for a message. It is not 1:1 with openai roles (i.e. there is a reasoning role here).

const (
	SystemRole Role = iota
	UserRole
	AssistantRole
	ReasoningRole
)

func (Role) String added in v0.6.0

func (r Role) String() string

type Usage

type Usage struct {
	InputTokens     int
	OutputTokens    int
	SuccessfulCalls int
	FailedCalls     int
}

Usage defines how many tokens were used when making calls to LLMs.

func (Usage) Add

func (u Usage) Add(u2 Usage) Usage

type UsageCounter added in v0.6.0

type UsageCounter struct {
	// contains filtered or unexported fields
}

Counts up the sum usage. Is completely concurrent-safe.

func NewUsageCounter added in v0.6.0

func NewUsageCounter() *UsageCounter

NewUsageCounter creates a new UsageCounter with zero initial usage. The counter is safe for concurrent use across multiple goroutines.

func (*UsageCounter) Add added in v0.6.0

func (u *UsageCounter) Add(usage Usage)

Add the given usage to the counter.

func (*UsageCounter) Get added in v0.6.0

func (u *UsageCounter) Get() Usage

Get the current usage in the counter.

type Verbosity added in v0.8.0

type Verbosity uint8
const (
	LowVerbosity Verbosity = iota
	MediumVerbosity
	HighVerbosity
)

type WithDelay added in v0.7.0

type WithDelay struct{ X time.Duration }

type WithHTTPHeader added in v0.7.0

type WithHTTPHeader struct {
	K string
	V string
}

type WithJsonSchema added in v0.8.0

type WithJsonSchema struct{ X map[string]any }

type WithMaxOutputTokens added in v0.8.0

type WithMaxOutputTokens struct{ X int }

type WithMessagePrefix added in v0.8.0

type WithMessagePrefix struct{ X string }

type WithPrediction added in v0.8.0

type WithPrediction struct{ X string }

type WithPresencePenalty added in v0.8.0

type WithPresencePenalty struct{ X float64 }

type WithReasoningAs added in v0.8.0

type WithReasoningAs struct {
	X                Role
	TransformContent func(string) string
}

type WithReasoningEffort added in v0.7.0

type WithReasoningEffort struct{ X ReasoningEffort }

type WithReasoningPrompt added in v0.7.0

type WithReasoningPrompt struct{ X string }

type WithSalt added in v0.8.2

type WithSalt struct{ X string }

type WithSystemAs added in v0.8.0

type WithSystemAs struct {
	X                Role
	TransformContent func(string) string
}

type WithTemperature added in v0.7.0

type WithTemperature struct{ X float64 }

type WithTimeout added in v0.8.2

type WithTimeout struct{ X time.Duration }

type WithTopP added in v0.8.0

type WithTopP struct{ X int }

type WithURL added in v0.7.0

type WithURL struct{ X string }

type WithVerbosity added in v0.8.0

type WithVerbosity struct{ X Verbosity }
