aio

package
v0.11.2
Published: Mar 18, 2026 License: MIT Imports: 10 Imported by: 2

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache struct {
	chatter.Chatter
	// contains filtered or unexported fields
}

Caching strategy for LLM I/O

func NewCache

func NewCache(cache KeyVal, chatter chatter.Chatter) *Cache

Creates a read-through caching layer for an LLM client.

Use github.com/akrylysov/pogreb to cache chatter I/O on the local file system:

llm := /* create LLM client (chatter.Chatter) */
db, err := pogreb.Open("llm.cache", nil)
if err != nil {
	// handle error
}
text := aio.NewCache(db, llm)

func (*Cache) HashKey

func (c *Cache) HashKey(prompt string) []byte

func (*Cache) Prompt

func (c *Cache) Prompt(ctx context.Context, prompt []chatter.Message, opts ...chatter.Opt) (*chatter.Reply, error)

type Embedder added in v0.10.0

type Embedder struct {
	chatter.Chatter
}

Embedder is a wrapper for LLMs that support embeddings. It provides a simple interface to obtain embedding vectors for text.

func NewEmbedder added in v0.10.0

func NewEmbedder(chatter chatter.Chatter) *Embedder

func (*Embedder) Embedding added in v0.10.0

func (api *Embedder) Embedding(ctx context.Context, text string) ([]float32, int, error)
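
A minimal usage sketch, assuming llm is an existing chatter.Chatter backed by an embeddings-capable model, and assuming the second return value reports tokens consumed:

api := aio.NewEmbedder(llm)
vec, used, err := api.Embedding(context.Background(), "hello world")
if err != nil {
	// handle error
}
// vec is the embedding vector; used is assumed to be token usage
fmt.Printf("%d dimensions, %d tokens\n", len(vec), used)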

type Eraser added in v0.10.0

type Eraser interface{ Delete([]byte) error }

type Getter

type Getter interface{ Get([]byte) ([]byte, error) }

Getter interface abstracts storage reads

type KeyVal

type KeyVal interface {
	Getter
	Putter
	Eraser
}

KeyVal interface combines Getter, Putter and Eraser into an abstract key-value store
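
Any storage exposing Get, Put and Delete satisfies KeyVal. A minimal in-memory sketch for tests (memKV is a hypothetical helper, not part of this package):

type memKV struct{ m map[string][]byte }

func (kv memKV) Get(k []byte) ([]byte, error) { return kv.m[string(k)], nil }
func (kv memKV) Put(k, v []byte) error        { kv.m[string(k)] = v; return nil }
func (kv memKV) Delete(k []byte) error        { delete(kv.m, string(k)); return nil }

cache := aio.NewCache(memKV{m: map[string][]byte{}}, llm)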

type Limiter

type Limiter struct {
	chatter.Chatter
	// contains filtered or unexported fields
}

Rate-limit strategy for LLM I/O

func NewLimiter

func NewLimiter(requestPerMin int, tokensPerMin int, chatter chatter.Chatter) *Limiter

Creates a rate-limit strategy for LLMs, defining a per-minute policy for requests and tokens.

func (*Limiter) Prompt

func (c *Limiter) Prompt(ctx context.Context, prompt []chatter.Message, opts ...chatter.Opt) (*chatter.Reply, error)
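
A usage sketch, assuming llm is an existing chatter.Chatter and prompt is a previously built []chatter.Message; the chosen limits are illustrative:

// allow at most 60 requests and 90000 tokens per minute
limited := aio.NewLimiter(60, 90000, llm)
reply, err := limited.Prompt(ctx, prompt)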

type Logger added in v0.5.1

type Logger struct {
	chatter.Chatter
	// contains filtered or unexported fields
}

Logger of LLM I/O

func NewJsonLogger added in v0.7.0

func NewJsonLogger(w io.Writer, chatter chatter.Chatter) *Logger

func NewTextLogger added in v0.7.0

func NewTextLogger(w io.Writer, chatter chatter.Chatter) *Logger

func (*Logger) Prompt added in v0.5.1

func (deb *Logger) Prompt(ctx context.Context, seq []chatter.Message, opt ...chatter.Opt) (*chatter.Reply, error)
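
A usage sketch, assuming llm is an existing chatter.Chatter; the logger presumably writes each prompt/reply exchange to the given io.Writer:

// log every exchange as human-readable text to stderr
logged := aio.NewTextLogger(os.Stderr, llm)
reply, err := logged.Prompt(ctx, prompt)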

type Putter

type Putter interface{ Put([]byte, []byte) error }

Putter interface abstracts storage writes

type Quota added in v0.7.0

type Quota struct {
	chatter.Chatter
	// contains filtered or unexported fields
}

Quota strategy for LLM I/O

func NewQuota added in v0.7.0

func NewQuota(maxEpoch int, maxUsage chatter.Usage, chatter chatter.Chatter) *Quota

func (*Quota) Prompt added in v0.7.0

func (q *Quota) Prompt(ctx context.Context, prompt []chatter.Message, opts ...chatter.Opt) (*chatter.Reply, error)

func (*Quota) ResetQuota added in v0.7.0

func (q *Quota) ResetQuota()
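
A usage sketch, assuming llm is an existing chatter.Chatter; the maxEpoch value is illustrative and the exact quota semantics follow the constructor above:

var budget chatter.Usage // configure the allowed usage budget elsewhere
quoted := aio.NewQuota(100, budget, llm)
reply, err := quoted.Prompt(ctx, prompt)
// ...
quoted.ResetQuota() // assumed to start a fresh quota window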

type Route

type Route string

Chatter option that dynamically routes a prompt to a chosen model

func (Route) ChatterOpt

func (Route) ChatterOpt()

type Router

type Router struct {
	// contains filtered or unexported fields
}

Dynamic routing strategy through a pool of LLMs. The pool consists of a default (fallback) route and multiple named models.

func NewRouter

func NewRouter(llms map[string]chatter.Chatter, fallback chatter.Chatter) *Router

Creates an LLM pool instance.

func (*Router) Prompt

func (p *Router) Prompt(ctx context.Context, prompt []chatter.Message, opts ...chatter.Opt) (*chatter.Reply, error)

func (*Router) Usage added in v0.7.0

func (p *Router) Usage() chatter.Usage
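
A usage sketch, assuming fastLLM, smartLLM and fallbackLLM are existing chatter.Chatter clients; the Route option value is assumed to match a key in the pool map:

pool := aio.NewRouter(
	map[string]chatter.Chatter{"fast": fastLLM, "smart": smartLLM},
	fallbackLLM,
)
reply, err := pool.Prompt(ctx, prompt, aio.Route("fast"))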
