Package embeddings v0.3.1
Published: Feb 8, 2026 License: MIT Imports: 10 Imported by: 0

README

Embeddings Package

The embeddings package provides a unified interface for generating text embeddings from various providers. Embeddings are numerical representations of text that capture semantic meaning, enabling similarity search and retrieval-augmented generation (RAG).


Overview

Embeddings transform text into high-dimensional vectors that capture semantic relationships. Similar texts produce similar vectors, enabling:

  • Semantic Search: Find documents by meaning, not just keywords
  • Similarity Matching: Compare documents for relevance
  • Clustering: Group related content
  • RAG Systems: Retrieve relevant context for LLM prompts

This package abstracts embedding generation across providers, allowing you to switch between OpenAI, HuggingFace, and self-hosted models without changing your application code.

Features

  • Multiple Providers: OpenAI, HuggingFace Inference API, HuggingFace TEI (self-hosted)
  • Batch Processing: Efficiently generate embeddings for multiple texts
  • Provider Agnostic: Switch providers using configuration
  • Cost Optimization: Use free HuggingFace models or pay-as-you-go OpenAI
  • Automatic Dimensions: Provider reports embedding dimensions
  • Extensible: Easy to add custom embedding providers via registry

Installation

go get github.com/aixgo-dev/aixgo/pkg/embeddings

No additional dependencies required for the base package. Each provider handles its own HTTP client.

Quick Start

Using HuggingFace (Free)
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aixgo-dev/aixgo/pkg/embeddings"
)

func main() {
    // Configure HuggingFace embeddings (no API key needed for public models)
    config := embeddings.Config{
        Provider: "huggingface",
        HuggingFace: &embeddings.HuggingFaceConfig{
            Model: "sentence-transformers/all-MiniLM-L6-v2",
            WaitForModel: true,
            UseCache: true,
        },
    }

    // Create embedding service
    svc, err := embeddings.New(config)
    if err != nil {
        log.Fatal(err)
    }
    defer svc.Close()

    // Generate embedding
    ctx := context.Background()
    embedding, err := svc.Embed(ctx, "Aixgo is an AI agent framework for Go")
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("Generated embedding with %d dimensions\n", len(embedding))
    fmt.Printf("Model: %s\n", svc.ModelName())
    fmt.Printf("Dimensions: %d\n", svc.Dimensions())
}
Using OpenAI
config := embeddings.Config{
    Provider: "openai",
    OpenAI: &embeddings.OpenAIConfig{
        APIKey: "sk-...", // Or use env: os.Getenv("OPENAI_API_KEY")
        Model:  "text-embedding-3-small",
    },
}

svc, err := embeddings.New(config)
if err != nil {
    log.Fatal(err)
}
defer svc.Close()
Batch Processing
texts := []string{
    "First document",
    "Second document",
    "Third document",
}

embeddings, err := svc.EmbedBatch(ctx, texts)
if err != nil {
    log.Fatal(err)
}

fmt.Printf("Generated %d embeddings\n", len(embeddings))

Supported Providers

Comparison Table
Provider         Cost               Setup    Speed      Quality         Dimensions  Best For
HuggingFace API  Free               None     Medium     Good-Excellent  384-1024    Dev
HuggingFace TEI  Free (self-host)   Docker   Very Fast  Good-Excellent  384-1024    Prod
OpenAI           $0.02-0.13/1M tok  API Key  Fast       Excellent       1536-3072   Prod
HuggingFace Inference API

Pros:

  • Completely free for public models
  • No setup required
  • Access to 100+ models
  • Automatic model loading

Cons:

  • Rate limited without API key
  • Cold start delays
  • Network latency

Popular Models:

Model                                   Dimensions  Speed      Quality    Use Case
sentence-transformers/all-MiniLM-L6-v2  384         Very Fast  Good       Development, general purpose
BAAI/bge-small-en-v1.5                  384         Fast       Good       Efficient search
BAAI/bge-large-en-v1.5                  1024        Medium     Excellent  Production quality
thenlper/gte-large                      1024        Medium     Excellent  Multilingual
intfloat/e5-large-v2                    1024        Medium     Excellent  Retrieval tasks
HuggingFace TEI (Text Embeddings Inference)

Self-hosted embedding server optimized for performance.

Pros:

  • Very fast (GPU acceleration)
  • No rate limits
  • Complete control
  • Batch optimization

Cons:

  • Requires deployment
  • Hardware costs
  • Maintenance overhead

Setup:

# Run TEI server with Docker
docker run -d \
  --name tei \
  -p 8080:8080 \
  -v $PWD/data:/data \
  --pull always \
  ghcr.io/huggingface/text-embeddings-inference:latest \
  --model-id sentence-transformers/all-MiniLM-L6-v2
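Once the server is up, pointing the package at it is purely a configuration change. A sketch (the endpoint is whatever host/port you mapped in the docker run above):

```go
config := embeddings.Config{
    Provider: "huggingface_tei",
    HuggingFaceTEI: &embeddings.HuggingFaceTEIConfig{
        Endpoint:  "http://localhost:8080", // TEI server started above
        Normalize: true,                    // normalized vectors let cosine similarity reduce to a dot product
    },
}
```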
OpenAI

Pros:

  • State-of-the-art quality
  • Reliable API
  • Fast response times
  • No infrastructure

Cons:

  • Costs money
  • API key required
  • Network dependency

Pricing (as of 2025):

  • text-embedding-3-small (1536 dims): $0.02 per 1M tokens
  • text-embedding-3-large (3072 dims): $0.13 per 1M tokens

API Reference

EmbeddingService Interface
type EmbeddingService interface {
    // Embed generates embeddings for a single text
    Embed(ctx context.Context, text string) ([]float32, error)

    // EmbedBatch generates embeddings for multiple texts
    EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)

    // Dimensions returns the dimension size of the embeddings
    Dimensions() int

    // ModelName returns the name of the embedding model
    ModelName() string

    // Close closes any resources held by the service
    Close() error
}
Configuration Structures
OpenAI Config
type OpenAIConfig struct {
    APIKey     string // Required: Your OpenAI API key
    Model      string // Required: Model name (e.g., "text-embedding-3-small")
    BaseURL    string // Optional: Custom endpoint (default: https://api.openai.com/v1)
    Dimensions int    // Optional: Reduce dimensions (text-embedding-3 only)
}
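The Dimensions field applies only to the text-embedding-3 models, which support native dimension reduction. A configuration sketch requesting shorter vectors:

```go
config := embeddings.Config{
    Provider: "openai",
    OpenAI: &embeddings.OpenAIConfig{
        APIKey:     os.Getenv("OPENAI_API_KEY"),
        Model:      "text-embedding-3-large",
        Dimensions: 1024, // request 1024-dim vectors instead of the default 3072
    },
}
```

Smaller vectors cut storage and similarity-computation cost at some loss of quality, so this is worth benchmarking on your own retrieval task.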
HuggingFace Config
type HuggingFaceConfig struct {
    APIKey       string // Optional: For higher rate limits
    Model        string // Required: Model ID from HuggingFace Hub
    Endpoint     string // Optional: Custom endpoint (default: https://api-inference.huggingface.co)
    WaitForModel bool   // Optional: Wait if model is loading (default: true)
    UseCache     bool   // Optional: Use cached results (default: true)
}
HuggingFace TEI Config
type HuggingFaceTEIConfig struct {
    Endpoint  string // Required: TEI server URL (e.g., "http://localhost:8080")
    Model     string // Optional: Model name (informational only)
    Normalize bool   // Optional: Return normalized embeddings (default: true)
}

Configuration

Environment Variables
# OpenAI
export OPENAI_API_KEY=sk-...

# HuggingFace (optional, for higher rate limits)
export HUGGINGFACE_API_KEY=hf_...
YAML Configuration
embeddings:
  provider: huggingface
  huggingface:
    model: sentence-transformers/all-MiniLM-L6-v2
    wait_for_model: true
    use_cache: true
    api_key: ${HUGGINGFACE_API_KEY}  # Optional
Programmatic Configuration
config := embeddings.Config{
    Provider: "openai",
    OpenAI: &embeddings.OpenAIConfig{
        APIKey: os.Getenv("OPENAI_API_KEY"),
        Model:  "text-embedding-3-small",
    },
}

Best Practices

1. Choose the Right Provider
// Development/Testing: Use HuggingFace (free)
config.Provider = "huggingface"
config.HuggingFace = &embeddings.HuggingFaceConfig{
    Model: "sentence-transformers/all-MiniLM-L6-v2",
}

// Production (Cost-Sensitive): Use HuggingFace TEI (self-hosted)
config.Provider = "huggingface_tei"
config.HuggingFaceTEI = &embeddings.HuggingFaceTEIConfig{
    Endpoint: "http://tei-service:8080",
}

// Production (Quality-Focused): Use OpenAI
config.Provider = "openai"
config.OpenAI = &embeddings.OpenAIConfig{
    APIKey: os.Getenv("OPENAI_API_KEY"),
    Model:  "text-embedding-3-small",
}
2. Use Batch Processing
// Good: Batch processing (more efficient)
embeddings, err := svc.EmbedBatch(ctx, []string{
    "Document 1",
    "Document 2",
    "Document 3",
})

// Avoid: Individual calls in a loop
for _, text := range texts {
    emb, err := svc.Embed(ctx, text) // Inefficient
}
3. Handle Context and Timeouts
// Set appropriate timeout
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

embedding, err := svc.Embed(ctx, text)
if err != nil {
    if errors.Is(err, context.DeadlineExceeded) {
        log.Printf("Embedding generation timeout")
    }
    return err
}
4. Validate Input
func validateText(text string) error {
    if text == "" {
        return fmt.Errorf("text cannot be empty")
    }
    if len(text) > 8192 {
        return fmt.Errorf("text too long: %d chars (max 8192)", len(text))
    }
    return nil
}

if err := validateText(text); err != nil {
    return nil, err
}
5. Reuse Service Instances
// Good: Reuse service instance
var embeddingService embeddings.EmbeddingService

func init() {
    var err error
    embeddingService, err = embeddings.New(config)
    if err != nil {
        log.Fatal(err)
    }
}

func getEmbedding(text string) ([]float32, error) {
    return embeddingService.Embed(context.Background(), text)
}

// Avoid: Creating new service for each call
func getEmbedding(text string) ([]float32, error) {
    svc, _ := embeddings.New(config) // Inefficient
    defer svc.Close()
    return svc.Embed(context.Background(), text)
}
6. Error Handling and Retries
func embedWithRetry(svc embeddings.EmbeddingService, text string, maxRetries int) ([]float32, error) {
    var lastErr error
    for i := 0; i < maxRetries; i++ {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        embedding, err := svc.Embed(ctx, text)
        cancel() // release the timer now; defer in a loop would leak until the function returns

        if err == nil {
            return embedding, nil
        }

        lastErr = err
        if !isRetryable(err) {
            break
        }

        // Exponential backoff: 1s, 2s, 4s, ...
        time.Sleep(time.Duration(1<<uint(i)) * time.Second)
    }
    return nil, fmt.Errorf("failed after %d retries: %w", maxRetries, lastErr)
}

func isRetryable(err error) bool {
    // Retry on network errors, rate limits, timeouts
    return errors.Is(err, context.DeadlineExceeded) ||
           strings.Contains(err.Error(), "rate limit") ||
           strings.Contains(err.Error(), "timeout")
}

Troubleshooting

HuggingFace Rate Limit
Error: rate limit exceeded

Solutions:

  1. Get a free API key from HuggingFace:
config.HuggingFace.APIKey = "hf_..."
  2. Use batch processing to reduce requests
  3. Deploy your own TEI server
OpenAI Authentication Error
Error: OpenAI API error: invalid API key

Solution: Verify your API key:

# Test your API key
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
Model Loading Timeout
Error: model is currently loading

Solution: Enable wait for model:

config.HuggingFace.WaitForModel = true
Dimension Mismatch

If you're getting different dimensions than expected:

// Check actual dimensions
fmt.Printf("Expected: %d, Got: %d\n", expectedDims, svc.Dimensions())

// Common dimensions by model:
// - all-MiniLM-L6-v2: 384
// - bge-large-en-v1.5: 1024
// - text-embedding-3-small: 1536
// - text-embedding-3-large: 3072
TEI Connection Error
Error: failed to connect to TEI server

Solution: Verify TEI is running:

# Check if TEI is accessible
curl http://localhost:8080/health

# View TEI logs
docker logs tei

Examples

Example 1: Semantic Search
// Generate embeddings for documents
documents := []string{
    "Aixgo is a production-grade AI framework",
    "Go is a programming language created by Google",
    "Machine learning enables computers to learn from data",
}

docEmbeddings, err := svc.EmbedBatch(ctx, documents)
if err != nil {
    log.Fatal(err)
}

// Generate query embedding
query := "What is Aixgo?"
queryEmbedding, err := svc.Embed(ctx, query)
if err != nil {
    log.Fatal(err)
}

// Find most similar document (using cosine similarity)
bestIdx := 0
bestScore := float32(-1.0) // cosine similarity ranges over [-1, 1]
for i, docEmb := range docEmbeddings {
    score := cosineSimilarity(queryEmbedding, docEmb)
    if score > bestScore {
        bestScore = score
        bestIdx = i
    }
}

fmt.Printf("Best match (%.2f): %s\n", bestScore, documents[bestIdx])
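The cosineSimilarity helper used above is not part of the package; a minimal self-contained implementation:

```go
package main

import (
	"fmt"
	"math"
)

// cosineSimilarity returns the cosine of the angle between two equal-length
// vectors: 1 for identical direction, 0 for orthogonal, -1 for opposite.
func cosineSimilarity(a, b []float32) float32 {
	var dot, normA, normB float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		normA += float64(a[i]) * float64(a[i])
		normB += float64(b[i]) * float64(b[i])
	}
	if normA == 0 || normB == 0 {
		return 0 // define similarity against a zero vector as 0
	}
	return float32(dot / (math.Sqrt(normA) * math.Sqrt(normB)))
}

func main() {
	fmt.Printf("%.2f\n", cosineSimilarity([]float32{1, 0}, []float32{1, 0})) // identical direction
	fmt.Printf("%.2f\n", cosineSimilarity([]float32{1, 0}, []float32{0, 1})) // orthogonal
}
```

If the provider already returns normalized vectors (e.g. TEI with Normalize enabled), the denominator is 1 and a plain dot product suffices.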
Example 2: Caching Embeddings
type EmbeddingCache struct {
    cache map[string][]float32
    mu    sync.RWMutex
    svc   embeddings.EmbeddingService
}

func NewEmbeddingCache(svc embeddings.EmbeddingService) *EmbeddingCache {
    return &EmbeddingCache{
        cache: make(map[string][]float32), // initialize the map; writing to a nil map panics
        svc:   svc,
    }
}

func (ec *EmbeddingCache) GetEmbedding(ctx context.Context, text string) ([]float32, error) {
    // Check cache first
    ec.mu.RLock()
    if emb, ok := ec.cache[text]; ok {
        ec.mu.RUnlock()
        return emb, nil
    }
    ec.mu.RUnlock()

    // Generate embedding
    emb, err := ec.svc.Embed(ctx, text)
    if err != nil {
        return nil, err
    }

    // Cache result
    ec.mu.Lock()
    ec.cache[text] = emb
    ec.mu.Unlock()

    return emb, nil
}
Example 3: Custom Provider
package main

import (
    "context"

    "github.com/aixgo-dev/aixgo/pkg/embeddings"
)

func init() {
    // Register custom provider
    embeddings.Register("custom", func(config embeddings.Config) (embeddings.EmbeddingService, error) {
        return NewCustomEmbeddings(config)
    })
}

type CustomEmbeddings struct {
    model string
    dims  int
}

func (c *CustomEmbeddings) Embed(ctx context.Context, text string) ([]float32, error) {
    // Your implementation
    return make([]float32, c.dims), nil
}

func (c *CustomEmbeddings) EmbedBatch(ctx context.Context, texts []string) ([][]float32, error) {
    result := make([][]float32, len(texts))
    for i, text := range texts {
        emb, err := c.Embed(ctx, text)
        if err != nil {
            return nil, err
        }
        result[i] = emb
    }
    return result, nil
}

func (c *CustomEmbeddings) Dimensions() int      { return c.dims }
func (c *CustomEmbeddings) ModelName() string    { return c.model }
func (c *CustomEmbeddings) Close() error         { return nil }
Example 4: Provider Comparison
func compareProviders(text string) {
    providers := []embeddings.Config{
        {
            Provider: "huggingface",
            HuggingFace: &embeddings.HuggingFaceConfig{
                Model: "sentence-transformers/all-MiniLM-L6-v2",
            },
        },
        {
            Provider: "openai",
            OpenAI: &embeddings.OpenAIConfig{
                APIKey: os.Getenv("OPENAI_API_KEY"),
                Model:  "text-embedding-3-small",
            },
        },
    }

    for _, config := range providers {
        svc, err := embeddings.New(config)
        if err != nil {
            continue
        }

        start := time.Now()
        emb, err := svc.Embed(context.Background(), text)
        duration := time.Since(start)
        svc.Close() // close per iteration; defer in a loop would hold every service open

        if err != nil {
            fmt.Printf("%s: ERROR - %v\n", config.Provider, err)
            continue
        }

        fmt.Printf("%s: %dms, %d dimensions\n",
            config.Provider, duration.Milliseconds(), len(emb))
    }
}

Performance Considerations

Latency Comparison
Provider         Latency (avg)  Throughput
HuggingFace API  200-500ms      Low
HuggingFace TEI  10-50ms        Very High
OpenAI           100-300ms      High
Optimization Tips
  1. Use batch operations when processing multiple texts
  2. Cache embeddings for frequently used content
  3. Deploy TEI locally for production workloads
  4. Choose smaller models (384 dims) if quality allows
  5. Set appropriate timeouts to prevent hanging requests
Cost Analysis

Example: 1 million documents, 100 tokens each

Provider                         Cost         Setup  Ongoing
HuggingFace API                  $0           $0     $0
HuggingFace TEI                  ~$100/month  $500   $100/month (GPU VM)
OpenAI (text-embedding-3-small)  ~$2          $0     Pay per use
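The OpenAI figure follows directly from the pricing above: 1M documents at 100 tokens each is 100M tokens, and text-embedding-3-small costs $0.02 per 1M tokens. A sketch of the arithmetic (the helper name embeddingCost is illustrative, not part of the package):

```go
package main

import "fmt"

// embeddingCost returns the USD cost of embedding nDocs documents of
// tokensPerDoc tokens each, at pricePer1M dollars per million tokens.
func embeddingCost(nDocs, tokensPerDoc int, pricePer1M float64) float64 {
	totalTokens := float64(nDocs) * float64(tokensPerDoc)
	return totalTokens / 1_000_000 * pricePer1M
}

func main() {
	// 1M documents x 100 tokens = 100M tokens at $0.02/1M tokens
	fmt.Printf("$%.2f\n", embeddingCost(1_000_000, 100, 0.02)) // $2.00
}
```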

Model Selection Guide

For Development

Use HuggingFace API with sentence-transformers/all-MiniLM-L6-v2:

  • Free, no setup
  • 384 dimensions (fast)
  • Good quality for testing
For Production: Cost-Sensitive

Use HuggingFace TEI with BAAI/bge-large-en-v1.5:

  • Self-hosted (one-time setup)
  • 1024 dimensions (excellent quality)
  • No ongoing API costs
For Production: Quality-Focused

Use OpenAI with text-embedding-3-small:

  • Best-in-class quality
  • 1536 dimensions
  • Managed service
  • $0.02 per 1M tokens

Contributing

To add a new embedding provider:

  1. Implement the EmbeddingService interface
  2. Register your provider in init()
  3. Add configuration structs to the Config type
  4. Add tests and documentation
  5. Submit a pull request

See CONTRIBUTING.md for details.

Documentation

Constants

This section is empty.

Variables

This section is empty.

Functions

func IsRegistered

func IsRegistered(name string) bool

IsRegistered checks if a provider is registered.

func ListProviders

func ListProviders() []string

ListProviders returns a list of all registered embedding providers.

func Register

func Register(name string, factory ProviderFactory)

Register adds a new embedding provider to the registry.

Types

type Config

type Config struct {
	// Provider specifies which embedding service to use
	// Supported values: "openai", "huggingface", "huggingface_tei"
	Provider string `yaml:"provider" json:"provider"`

	// OpenAI-specific configuration
	OpenAI *OpenAIConfig `yaml:"openai,omitempty" json:"openai,omitempty"`

	// HuggingFace-specific configuration
	HuggingFace *HuggingFaceConfig `yaml:"huggingface,omitempty" json:"huggingface,omitempty"`

	// HuggingFaceTEI-specific configuration (Text Embeddings Inference)
	HuggingFaceTEI *HuggingFaceTEIConfig `yaml:"huggingface_tei,omitempty" json:"huggingface_tei,omitempty"`
}

Config holds configuration for embedding providers.

func (*Config) Validate

func (c *Config) Validate() error

Validate checks if the configuration is valid.

type EmbeddingService

type EmbeddingService interface {
	// Embed generates embeddings for a single text
	Embed(ctx context.Context, text string) ([]float32, error)

	// EmbedBatch generates embeddings for multiple texts
	EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)

	// Dimensions returns the dimension size of the embeddings
	Dimensions() int

	// ModelName returns the name of the embedding model
	ModelName() string

	// Close closes any resources held by the service
	Close() error
}

EmbeddingService is the main interface for generating text embeddings.

func New

func New(config Config) (EmbeddingService, error)

New creates a new EmbeddingService based on the provider specified in the config.

func NewHuggingFace

func NewHuggingFace(config Config) (EmbeddingService, error)

NewHuggingFace creates a new HuggingFaceEmbeddings instance.

func NewHuggingFaceTEI

func NewHuggingFaceTEI(config Config) (EmbeddingService, error)

NewHuggingFaceTEI creates a new HuggingFaceTEIEmbeddings instance.

func NewOpenAI

func NewOpenAI(config Config) (EmbeddingService, error)

NewOpenAI creates a new OpenAIEmbeddings instance.

type HuggingFaceConfig

type HuggingFaceConfig struct {
	// APIKey for authentication (optional for public models)
	APIKey string `yaml:"api_key,omitempty" json:"api_key,omitempty"`

	// Model specifies which HuggingFace model to use
	// Popular options:
	//   - "sentence-transformers/all-MiniLM-L6-v2" (384 dims, fast)
	//   - "BAAI/bge-small-en-v1.5" (384 dims)
	//   - "BAAI/bge-large-en-v1.5" (1024 dims)
	//   - "thenlper/gte-large" (1024 dims)
	Model string `yaml:"model" json:"model"`

	// Endpoint is the API endpoint (default: https://api-inference.huggingface.co)
	Endpoint string `yaml:"endpoint,omitempty" json:"endpoint,omitempty"`

	// WaitForModel waits if model is loading (default: true)
	WaitForModel bool `yaml:"wait_for_model" json:"wait_for_model"`

	// UseCache uses cached results (default: true)
	UseCache bool `yaml:"use_cache" json:"use_cache"`
}

HuggingFaceConfig contains HuggingFace Inference API settings.

func (*HuggingFaceConfig) Validate

func (hc *HuggingFaceConfig) Validate() error

Validate checks if HuggingFace configuration is valid.

type HuggingFaceEmbeddings

type HuggingFaceEmbeddings struct {
	// contains filtered or unexported fields
}

HuggingFaceEmbeddings implements EmbeddingService using HuggingFace Inference API.

func (*HuggingFaceEmbeddings) Close

func (h *HuggingFaceEmbeddings) Close() error

Close closes any resources held by the service.

func (*HuggingFaceEmbeddings) Dimensions

func (h *HuggingFaceEmbeddings) Dimensions() int

Dimensions returns the dimension size of the embeddings.

func (*HuggingFaceEmbeddings) Embed

func (h *HuggingFaceEmbeddings) Embed(ctx context.Context, text string) ([]float32, error)

Embed generates embeddings for a single text.

func (*HuggingFaceEmbeddings) EmbedBatch

func (h *HuggingFaceEmbeddings) EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)

EmbedBatch generates embeddings for multiple texts.

func (*HuggingFaceEmbeddings) ModelName

func (h *HuggingFaceEmbeddings) ModelName() string

ModelName returns the name of the embedding model.

type HuggingFaceTEIConfig

type HuggingFaceTEIConfig struct {
	// Endpoint is the TEI server URL (e.g., "http://localhost:8080")
	Endpoint string `yaml:"endpoint" json:"endpoint"`

	// Model name (informational, server determines actual model)
	Model string `yaml:"model,omitempty" json:"model,omitempty"`

	// Normalize returns normalized embeddings (default: true)
	Normalize bool `yaml:"normalize" json:"normalize"`
}

HuggingFaceTEIConfig contains HuggingFace Text Embeddings Inference settings. TEI is a self-hosted, high-performance embedding server.

func (*HuggingFaceTEIConfig) Validate

func (tc *HuggingFaceTEIConfig) Validate() error

Validate checks if HuggingFaceTEI configuration is valid.

type HuggingFaceTEIEmbeddings

type HuggingFaceTEIEmbeddings struct {
	// contains filtered or unexported fields
}

HuggingFaceTEIEmbeddings implements EmbeddingService using HuggingFace Text Embeddings Inference (TEI). TEI is a self-hosted, high-performance embedding server optimized for production use. See: https://github.com/huggingface/text-embeddings-inference

func (*HuggingFaceTEIEmbeddings) Close

func (t *HuggingFaceTEIEmbeddings) Close() error

Close closes any resources held by the service.

func (*HuggingFaceTEIEmbeddings) Dimensions

func (t *HuggingFaceTEIEmbeddings) Dimensions() int

Dimensions returns the dimension size of the embeddings.

func (*HuggingFaceTEIEmbeddings) Embed

func (t *HuggingFaceTEIEmbeddings) Embed(ctx context.Context, text string) ([]float32, error)

Embed generates embeddings for a single text.

func (*HuggingFaceTEIEmbeddings) EmbedBatch

func (t *HuggingFaceTEIEmbeddings) EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)

EmbedBatch generates embeddings for multiple texts.

func (*HuggingFaceTEIEmbeddings) ModelName

func (t *HuggingFaceTEIEmbeddings) ModelName() string

ModelName returns the name of the embedding model.

type OpenAIConfig

type OpenAIConfig struct {
	// APIKey for authentication
	APIKey string `yaml:"api_key" json:"api_key"`

	// Model specifies which OpenAI embedding model to use
	// Options: "text-embedding-3-small" (1536 dims), "text-embedding-3-large" (3072 dims)
	Model string `yaml:"model" json:"model"`

	// BaseURL is the API endpoint (default: https://api.openai.com/v1)
	BaseURL string `yaml:"base_url,omitempty" json:"base_url,omitempty"`

	// Dimensions allows reducing embedding dimensions (only for text-embedding-3 models)
	Dimensions int `yaml:"dimensions,omitempty" json:"dimensions,omitempty"`
}

OpenAIConfig contains OpenAI-specific embedding settings.

func (*OpenAIConfig) Validate

func (oc *OpenAIConfig) Validate() error

Validate checks if OpenAI configuration is valid.

type OpenAIEmbeddings

type OpenAIEmbeddings struct {
	// contains filtered or unexported fields
}

OpenAIEmbeddings implements EmbeddingService using OpenAI's API.

func (*OpenAIEmbeddings) Close

func (o *OpenAIEmbeddings) Close() error

Close closes any resources held by the service.

func (*OpenAIEmbeddings) Dimensions

func (o *OpenAIEmbeddings) Dimensions() int

Dimensions returns the dimension size of the embeddings.

func (*OpenAIEmbeddings) Embed

func (o *OpenAIEmbeddings) Embed(ctx context.Context, text string) ([]float32, error)

Embed generates embeddings for a single text.

func (*OpenAIEmbeddings) EmbedBatch

func (o *OpenAIEmbeddings) EmbedBatch(ctx context.Context, texts []string) ([][]float32, error)

EmbedBatch generates embeddings for multiple texts.

func (*OpenAIEmbeddings) ModelName

func (o *OpenAIEmbeddings) ModelName() string

ModelName returns the name of the embedding model.

type ProviderFactory

type ProviderFactory func(config Config) (EmbeddingService, error)

ProviderFactory is a function that creates an EmbeddingService from a Config.
