llm

package
v1.0.0
Published: Mar 22, 2026 License: MIT Imports: 10 Imported by: 0

README

LLM layer

Shared package for calling LLM backends (OpenAI, Ollama, Gemini). Used by Explore AI and the AI response module (-modules ai). Any new module that needs to call an LLM can use this package instead of duplicating API logic.

API

  • llm.Provider – openai, ollama, gemini
  • llm.Config – Provider, Endpoint, Model, MaxTokens, Timeout (optional overrides)
  • llm.Message – Role + Content (for chat)
  • llm.GetAPIKey(provider) – Resolves API key from env; returns ErrMissingAPIKey when required but not set
  • llm.Call(ctx, cfg, apiKey, messages) – Sends messages and returns the assistant’s reply text
  • llm.IsMissingKeyError(err) – True if the error is “API key not set”

Example: new module that uses the LLM

Copy-paste skeleton. Add your config fields to modules.Config and wire flags in internal/config as needed.

package modules

import (
	"context"
	"strings"

	"github.com/Proviesec/PSFuzz/internal/llm"
)

type MyLLMAnalyzer struct {
	Prompt    string
	Provider  string
	Endpoint  string
	Model     string
	MaxTokens int
}

func init() {
	Register("myllm", func(c *Config) Analyzer {
		return MyLLMAnalyzer{
			Prompt:    c.MyLLMPrompt,
			Provider:  c.MyLLMProvider,
			Endpoint:  c.MyLLMEndpoint,
			Model:     c.MyLLMModel,
			MaxTokens: c.MyLLMMaxTokens,
		}
	})
}

func (MyLLMAnalyzer) Name() string { return "myllm" }

func (a MyLLMAnalyzer) Analyze(ctx context.Context, in Input) (Output, error) {
	p := llm.NormalizeProviderFromString(a.Provider)
	apiKey, err := llm.GetAPIKey(p)
	if err != nil {
		return Output{Data: map[string]any{"skipped": err.Error()}}, nil
	}
	maxTokens := a.MaxTokens
	if maxTokens <= 0 {
		maxTokens = 200
	}
	cfg := llm.Config{
		Provider:  p,
		Endpoint:  a.Endpoint,
		Model:     a.Model,
		MaxTokens: maxTokens,
	}
	content, err := llm.Call(ctx, cfg, apiKey, []llm.Message{
		{Role: "user", Content: a.Prompt + "\n\n" + in.Body},
	})
	if err != nil {
		return Output{Data: map[string]any{"error": "api", "message": err.Error()}}, nil
	}
	return Output{Data: map[string]any{"result": strings.TrimSpace(content)}}, nil
}

Then in internal/modules/config.go add your fields (e.g. MyLLMPrompt, MyLLMProvider, …), and in internal/config add the corresponding CLI flags and file config so they are applied to cfg.ModuleConfig.

See internal/modules/ai.go for the full AI module implementation.

Documentation

Overview

Package llm provides a shared layer for calling LLM backends (OpenAI, Ollama, Gemini). Use it from Explore AI, the AI response module, or any new module that needs to call an LLM.

Example (new module or one-off call):

cfg := llm.Config{
    Provider:  llm.ProviderOpenAI,
    MaxTokens: 150,
}
key, err := llm.GetAPIKey(cfg.Provider)
if err != nil {
    return err // or report "skipped: " + err.Error()
}
content, err := llm.Call(ctx, cfg, key, []llm.Message{{Role: "user", Content: prompt}})

Index

Constants

const (
	DefaultMaxTokensOpenAI = 500
	DefaultMaxTokensOllama = 500
	DefaultMaxTokensGemini = 500
)

Default max tokens per provider when Config.MaxTokens <= 0.

const DefaultTimeout = 20 * time.Second

DefaultTimeout is the default timeout for API calls.

Variables

var ErrMissingAPIKey = errors.New("llm: API key required but not set")

ErrMissingAPIKey is returned by GetAPIKey when the provider requires an API key but none is set.

Functions

func Call

func Call(ctx context.Context, cfg Config, apiKey string, messages []Message) (string, error)

Call sends the messages to the configured provider and returns the assistant's reply content. apiKey must be set for OpenAI and Gemini; for Ollama it can be empty.

func GetAPIKey

func GetAPIKey(p Provider) (string, error)

GetAPIKey returns the API key for the given provider from the environment. For OpenAI: OPENAI_API_KEY; for Gemini: GEMINI_API_KEY or GOOGLE_API_KEY; for Ollama: OLLAMA_API_KEY (optional). Returns ErrMissingAPIKey when the provider requires a key and it is not set.

func IsMissingKeyError

func IsMissingKeyError(err error) bool

IsMissingKeyError returns true if err indicates the API key was not set.

Types

type Config

type Config struct {
	Provider  Provider
	Endpoint  string // optional; e.g. http://localhost:11434 for Ollama
	Model     string // optional; e.g. gpt-4o-mini, llama3.1, gemini-1.5-flash
	MaxTokens int    // max tokens to generate; if <= 0, provider default is used
	Timeout   time.Duration
}

Config configures an LLM call. Endpoint and Model can be empty to use built-in defaults.

type Message

type Message struct {
	Role    string // "user", "assistant", or "model" (Gemini)
	Content string
}

Message is a single chat message (user or assistant).

type Provider

type Provider string

Provider identifies the LLM backend.

const (
	ProviderOpenAI Provider = "openai"
	ProviderOllama Provider = "ollama"
	ProviderGemini Provider = "gemini"
)

func NormalizeProviderFromString

func NormalizeProviderFromString(s string) Provider

NormalizeProviderFromString normalizes a provider string (trim, lower; empty -> openai) and returns the Provider. Use this when reading provider from config (e.g. in modules) to avoid duplicating normalization logic.
