huggingface

package
v1.1.0 Latest
Warning

This package is not in the latest version of its module.

Published: Feb 5, 2026 License: Apache-2.0 Imports: 6 Imported by: 0

Documentation

Overview

Copyright 2026 Teradata

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client implements the LLMProvider interface for the HuggingFace Inference API. HuggingFace uses an OpenAI-compatible API, so we wrap the OpenAI client.

func NewClient

func NewClient(config Config) *Client

NewClient creates a new HuggingFace client. HuggingFace uses an OpenAI-compatible API at https://router.huggingface.co/v1

func (*Client) Chat

func (c *Client) Chat(ctx context.Context, messages []llmtypes.Message, tools []shuttle.Tool) (*llmtypes.LLMResponse, error)

Chat sends a conversation to HuggingFace and returns the response. This delegates to the OpenAI client since HuggingFace uses the same API format.

func (*Client) Model

func (c *Client) Model() string

Model returns the model identifier.

func (*Client) Name

func (c *Client) Name() string

Name returns the provider name.

type Config

type Config struct {
	// Required: HuggingFace token from https://huggingface.co/settings/tokens
	// Note: This is a "token" not an "API key" in HuggingFace terminology
	Token string

	// Model to use (default: "meta-llama/Meta-Llama-3.1-70B-Instruct")
	// Available models (examples):
	// - meta-llama/Meta-Llama-3.1-70B-Instruct: Llama 3.1 70B (recommended)
	// - meta-llama/Meta-Llama-3.1-8B-Instruct: Llama 3.1 8B (faster)
	// - mistralai/Mixtral-8x7B-Instruct-v0.1: Mixtral 8x7B
	// - google/gemma-2-9b-it: Gemma 2 9B
	// - Qwen/Qwen2.5-72B-Instruct: Qwen 2.5 72B
	// - Many more available at https://huggingface.co/models
	Model string

	// Optional configuration
	MaxTokens         int           // Default: 4096
	Temperature       float64       // Default: 1.0
	Timeout           time.Duration // Default: 60s
	RateLimiterConfig llm.RateLimiterConfig
}

Config holds configuration for the HuggingFace client.
