Documentation ¶
Overview ¶
Package faiss provides production-ready Go bindings for Facebook's FAISS (Facebook AI Similarity Search) library, enabling billion-scale similarity search and clustering of dense vectors.
faiss-go offers complete feature parity with Python FAISS, including 18+ index types, GPU acceleration, and advanced features such as product quantization, HNSW graphs, and on-disk indexes. Typical applications include semantic search, recommendation systems, and image similarity.
Quick Start ¶
Create an index and search for similar vectors:
package main
import (
"fmt"
"log"
"github.com/NerdMeNot/faiss-go"
)
func main() {
// Create index for 128-dimensional vectors
index, err := faiss.NewIndexFlatL2(128)
if err != nil {
log.Fatal(err)
}
defer index.Close()
// Add vectors (flattened: [v1_d1, v1_d2, ..., v2_d1, v2_d2, ...])
vectors := make([]float32, 1000 * 128) // 1000 vectors
// ... populate vectors with your data ...
err = index.Add(vectors)
if err != nil {
log.Fatal(err)
}
// Search for 10 nearest neighbors
query := make([]float32, 128) // Single query vector
// ... populate query ...
distances, indices, err := index.Search(query, 10)
if err != nil {
log.Fatal(err)
}
// Process results
for i := 0; i < 10; i++ {
fmt.Printf("Neighbor %d: index=%d, distance=%.4f\n",
i+1, indices[i], distances[i])
}
}
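If your vectors start out as [][]float32, the flattened row-major layout expected above can be produced with a small helper (illustrative only, not part of the package):

```go
package main

import "fmt"

// flatten converts a slice of d-dimensional vectors into the
// row-major []float32 layout that Add and Search expect.
func flatten(vectors [][]float32) []float32 {
	if len(vectors) == 0 {
		return nil
	}
	d := len(vectors[0])
	out := make([]float32, 0, len(vectors)*d)
	for _, v := range vectors {
		out = append(out, v...)
	}
	return out
}

func main() {
	vecs := [][]float32{{1, 2}, {3, 4}}
	fmt.Println(flatten(vecs)) // [1 2 3 4]
}
```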
Index Selection Guide ¶
Choose the right index for your use case:
- IndexFlatL2 / IndexFlatIP: Exact search, 100% recall, best for <100K vectors
- IndexIVFFlat: Fast approximate search, 10-100x speedup, 95%+ recall
- IndexHNSW: Best recall/speed tradeoff, excellent for production
- IndexPQ: 8-32x compression, great for memory-constrained scenarios
- IndexIVFPQ: Combines speed and compression, best overall balance
- IndexOnDisk: For billion-scale datasets that don't fit in RAM
- GPU indexes: 10-100x faster search with CUDA acceleration
Build Modes ¶
Two flexible build modes to fit your workflow:
Pre-built Libraries (fast development):
go build -tags=faiss_use_lib # <30 second builds
Compile from Source (production optimization):
go build # ~5-10 min first time, cached after
Both modes produce identical functionality. Source build requires:
- C++17 compiler (GCC 7+, Clang 5+, MSVC 2019+)
- BLAS library (OpenBLAS, MKL, or Accelerate on macOS)
Production Features ¶
This package provides comprehensive FAISS functionality:
- 18+ Index Types: Flat, IVF, HNSW, PQ, ScalarQuantizer, LSH, GPU, OnDisk
- Training API: Optimize indexes for your data distribution
- Serialization: Save and load indexes from disk
- Range Search: Find all vectors within a distance threshold
- Batch Operations: Efficient bulk add/search
- Vector Reconstruction: Retrieve vectors from compressed indexes
- Clustering: Built-in Kmeans implementation
- Preprocessing: PCA, OPQ, Random Rotation transforms
- Index Factory: Declarative index construction with strings
- Custom IDs: Map external IDs to internal indices
Use Cases ¶
Semantic Search - Document similarity:
embeddings := embedDocuments(docs) // 768-dim BERT/OpenAI
index, _ := faiss.NewIndexHNSWFlat(768, 32, faiss.MetricL2)
index.Train(embeddings)
index.Add(embeddings)
distances, indices, _ := index.Search(queryEmbedding, 10)
Image Similarity - Visual search:
features := extractImageFeatures(images) // 2048-dim ResNet
quantizer, _ := faiss.NewIndexFlatL2(2048)
index, _ := faiss.NewIndexIVFPQ(quantizer, 2048, 1000, 16, 8)
index.Train(features)
index.Add(features)
_, similar, _ := index.Search(queryFeatures, 20)
Recommendation Systems - Collaborative filtering:
itemEmbeddings := trainEmbeddings(interactions) // 128-dim
quantizer, _ := faiss.NewIndexFlatL2(128)
index, _ := faiss.NewIndexIVFFlat(quantizer, 128, 4096, faiss.MetricL2)
index.Train(itemEmbeddings)
index.Add(itemEmbeddings)
_, recommended, _ := index.Search(userEmbedding, 50)
Metrics ¶
FAISS supports two distance metrics:
- MetricL2: Euclidean (L2) distance - lower is more similar
- MetricInnerProduct: Inner product - higher is more similar
For cosine similarity, normalize vectors and use MetricInnerProduct:
normalized, _ := faiss.NormalizeL2Copy(vectors, dimension) // Divide by L2 norm
index, _ := faiss.NewIndexFlatIP(dimension)
index.Add(normalized)
Thread Safety ¶
Index operations are NOT thread-safe by default. For concurrent access:
Option 1 - Use synchronization:
var mu sync.Mutex

mu.Lock()
defer mu.Unlock()
index.Add(vectors)
Option 2 - Separate indexes per goroutine (read-heavy workloads):
indexes := make([]*faiss.IndexFlat, numWorkers)
for i := range indexes {
indexes[i], _ = faiss.NewIndexFlatL2(dimension)
indexes[i].Add(vectors) // Same data in each
}
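If you instead shard the data across several indexes (as IndexShards does), each worker searches its own shard and the per-shard top-k lists must be merged into one global top-k. A minimal pure-Go sketch for L2 distances (lower is better); this helper is illustrative only, not part of the package:

```go
package main

import (
	"fmt"
	"sort"
)

// mergeTopK merges per-shard k-NN results (distances ascending, as
// for the L2 metric) into a single global top-k list.
func mergeTopK(dists [][]float32, ids [][]int64, k int) ([]float32, []int64) {
	type hit struct {
		d  float32
		id int64
	}
	var all []hit
	for s := range dists {
		for i := range dists[s] {
			all = append(all, hit{dists[s][i], ids[s][i]})
		}
	}
	// Sort by distance and keep the k best overall.
	sort.Slice(all, func(i, j int) bool { return all[i].d < all[j].d })
	if len(all) > k {
		all = all[:k]
	}
	outD := make([]float32, len(all))
	outI := make([]int64, len(all))
	for i, h := range all {
		outD[i], outI[i] = h.d, h.id
	}
	return outD, outI
}

func main() {
	d, i := mergeTopK(
		[][]float32{{0.1, 0.9}, {0.3, 0.5}}, // two shards' distances
		[][]int64{{7, 2}, {4, 9}},           // matching IDs
		2,
	)
	fmt.Println(d, i) // [0.1 0.3] [7 4]
}
```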
Memory Management ¶
Always call Close() to free C++ resources:
index, err := faiss.NewIndexFlatL2(128)
if err != nil {
return err
}
defer index.Close() // Essential to prevent memory leaks
Finalizers are set as a safety net, but explicit Close() is recommended.
Platform Support ¶
Supports all major platforms:
- Linux: x86_64, ARM64
- macOS: Intel (x86_64), Apple Silicon (ARM64)
Performance ¶
Performance characteristics (1M 128-dim vectors, M1 Mac):
- IndexFlatL2: 12K QPS, 100% recall (exact search)
- IndexHNSWFlat: 85K QPS, 98.5% recall
- IndexIVFPQ: 120K QPS, 95.2% recall, 16x compression
- PQFastScan: 180K QPS, 95.8% recall, SIMD optimized
See https://github.com/NerdMeNot/faiss-go for comprehensive benchmarks.
Documentation ¶
Complete documentation available at:
- Getting Started: https://github.com/NerdMeNot/faiss-go/docs/getting-started/
- API Reference: https://pkg.go.dev/github.com/NerdMeNot/faiss-go
- Examples: https://github.com/NerdMeNot/faiss-go/docs/examples/
- GitHub: https://github.com/NerdMeNot/faiss-go
Version Information ¶
This package version: v0.1.0-alpha
Embedded FAISS version: 1.8.0
Report issues: https://github.com/NerdMeNot/faiss-go/issues
Index ¶
- Constants
- Variables
- func AddBatch(index Index, vectors []float32) error
- func BatchInnerProduct(queries, database []float32, d int) ([]float32, error)
- func BatchL2Distance(queries, database []float32, d int) ([]float32, error)
- func BitstringHammingDistance(a, b []uint8) int
- func ComputeRecall(groundTruth, results []int64, nq, kGt, kResults int) float64
- func CosineSimilarity(a, b []float32) (float32, error)
- func DisableMetrics()
- func EnableMetrics()
- func FullVersion() string
- func Fvec2Bvec(fvec []float32) []uint8
- func GetIndexDescription(index Index) string
- func GetIndexSize(index Index) int64
- func InnerProduct(a, b []float32) (float32, error)
- func IsIndexTrained(index Index) bool
- func KMax(vals []float32, k int) ([]float32, []int64)
- func KMin(vals []float32, k int) ([]float32, []int64)
- func KNN(vectors, queries []float32, d, k int, metric MetricType) ([]float32, []int64, error)
- func L2Distance(a, b []float32) (float32, error)
- func MetricsEnabled() bool
- func NormalizeL2(vectors []float32, d int) error
- func NormalizeL2Copy(vectors []float32, d int) ([]float32, error)
- func PairwiseDistances(x, y []float32, d int, metric MetricType) ([]float32, error)
- func ParseIndexDescription(description string) map[string]interface{}
- func RandNormal(n int) []float32
- func RandSeed(seed int64)
- func RandUniform(n int) []float32
- func RangeKNN(vectors, queries []float32, d, k int, maxDistance float32, metric MetricType) ([]float32, []int64, error)
- func RecommendIndex(n int64, d int, metric MetricType, requirements map[string]interface{}) string
- func ResetMetrics()
- func SearchBatch(index Index, queries []float32, k int) ([]float32, []int64, error)
- func ValidateIndexDescription(description string) error
- func WriteIndexToFile(index Index, filename string) error
- type BatchConfig
- type BuildInfo
- type GenericIndex
- func (idx *GenericIndex) Add(vectors []float32) error
- func (idx *GenericIndex) Close() error
- func (idx *GenericIndex) D() int
- func (idx *GenericIndex) Description() string
- func (idx *GenericIndex) GetNprobe() (int, error)
- func (idx *GenericIndex) IsTrained() bool
- func (idx *GenericIndex) MetricType() MetricType
- func (idx *GenericIndex) Ntotal() int64
- func (idx *GenericIndex) Reset() error
- func (idx *GenericIndex) Search(queries []float32, k int) (distances []float32, labels []int64, err error)
- func (idx *GenericIndex) SetEfSearch(efSearch int) error
- func (idx *GenericIndex) SetNprobe(nprobe int) error
- func (idx *GenericIndex) Train(vectors []float32) error
- func (idx *GenericIndex) WriteToFile(filename string) error
- type Index
- func IndexFactory(d int, description string, metric MetricType) (Index, error)
- func IndexFactoryFromFile(d int, description string, metric MetricType, vectorFile string) (Index, error)
- func NewIndexHNSW(d, M int, metric MetricType) (Index, error)
- func NewIndexHNSWFlat(d, M int, metric MetricType) (Index, error)
- func NewIndexIVFPQ(quantizer Index, d, nlist, M, nbits int) (Index, error)
- func NewIndexPQ(d, M, nbits int, metric MetricType) (Index, error)
- func ReadIndexFromFile(filename string) (Index, error)
- type IndexFlat
- func (idx *IndexFlat) Add(vectors []float32) error
- func (idx *IndexFlat) Close() error
- func (idx *IndexFlat) D() int
- func (idx *IndexFlat) IsTrained() bool
- func (idx *IndexFlat) MetricType() MetricType
- func (idx *IndexFlat) Ntotal() int64
- func (idx *IndexFlat) RangeSearch(queries []float32, radius float32) (*RangeSearchResult, error)
- func (idx *IndexFlat) Reconstruct(key int64) ([]float32, error)
- func (idx *IndexFlat) ReconstructBatch(keys []int64) ([]float32, error)
- func (idx *IndexFlat) ReconstructN(i0, n int64) ([]float32, error)
- func (idx *IndexFlat) Reset() error
- func (idx *IndexFlat) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
- func (idx *IndexFlat) SetEfSearch(efSearch int) error
- func (idx *IndexFlat) SetNprobe(nprobe int) error
- func (idx *IndexFlat) Train(vectors []float32) error
- type IndexIDMap
- func (idx *IndexIDMap) Add(vectors []float32) error
- func (idx *IndexIDMap) AddWithIDs(vectors []float32, ids []int64) error
- func (idx *IndexIDMap) Close() error
- func (idx *IndexIDMap) D() int
- func (idx *IndexIDMap) IsTrained() bool
- func (idx *IndexIDMap) MetricType() MetricType
- func (idx *IndexIDMap) Ntotal() int64
- func (idx *IndexIDMap) RemoveIDs(ids []int64) error
- func (idx *IndexIDMap) Reset() error
- func (idx *IndexIDMap) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
- func (idx *IndexIDMap) SetEfSearch(efSearch int) error
- func (idx *IndexIDMap) SetNprobe(nprobe int) error
- func (idx *IndexIDMap) Train(vectors []float32) error
- type IndexIVFFlat
- func (idx *IndexIVFFlat) Add(vectors []float32) error
- func (idx *IndexIVFFlat) Assign(vectors []float32) ([]int64, error)
- func (idx *IndexIVFFlat) Close() error
- func (idx *IndexIVFFlat) D() int
- func (idx *IndexIVFFlat) IsTrained() bool
- func (idx *IndexIVFFlat) MetricType() MetricType
- func (idx *IndexIVFFlat) Nlist() int
- func (idx *IndexIVFFlat) Nprobe() int
- func (idx *IndexIVFFlat) Ntotal() int64
- func (idx *IndexIVFFlat) RangeSearch(queries []float32, radius float32) (*RangeSearchResult, error)
- func (idx *IndexIVFFlat) Reconstruct(key int64) ([]float32, error)
- func (idx *IndexIVFFlat) ReconstructBatch(keys []int64) ([]float32, error)
- func (idx *IndexIVFFlat) ReconstructN(i0, n int64) ([]float32, error)
- func (idx *IndexIVFFlat) Reset() error
- func (idx *IndexIVFFlat) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
- func (idx *IndexIVFFlat) SetEfSearch(efSearch int) error
- func (idx *IndexIVFFlat) SetNprobe(nprobe int) error
- func (idx *IndexIVFFlat) Train(vectors []float32) error
- type IndexIVFScalarQuantizer
- func (idx *IndexIVFScalarQuantizer) Add(vectors []float32) error
- func (idx *IndexIVFScalarQuantizer) Close() error
- func (idx *IndexIVFScalarQuantizer) CompressionRatio() float64
- func (idx *IndexIVFScalarQuantizer) D() int
- func (idx *IndexIVFScalarQuantizer) IsTrained() bool
- func (idx *IndexIVFScalarQuantizer) MetricType() MetricType
- func (idx *IndexIVFScalarQuantizer) Nlist() int
- func (idx *IndexIVFScalarQuantizer) Nprobe() int
- func (idx *IndexIVFScalarQuantizer) Ntotal() int64
- func (idx *IndexIVFScalarQuantizer) QuantizerType() QuantizerType
- func (idx *IndexIVFScalarQuantizer) Reset() error
- func (idx *IndexIVFScalarQuantizer) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
- func (idx *IndexIVFScalarQuantizer) SetEfSearch(efSearch int) error
- func (idx *IndexIVFScalarQuantizer) SetNprobe(nprobe int) error
- func (idx *IndexIVFScalarQuantizer) Train(vectors []float32) error
- type IndexLSH
- func (idx *IndexLSH) Add(vectors []float32) error
- func (idx *IndexLSH) Close() error
- func (idx *IndexLSH) D() int
- func (idx *IndexLSH) IsTrained() bool
- func (idx *IndexLSH) MetricType() MetricType
- func (idx *IndexLSH) Nbits() int
- func (idx *IndexLSH) Ntotal() int64
- func (idx *IndexLSH) Reset() error
- func (idx *IndexLSH) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
- func (idx *IndexLSH) SetEfSearch(efSearch int) error
- func (idx *IndexLSH) SetNprobe(nprobe int) error
- func (idx *IndexLSH) Train(vectors []float32) error
- type IndexPreTransform
- func (idx *IndexPreTransform) Add(vectors []float32) error
- func (idx *IndexPreTransform) Close() error
- func (idx *IndexPreTransform) D() int
- func (idx *IndexPreTransform) IsTrained() bool
- func (idx *IndexPreTransform) MetricType() MetricType
- func (idx *IndexPreTransform) Ntotal() int64
- func (idx *IndexPreTransform) Reset() error
- func (idx *IndexPreTransform) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
- func (idx *IndexPreTransform) SetEfSearch(efSearch int) error
- func (idx *IndexPreTransform) SetNprobe(nprobe int) error
- func (idx *IndexPreTransform) Train(vectors []float32) error
- type IndexRefine
- func (idx *IndexRefine) Add(vectors []float32) error
- func (idx *IndexRefine) Close() error
- func (idx *IndexRefine) D() int
- func (idx *IndexRefine) IsTrained() bool
- func (idx *IndexRefine) MetricType() MetricType
- func (idx *IndexRefine) Ntotal() int64
- func (idx *IndexRefine) Reset() error
- func (idx *IndexRefine) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
- func (idx *IndexRefine) SetEfSearch(efSearch int) error
- func (idx *IndexRefine) SetK_factor(kFactor float32) error
- func (idx *IndexRefine) SetNprobe(nprobe int) error
- func (idx *IndexRefine) Train(vectors []float32) error
- type IndexScalarQuantizer
- func (idx *IndexScalarQuantizer) Add(vectors []float32) error
- func (idx *IndexScalarQuantizer) Close() error
- func (idx *IndexScalarQuantizer) CompressionRatio() float64
- func (idx *IndexScalarQuantizer) D() int
- func (idx *IndexScalarQuantizer) IsTrained() bool
- func (idx *IndexScalarQuantizer) MetricType() MetricType
- func (idx *IndexScalarQuantizer) Ntotal() int64
- func (idx *IndexScalarQuantizer) QuantizerType() QuantizerType
- func (idx *IndexScalarQuantizer) Reset() error
- func (idx *IndexScalarQuantizer) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
- func (idx *IndexScalarQuantizer) SetEfSearch(efSearch int) error
- func (idx *IndexScalarQuantizer) SetNprobe(nprobe int) error
- func (idx *IndexScalarQuantizer) Train(vectors []float32) error
- type IndexShards
- func (idx *IndexShards) Add(vectors []float32) error
- func (idx *IndexShards) AddShard(shard Index) error
- func (idx *IndexShards) Close() error
- func (idx *IndexShards) D() int
- func (idx *IndexShards) IsTrained() bool
- func (idx *IndexShards) MetricType() MetricType
- func (idx *IndexShards) Ntotal() int64
- func (idx *IndexShards) Reset() error
- func (idx *IndexShards) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
- func (idx *IndexShards) SetEfSearch(efSearch int) error
- func (idx *IndexShards) SetNprobe(nprobe int) error
- func (idx *IndexShards) Train(vectors []float32) error
- type IndexWithAssign
- type IndexWithIDs
- type IndexWithReconstruction
- type Kmeans
- type MetricType
- type Metrics
- type MetricsSnapshot
- type OPQMatrix
- func (opq *OPQMatrix) Apply(vectors []float32) ([]float32, error)
- func (opq *OPQMatrix) Close() error
- func (opq *OPQMatrix) DIn() int
- func (opq *OPQMatrix) DOut() int
- func (opq *OPQMatrix) GetM() int
- func (opq *OPQMatrix) IsTrained() bool
- func (opq *OPQMatrix) ReverseTransform(vectors []float32) ([]float32, error)
- func (opq *OPQMatrix) Train(vectors []float32) error
- type OperationStats
- type PCAMatrix
- func (pca *PCAMatrix) Apply(vectors []float32) ([]float32, error)
- func (pca *PCAMatrix) Close() error
- func (pca *PCAMatrix) DIn() int
- func (pca *PCAMatrix) DOut() int
- func (pca *PCAMatrix) IsTrained() bool
- func (pca *PCAMatrix) ReverseTransform(vectors []float32) ([]float32, error)
- func (pca *PCAMatrix) Train(vectors []float32) error
- type QuantizerType
- type RandomRotationMatrix
- func (rr *RandomRotationMatrix) Apply(vectors []float32) ([]float32, error)
- func (rr *RandomRotationMatrix) Close() error
- func (rr *RandomRotationMatrix) DIn() int
- func (rr *RandomRotationMatrix) DOut() int
- func (rr *RandomRotationMatrix) IsTrained() bool
- func (rr *RandomRotationMatrix) ReverseTransform(vectors []float32) ([]float32, error)
- func (rr *RandomRotationMatrix) Train(vectors []float32) error
- type RangeSearchResult
- type SearchResult
- type Timer
- type VectorStats
- type VectorTransform
Constants ¶
const (
	IndexTypeFlat         = "Flat"
	IndexTypeIVF          = "IVF"
	IndexTypeHNSW         = "HNSW"
	IndexTypePQ           = "PQ"
	IndexTypeSQ           = "SQ"
	IndexTypeLSH          = "LSH"
	IndexTypePreTransform = "PreTransform"
	IndexTypeUnknown      = "unknown"

	// Common index configurations
	IndexDescFlat   = "Flat"
	IndexDescHNSW32 = "HNSW32"
)
Index type constants for factory descriptions
const (
	// FAISSVersion is the upstream FAISS library version
	FAISSVersion = "1.13.2"

	// BindingMajor is incremented for new faiss-go features/interfaces
	BindingMajor = 0

	// BindingMinor is incremented for bug fixes and improvements
	BindingMinor = 1

	// Version is the full faiss-go version string (auto-generated)
	Version = FAISSVersion + "-" + "0.1" // Note: can't use fmt.Sprintf in a const
)
Version information
Versioning scheme: v{FAISS_VERSION}-{BINDING_MAJOR}.{BINDING_MINOR}
Example: v1.13.2-0.1
- FAISS_VERSION: the upstream FAISS library version this is built against
- BINDING_MAJOR: incremented for new faiss-go features/interfaces (not in upstream FAISS)
- BINDING_MINOR: incremented for bug fixes and minor improvements
When FAISS releases a new version, the binding version resets to 0.1.
const DefaultAddBatchSize = 100000
DefaultAddBatchSize is the recommended batch size for add operations
const DefaultSearchBatchSize = 10000
DefaultSearchBatchSize is the recommended batch size for search operations
Variables ¶
var (
	// ErrInvalidDimension is returned when dimension is invalid
	ErrInvalidDimension = errors.New("faiss: invalid dimension (must be > 0)")

	// ErrInvalidVectors is returned when vector data is invalid
	ErrInvalidVectors = errors.New("faiss: invalid vectors (length must be multiple of dimension)")

	// ErrIndexNotTrained is returned when an operation requires a trained index
	ErrIndexNotTrained = errors.New("faiss: index not trained")

	// ErrNullPointer is returned when a C pointer is null
	ErrNullPointer = errors.New("faiss: null pointer")
)
var (
	// ErrNotTrained is returned when an operation requires a trained index
	ErrNotTrained = errors.New("faiss: index not trained")

	// ErrIDNotFound is returned when an ID is not in the index
	ErrIDNotFound = errors.New("faiss: ID not found")

	// ErrInvalidK is returned when k is invalid for search
	ErrInvalidK = errors.New("faiss: k must be positive")

	// ErrInvalidRadius is returned when radius is invalid
	ErrInvalidRadius = errors.New("faiss: invalid radius")
)
Functions ¶
func AddBatch ¶
AddBatch is a helper that demonstrates optimal batched addition. Use this pattern when adding many vectors.
func BatchInnerProduct ¶
BatchInnerProduct computes inner products for batches of vectors. queries and database should be flat arrays (n*d and m*d).
Returns n×m matrix of inner products
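To make the flat-array convention concrete, here is a naive pure-Go reference computation of the same n×m result (illustrative only; the package function is the optimized path):

```go
package main

import "fmt"

// batchInnerProduct computes the n×m matrix of inner products between
// n query vectors and m database vectors, both flat row-major with
// dimension d. The result is row-major: out[i*m+j] = <query_i, db_j>.
func batchInnerProduct(queries, database []float32, d int) []float32 {
	n, m := len(queries)/d, len(database)/d
	out := make([]float32, n*m)
	for i := 0; i < n; i++ {
		for j := 0; j < m; j++ {
			var s float32
			for t := 0; t < d; t++ {
				s += queries[i*d+t] * database[j*d+t]
			}
			out[i*m+j] = s
		}
	}
	return out
}

func main() {
	q := []float32{1, 0, 0, 1} // 2 queries, d=2
	db := []float32{1, 2}      // 1 database vector
	fmt.Println(batchInnerProduct(q, db, 2)) // [1 2]
}
```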
func BatchL2Distance ¶
BatchL2Distance computes L2 distances for batches of vectors. queries and database should be flat arrays (n*d and m*d).
Returns n×m matrix of distances
func BitstringHammingDistance ¶
BitstringHammingDistance computes Hamming distance between two binary strings
Python equivalent: faiss.hamming
Example:
a := []uint8{0b10101010}
b := []uint8{0b11001100}
dist := faiss.BitstringHammingDistance(a, b) // 4
func ComputeRecall ¶
ComputeRecall computes recall between ground truth and search results
Recall = fraction of true neighbors found in the results
Parameters:
- groundTruth: ground truth neighbor indices (nq x k_gt)
- results: search result indices (nq x k_results)
- nq: number of queries
- kGt: k for ground truth
- kResults: k for results
Returns: recall value between 0 and 1
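The definition above can be reproduced with a short reference implementation (illustrative only; use the package's ComputeRecall in practice):

```go
package main

import "fmt"

// recall computes the fraction of ground-truth neighbors that appear
// in the search results, averaged over nq queries. Both slices are
// flat row-major: groundTruth is nq*kGt, results is nq*kResults.
func recall(groundTruth, results []int64, nq, kGt, kResults int) float64 {
	found := 0
	for q := 0; q < nq; q++ {
		hits := make(map[int64]bool, kResults)
		for _, id := range results[q*kResults : (q+1)*kResults] {
			hits[id] = true
		}
		for _, id := range groundTruth[q*kGt : (q+1)*kGt] {
			if hits[id] {
				found++
			}
		}
	}
	return float64(found) / float64(nq*kGt)
}

func main() {
	gt := []int64{1, 2, 3, 4}  // 2 queries, kGt=2
	res := []int64{1, 9, 4, 3} // kResults=2
	fmt.Println(recall(gt, res, 2, 2, 2)) // 0.75
}
```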
func CosineSimilarity ¶
CosineSimilarity computes cosine similarity between two vectors. Returns a value in [-1, 1], where 1 = identical direction and -1 = opposite.
Example:
a := []float32{1.0, 0.0}
b := []float32{0.0, 1.0}
sim, _ := faiss.CosineSimilarity(a, b) // 0.0 (perpendicular)
func FullVersion ¶
func FullVersion() string
FullVersion returns the complete version string with a 'v' prefix. Example: "v1.13.2-0.1"
func Fvec2Bvec ¶
Fvec2Bvec converts float vectors to binary vectors by thresholding at 0
Python equivalent: faiss.fvec2bvec
Example:
fvec := []float32{-1.0, 0.5, -0.3, 1.2} // 4 values
bvec := faiss.Fvec2Bvec(fvec) // [false, true, false, true] -> 0b1010 = 10
func GetIndexDescription ¶
GetIndexDescription returns a human-readable description of an index
Example:
desc := faiss.GetIndexDescription(index) // "IndexFlatL2(d=128, ntotal=10000)"
func GetIndexSize ¶
GetIndexSize returns the memory footprint estimate in bytes
func InnerProduct ¶
InnerProduct computes inner product between two vectors
Example:
a := []float32{1.0, 2.0, 3.0}
b := []float32{4.0, 5.0, 6.0}
ip, _ := faiss.InnerProduct(a, b) // 32.0
func IsIndexTrained ¶
IsIndexTrained checks if an index is trained
func KMax ¶
KMax finds the k largest values and their indices
Python equivalent: faiss.kmax
Example:
vals := []float32{3.0, 1.0, 4.0, 1.0, 5.0}
maxVals, maxIdx := faiss.KMax(vals, 3)
// maxVals = [5.0, 4.0, 3.0]
// maxIdx = [4, 2, 0]
func KMin ¶
KMin finds the k smallest values and their indices
Python equivalent: faiss.kmin
Example:
vals := []float32{3.0, 1.0, 4.0, 1.0, 5.0}
minVals, minIdx := faiss.KMin(vals, 3)
// minVals = [1.0, 1.0, 3.0]
// minIdx = [1, 3, 0]
func KNN ¶
KNN performs k-nearest neighbor search on a matrix
This is a standalone function that doesn't require creating an index. For repeated searches, it's better to create an index.
Parameters:
- vectors: database vectors (n vectors of dimension d)
- queries: query vectors (nq vectors of dimension d)
- d: dimension
- k: number of neighbors
- metric: distance metric
Returns: distances and indices for each query
func L2Distance ¶
L2Distance computes L2 (Euclidean) distance between two vectors
Example:
a := []float32{1.0, 2.0, 3.0}
b := []float32{4.0, 5.0, 6.0}
dist, _ := faiss.L2Distance(a, b) // sqrt(27) ≈ 5.196
func MetricsEnabled ¶
func MetricsEnabled() bool
MetricsEnabled returns true if metrics collection is enabled.
func NormalizeL2 ¶
NormalizeL2 normalizes vectors to unit L2 norm (in place)
This is commonly used before adding vectors to an IndexFlatIP for cosine similarity search.
Python equivalent: faiss.normalize_L2(x)
Example:
vectors := []float32{ /* your vectors */ }
faiss.NormalizeL2(vectors, dimension)
index.Add(vectors) // Now using cosine similarity
func NormalizeL2Copy ¶
NormalizeL2Copy normalizes vectors to unit L2 norm (creates a copy)
Returns a new slice with normalized vectors, leaving the input unchanged.
func PairwiseDistances ¶
func PairwiseDistances(x, y []float32, d int, metric MetricType) ([]float32, error)
PairwiseDistances computes pairwise distances between two sets of vectors
Python equivalent: faiss.pairwise_distances(x, y, metric)
Parameters:
- x: first set of vectors (n1 vectors of dimension d)
- y: second set of vectors (n2 vectors of dimension d)
- d: dimension
- metric: distance metric
Returns: distance matrix of size n1 x n2
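As a reference for the layout, a naive pure-Go version of the n1 x n2 computation for the L2 metric (illustrative only; it returns true Euclidean distances, matching L2Distance's documented behavior):

```go
package main

import (
	"fmt"
	"math"
)

// pairwiseL2 computes the n1×n2 matrix of Euclidean distances between
// two flat row-major vector sets of dimension d.
// Result is row-major: out[i*n2+j] = dist(x_i, y_j).
func pairwiseL2(x, y []float32, d int) []float32 {
	n1, n2 := len(x)/d, len(y)/d
	out := make([]float32, n1*n2)
	for i := 0; i < n1; i++ {
		for j := 0; j < n2; j++ {
			var s float64
			for t := 0; t < d; t++ {
				diff := float64(x[i*d+t] - y[j*d+t])
				s += diff * diff
			}
			out[i*n2+j] = float32(math.Sqrt(s))
		}
	}
	return out
}

func main() {
	x := []float32{0, 0}       // 1 vector, d=2
	y := []float32{3, 4, 0, 0} // 2 vectors
	fmt.Println(pairwiseL2(x, y, 2)) // [5 0]
}
```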
func ParseIndexDescription ¶
ParseIndexDescription parses an index factory description and returns its components. This is useful for understanding what a factory string will create.
Example:
info := ParseIndexDescription("IVF100,PQ8")
// info["type"] = "IVF"
// info["nlist"] = 100
// info["storage"] = "PQ8"
func RandNormal ¶
RandNormal generates n random floats from the standard normal distribution N(0,1)
Python equivalent: faiss.randn
Example:
vals := faiss.RandNormal(1000)
func RandSeed ¶
func RandSeed(seed int64)
RandSeed sets the random seed for reproducibility
Example:
faiss.RandSeed(42) // Reproducible results
func RandUniform ¶
RandUniform generates n random floats uniformly in [0, 1)
Python equivalent: faiss.rand
Example:
vals := faiss.RandUniform(1000)
func RangeKNN ¶
func RangeKNN(vectors, queries []float32, d, k int, maxDistance float32, metric MetricType) ([]float32, []int64, error)
RangeKNN finds k nearest neighbors within a maximum distance
This combines k-NN search with a distance threshold.
Parameters:
- vectors: database vectors
- queries: query vectors
- d: dimension
- k: maximum number of neighbors
- maxDistance: maximum distance threshold
- metric: distance metric
Returns: distances and indices (may have fewer than k results per query)
func RecommendIndex ¶
func RecommendIndex(n int64, d int, metric MetricType, requirements map[string]interface{}) string
RecommendIndex recommends an index configuration based on dataset characteristics.
This provides guidance similar to FAISS's auto-tuning, helping users choose appropriate index types without deep FAISS knowledge.
Parameters:
- n: expected number of vectors
- d: dimension of vectors
- metric: distance metric (MetricL2 or MetricInnerProduct)
- requirements: optional requirements map with keys:
- "recall": target recall (0.0-1.0, default 0.9)
- "speed": preference ("fast", "balanced", "accurate", default "balanced")
- "memory": preference ("low", "medium", "high", default "medium")
- "build_time": preference ("fast", "medium", "slow", default "medium")
Returns a recommended factory description string.
Example:
desc := RecommendIndex(1000000, 128, MetricL2, map[string]interface{}{
"recall": 0.95,
"speed": "fast",
})
// Returns: "HNSW32"
func SearchBatch ¶
SearchBatch is a helper that demonstrates optimal batched searching. Use this pattern when searching many queries.
func ValidateIndexDescription ¶
ValidateIndexDescription checks if a factory description string is valid. Returns nil if valid, error if invalid.
This doesn't create the index, just validates the syntax.
func WriteIndexToFile ¶
WriteIndexToFile saves the index to a file
Python equivalent: faiss.write_index(index, filename)
Example:
index, _ := faiss.NewIndexFlatL2(128)
index.Add(vectors)
faiss.WriteIndexToFile(index, "my_index.faiss")
Types ¶
type BatchConfig ¶
type BatchConfig struct {
// BatchSize is the number of vectors per batch
// Default: 10000 for search, 100000 for add
BatchSize int
// UseThreadLock controls whether to lock goroutine to OS thread
// Default: auto (true for batches > 100)
UseThreadLock *bool
}
BatchConfig provides configuration for batch operations
type BuildInfo ¶
type BuildInfo struct {
// Version is the faiss-go version
Version string
// FAISSVersion is the FAISS library version
FAISSVersion string
// BuildMode is either "source" or "prebuilt"
BuildMode string
// Compiler is the C++ compiler used
Compiler string
// Platform is the OS/architecture
Platform string
// BLASBackend is the BLAS library used
BLASBackend string
}
BuildInfo contains information about the build configuration
func GetBuildInfo ¶
func GetBuildInfo() BuildInfo
GetBuildInfo returns information about how faiss-go was built
type GenericIndex ¶
type GenericIndex struct {
// contains filtered or unexported fields
}
GenericIndex wraps any index created by the factory function. This provides a universal interface for all FAISS index types, including those that don't have specific constructors in the C API (like HNSW, PQ, IVFPQ, etc.)
Python equivalent: Using faiss.index_factory() returns a generic Index object
Example:
// Create an HNSW index (no direct constructor available)
index, _ := faiss.IndexFactory(128, "HNSW32", faiss.MetricL2)

// Create a PQ index
index, _ = faiss.IndexFactory(128, "PQ8", faiss.MetricL2)

// Create an IVF+PQ index
index, _ = faiss.IndexFactory(128, "IVF100,PQ8", faiss.MetricL2)
func (*GenericIndex) Add ¶
func (idx *GenericIndex) Add(vectors []float32) error
Add adds vectors to the index
For indexes that require training, Train() must be called first.
func (*GenericIndex) Close ¶
func (idx *GenericIndex) Close() error
Close releases the resources used by the index
func (*GenericIndex) Description ¶
func (idx *GenericIndex) Description() string
Description returns the factory description string used to create this index
func (*GenericIndex) GetNprobe ¶
func (idx *GenericIndex) GetNprobe() (int, error)
GetNprobe gets the number of lists to probe during search (IVF indexes only)
func (*GenericIndex) IsTrained ¶
func (idx *GenericIndex) IsTrained() bool
IsTrained returns whether the index has been trained
func (*GenericIndex) MetricType ¶
func (idx *GenericIndex) MetricType() MetricType
MetricType returns the metric type used by this index
func (*GenericIndex) Ntotal ¶
func (idx *GenericIndex) Ntotal() int64
Ntotal returns the total number of vectors in the index
func (*GenericIndex) Reset ¶
func (idx *GenericIndex) Reset() error
Reset removes all vectors from the index
func (*GenericIndex) Search ¶
func (idx *GenericIndex) Search(queries []float32, k int) (distances []float32, labels []int64, err error)
Search performs k-nearest neighbor search
Parameters:
- queries: flattened query vectors (length must be d * nq)
- k: number of nearest neighbors to find
Returns:
- distances: distances to nearest neighbors (nq * k)
- labels: IDs of nearest neighbors (nq * k)
- error: any error that occurred
func (*GenericIndex) SetEfSearch ¶
func (idx *GenericIndex) SetEfSearch(efSearch int) error
SetEfSearch sets the search-time effort parameter for HNSW indexes.
The efSearch parameter controls how many nodes are visited during search. Higher values give better recall but slower search.
Recommended values:
- efSearch=16: Fast search, lower recall (default)
- efSearch=32-64: Balanced (recommended)
- efSearch=128+: High recall, slower search
Note: This only works for HNSW indexes. Returns error for other index types.
Example:
index, _ := faiss.IndexFactory(128, "HNSW32", faiss.MetricL2)
index.SetEfSearch(64) // Increase search effort for better recall
func (*GenericIndex) SetNprobe ¶
func (idx *GenericIndex) SetNprobe(nprobe int) error
SetNprobe sets the number of lists to probe during search (IVF indexes only)
This parameter controls the speed/accuracy tradeoff for IVF indexes:
- nprobe=1: Fastest search, lowest recall
- nprobe=nlist: Exhaustive search, highest recall (equivalent to brute force)
- Recommended: nprobe = 10-20 for balanced performance
Only works for IVF-based indexes (IVFFlat, IVFPQ, IVFSQ, etc.) Returns an error if called on non-IVF indexes.
Example:
index, _ := faiss.IndexFactory(128, "IVF100,Flat", faiss.MetricL2)
index.SetNprobe(10) // Search 10 of 100 clusters
func (*GenericIndex) Train ¶
func (idx *GenericIndex) Train(vectors []float32) error
Train trains the index on a set of vectors
Not all indexes require training (e.g., Flat indexes don't need it), but IVF-based and quantization-based indexes do.
func (*GenericIndex) WriteToFile ¶
func (idx *GenericIndex) WriteToFile(filename string) error
WriteToFile writes the index to a file
type Index ¶
type Index interface {
// Basic properties
D() int // Dimension of vectors
Ntotal() int64 // Total number of indexed vectors
IsTrained() bool // Whether the index has been trained
MetricType() MetricType // Metric type (L2, IP, etc.)
// Training (required for some index types like IVF, PQ)
Train(vectors []float32) error
// Adding vectors
Add(vectors []float32) error
// Searching
Search(queries []float32, k int) (distances []float32, indices []int64, err error)
// Parameter setters (index-specific, may return error if not supported)
SetNprobe(nprobe int) error // IVF indexes: number of clusters to search
SetEfSearch(efSearch int) error // HNSW indexes: search-time effort parameter
// Management
Reset() error // Remove all vectors
Close() error // Free resources
}
Index is the base interface for all FAISS indexes. It matches the Python FAISS Index API.
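Every method on the interface takes vectors in the flattened layout (length must be a multiple of d), so it can help to validate input length before calling Train, Add, or Search. A hedged sketch; checkVectors is a hypothetical helper, not part of the package:

```go
package main

import "fmt"

// checkVectors validates a flattened vector slice against dimension d and
// returns the number of vectors it contains. Illustrative helper,
// not part of the package API.
func checkVectors(d int, vectors []float32) (int, error) {
	if d <= 0 {
		return 0, fmt.Errorf("invalid dimension %d", d)
	}
	if len(vectors)%d != 0 {
		return 0, fmt.Errorf("vector data length %d is not a multiple of d=%d", len(vectors), d)
	}
	return len(vectors) / d, nil
}

func main() {
	n, err := checkVectors(128, make([]float32, 3*128))
	fmt.Println(n, err) // 3 vectors, no error
}
```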
func IndexFactory ¶
func IndexFactory(d int, description string, metric MetricType) (Index, error)
IndexFactory creates an index from a description string using FAISS's index_factory. This is the primary entry point for creating any index type, including HNSW, PQ, IVFPQ, and more.
It calls the FAISS C API's faiss_index_factory() function directly, so it supports every index type that Python FAISS supports.
Python equivalent: faiss.index_factory(d, description, metric)
Supported descriptions (non-exhaustive list):
Basic indexes:
- "Flat" -> Exact search (IndexFlatL2 or IndexFlatIP)
- "LSH" -> Locality-sensitive hashing
- "PQn" -> Product quantization (n = number of bytes)
- "SQn" -> Scalar quantization (n = 4, 6, or 8 bits)
IVF (Inverted File) indexes:
- "IVFn,Flat" -> IVF with n clusters, flat storage
- "IVFn,PQ8" -> IVF with n clusters, PQ encoding (8 bytes)
- "IVFn,SQ8" -> IVF with n clusters, scalar quantization
HNSW (Hierarchical Navigable Small World) indexes:
- "HNSWn" -> HNSW with M=n (recommended: 16, 32, or 64)
- "HNSW32,Flat" -> HNSW graph with flat refinement
Pre-transform indexes:
- "PCAn,..." -> Apply PCA to reduce to n dimensions first
- "OPQn,..." -> Apply Optimized Product Quantization
- "RRn,..." -> Apply Random Rotation
Refinement:
- "...,Refine(Flat)" -> Two-stage search with refinement
Examples:
// Create HNSW index (fast, accurate approximate search)
index, _ := IndexFactory(128, "HNSW32", MetricL2)

// Create IVF+PQ index (compressed, scalable)
index, _ := IndexFactory(128, "IVF100,PQ8", MetricL2)

// Create PCA+IVF index (dimension reduction + clustering)
index, _ := IndexFactory(128, "PCA64,IVF100,Flat", MetricL2)

// Create exact search index
index, _ := IndexFactory(128, "Flat", MetricL2)
func IndexFactoryFromFile ¶
func IndexFactoryFromFile(d int, description string, metric MetricType, vectorFile string) (Index, error)
IndexFactoryFromFile creates an index from a factory description and immediately loads vectors from a file.
This is a convenience function that combines IndexFactory + Train + Add.
func NewIndexHNSW ¶
func NewIndexHNSW(d, M int, metric MetricType) (Index, error)
NewIndexHNSW creates a new HNSW index (alias for NewIndexHNSWFlat for compatibility).
This is equivalent to NewIndexHNSWFlat and provided for API compatibility with Python FAISS where IndexHNSWFlat is the main HNSW implementation.
func NewIndexHNSWFlat ¶
func NewIndexHNSWFlat(d, M int, metric MetricType) (Index, error)
NewIndexHNSWFlat creates a new HNSW index with flat (uncompressed) storage.
HNSW (Hierarchical Navigable Small World) is a graph-based approximate nearest neighbor search algorithm that provides excellent recall/speed tradeoffs.
Parameters:
- d: dimension of vectors
- M: number of connections per layer (typical values: 16, 32, 64). Higher M gives better recall but uses more memory and builds more slowly
- metric: distance metric (MetricL2 or MetricInnerProduct)
The index does NOT require training. You can add vectors immediately.
Recommended M values:
- M=16: Fast build, moderate memory (good for prototyping)
- M=32: Balanced (recommended for production)
- M=64: Best recall, higher memory
Python equivalent: faiss.IndexHNSWFlat(d, M)
Example:
index, err := faiss.NewIndexHNSWFlat(128, 32, faiss.MetricL2)
if err != nil {
log.Fatal(err)
}
defer index.Close()
// No training needed for HNSW
err = index.Add(vectors)
if err != nil {
log.Fatal(err)
}
distances, indices, err := index.Search(query, 10)
func NewIndexIVFPQ ¶
NewIndexIVFPQ creates a new IVF index with product quantization (PQ) compression.
IVFPQ combines inverted file indexing with product quantization for both speed and memory efficiency. This is one of the most popular index types for large-scale similarity search.
Parameters:
- quantizer: a trained coarse quantizer (typically IndexFlat created via IndexFactory)
- d: dimension of vectors
- nlist: number of inverted lists (clusters)
- M: number of subquantizers (must divide d evenly)
- nbits: number of bits per subquantizer (typically 8)
The index requires training before adding vectors.
Recommended parameters:
- nlist: sqrt(n) where n is the number of vectors
- M: d/4 or d/8 (must divide d evenly)
- nbits: 8 (higher = better accuracy but more memory)
Python equivalent: faiss.IndexIVFPQ(quantizer, d, nlist, M, nbits)
Example:
// Create via factory (recommended approach)
index, err := faiss.IndexFactory(128, "IVF100,PQ8", faiss.MetricL2)
if err != nil {
log.Fatal(err)
}
defer index.Close()
// Train the index
err = index.Train(trainingVectors)
if err != nil {
log.Fatal(err)
}
// Add vectors
err = index.Add(vectors)
if err != nil {
log.Fatal(err)
}
func NewIndexPQ ¶
func NewIndexPQ(d, M, nbits int, metric MetricType) (Index, error)
NewIndexPQ creates a standalone Product Quantization index (without IVF).
PQ encodes vectors into compact codes for memory-efficient storage. Use this when memory is constrained but you don't need IVF clustering.
Parameters:
- d: dimension of vectors
- M: number of subquantizers (must divide d evenly)
- nbits: number of bits per subquantizer (typically 8)
- metric: distance metric (MetricL2 or MetricInnerProduct)
The index requires training before adding vectors.
Python equivalent: faiss.IndexPQ(d, M, nbits)
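The memory savings from PQ follow directly from the parameters: each vector shrinks from d*4 bytes of float32 to M*nbits/8 bytes of code. A small sketch of that arithmetic; pqCompressionRatio is an illustrative helper, not part of the package API:

```go
package main

import "fmt"

// pqCompressionRatio estimates how much smaller PQ codes are than the raw
// float32 vectors they encode: d*4 bytes per vector versus M*nbits/8 bytes
// per code. Illustrative helper, not part of the package API.
func pqCompressionRatio(d, M, nbits int) float64 {
	rawBytes := float64(d) * 4        // float32 storage
	codeBytes := float64(M*nbits) / 8 // PQ code storage
	return rawBytes / codeBytes
}

func main() {
	// d=128, M=16 subquantizers, 8 bits each: 16-byte codes vs 512 raw bytes.
	fmt.Printf("%.0fx\n", pqCompressionRatio(128, 16, 8)) // 32x
}
```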
func ReadIndexFromFile ¶
ReadIndexFromFile loads an index from a file
Python equivalent: faiss.read_index(filename)
Example:
index, err := faiss.ReadIndexFromFile("my_index.faiss")
if err != nil {
log.Fatal(err)
}
defer index.Close()
type IndexFlat ¶
type IndexFlat struct {
// contains filtered or unexported fields
}
IndexFlat represents a flat (brute-force) index
func NewIndexFlat ¶
func NewIndexFlat(d int, metric MetricType) (*IndexFlat, error)
NewIndexFlat creates a new flat index with the specified metric. This is the recommended constructor for flat indexes as it follows the same pattern as other index constructors (NewIndexHNSW, NewIndexPQ, etc.).
Parameters:
- d: dimension of vectors
- metric: distance metric (MetricL2 or MetricInnerProduct)
Example:
index, err := faiss.NewIndexFlat(128, faiss.MetricL2)
if err != nil {
log.Fatal(err)
}
defer index.Close()
func NewIndexFlatIP ¶
NewIndexFlatIP creates a new flat index using inner product
func NewIndexFlatL2 ¶
NewIndexFlatL2 creates a new flat index using L2 distance
func (*IndexFlat) IsTrained ¶
IsTrained returns whether the index has been trained (always true for flat indexes)
func (*IndexFlat) MetricType ¶
func (idx *IndexFlat) MetricType() MetricType
MetricType returns the metric type used by the index
func (*IndexFlat) RangeSearch ¶
func (idx *IndexFlat) RangeSearch(queries []float32, radius float32) (*RangeSearchResult, error)
RangeSearch performs range search on indexes that support it
Returns all vectors within the specified radius for each query. The radius interpretation depends on the metric:
- L2: radius is the maximum squared L2 distance
- Inner Product: radius is the minimum inner product
Python equivalent: lims, D, I = index.range_search(x, radius)
Example:
result, err := index.RangeSearch(queries, 0.5)
for i := 0; i < result.Nq; i++ {
labels, distances := result.GetResults(i)
fmt.Printf("Query %d: %d results\n", i, len(labels))
}
func (*IndexFlat) Reconstruct ¶
Reconstruct reconstructs a single vector by its index
This is useful for debugging and verifying what's stored in the index. Not all index types support reconstruction.
Python equivalent: vector = index.reconstruct(key)
Example:
vector, err := index.Reconstruct(42)
fmt.Printf("Vector 42: %v\n", vector)
func (*IndexFlat) ReconstructBatch ¶
ReconstructBatch reconstructs multiple vectors by their indices
Python equivalent: vectors = index.reconstruct_batch(keys)
func (*IndexFlat) ReconstructN ¶
ReconstructN reconstructs multiple consecutive vectors
Parameters:
- i0: starting index
- n: number of vectors to reconstruct
Returns a flattened array of n vectors (length = n * d)
Python equivalent: vectors = index.reconstruct_n(i0, n)
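Since ReconstructN returns a single flattened slice of length n * d, splitting it back into individual vectors is straightforward. An illustrative sketch; unflatten is not part of the package API:

```go
package main

import "fmt"

// unflatten splits the flattened n*d output of ReconstructN (or
// ReconstructBatch) back into individual d-dimensional vectors.
// Illustrative helper, not part of the package API.
func unflatten(flat []float32, d int) [][]float32 {
	out := make([][]float32, 0, len(flat)/d)
	for i := 0; i+d <= len(flat); i += d {
		out = append(out, flat[i:i+d])
	}
	return out
}

func main() {
	vecs := unflatten([]float32{1, 2, 3, 4, 5, 6}, 3)
	fmt.Println(len(vecs), vecs[1]) // 2 vectors; second is [4 5 6]
}
```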
func (*IndexFlat) Search ¶
func (idx *IndexFlat) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
Search searches for the k nearest neighbors of the query vectors. For large batches (>100 queries), the call yields to the Go scheduler during the C++ computation so that other goroutines are not blocked.
func (*IndexFlat) SetEfSearch ¶
SetEfSearch is not supported for flat indexes (not an HNSW index)
type IndexIDMap ¶
type IndexIDMap struct {
// contains filtered or unexported fields
}
IndexIDMap wraps another index to support custom IDs. This allows you to use your own IDs instead of sequential indices.
Python equivalent: faiss.IndexIDMap, faiss.IndexIDMap2
func NewIndexIDMap ¶
func NewIndexIDMap(baseIndex Index) (*IndexIDMap, error)
NewIndexIDMap creates a new index with ID mapping
Parameters:
- baseIndex: the underlying index to wrap
Example:
flatIndex, _ := faiss.NewIndexFlatL2(128)
idmapIndex, _ := faiss.NewIndexIDMap(flatIndex)
idmapIndex.AddWithIDs(vectors, []int64{100, 200, 300})
func (*IndexIDMap) Add ¶
func (idx *IndexIDMap) Add(vectors []float32) error
Add adds vectors with auto-generated sequential IDs. For custom IDs, use AddWithIDs instead.
func (*IndexIDMap) AddWithIDs ¶
func (idx *IndexIDMap) AddWithIDs(vectors []float32, ids []int64) error
AddWithIDs adds vectors with custom IDs
Parameters:
- vectors: flattened vector data (length must be d * len(ids))
- ids: custom IDs for each vector
Example:
vectors := []float32{ /* 3 vectors of dimension d */ }
ids := []int64{1000, 2000, 3000}
index.AddWithIDs(vectors, ids)
func (*IndexIDMap) IsTrained ¶
func (idx *IndexIDMap) IsTrained() bool
IsTrained returns whether the base index is trained
func (*IndexIDMap) MetricType ¶
func (idx *IndexIDMap) MetricType() MetricType
MetricType returns the metric type
func (*IndexIDMap) Ntotal ¶
func (idx *IndexIDMap) Ntotal() int64
Ntotal returns the total number of vectors
func (*IndexIDMap) RemoveIDs ¶
func (idx *IndexIDMap) RemoveIDs(ids []int64) error
RemoveIDs removes vectors by their custom IDs
Parameters:
- ids: slice of IDs to remove
Returns the number of vectors actually removed.
NOT AVAILABLE: faiss_IndexIDMap_remove_ids is not present in the static library.
func (*IndexIDMap) Search ¶
func (idx *IndexIDMap) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
Search searches for the k nearest neighbors. It returns distances and the custom IDs.
func (*IndexIDMap) SetEfSearch ¶
func (idx *IndexIDMap) SetEfSearch(efSearch int) error
SetEfSearch delegates to the base index if it supports it
func (*IndexIDMap) SetNprobe ¶
func (idx *IndexIDMap) SetNprobe(nprobe int) error
SetNprobe delegates to the base index if it supports it
func (*IndexIDMap) Train ¶
func (idx *IndexIDMap) Train(vectors []float32) error
Train trains the base index
type IndexIVFFlat ¶
type IndexIVFFlat struct {
// contains filtered or unexported fields
}
IndexIVFFlat is an inverted file index with flat (uncompressed) vectors. This is one of the most commonly used indexes in production. It requires training before adding vectors.
Python equivalent: faiss.IndexIVFFlat
func NewIndexIVFFlat ¶
func NewIndexIVFFlat(quantizer Index, d, nlist int, metric MetricType) (*IndexIVFFlat, error)
NewIndexIVFFlat creates a new IVF index with flat storage
Parameters:
- quantizer: a trained index used for coarse quantization (typically IndexFlat) NOTE: This parameter is accepted for API compatibility but not used. The factory creates its own quantizer internally.
- d: dimension of vectors
- nlist: number of inverted lists (clusters)
- metric: distance metric (MetricL2 or MetricInnerProduct)
The index must be trained before adding vectors.
Implementation note: This function uses the factory pattern internally to avoid C pointer management bugs. The quantizer parameter is accepted for API compatibility with Python FAISS but is not used.
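The common rule of thumb for nlist (sqrt(n), also noted under NewIndexIVFPQ) can be computed up front from the expected dataset size. A small sketch; recommendedNlist is an illustrative helper, not part of the package API:

```go
package main

import (
	"fmt"
	"math"
)

// recommendedNlist applies the common sqrt(n) rule of thumb for choosing the
// number of inverted lists from the expected dataset size. Illustrative
// helper, not part of the package API.
func recommendedNlist(n int) int {
	if n < 1 {
		return 1
	}
	return int(math.Sqrt(float64(n)))
}

func main() {
	fmt.Println(recommendedNlist(1_000_000)) // 1000 lists for a million vectors
}
```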
func (*IndexIVFFlat) Add ¶
func (idx *IndexIVFFlat) Add(vectors []float32) error
Add adds vectors to the index. The index must be trained before calling this.
func (*IndexIVFFlat) Assign ¶
func (idx *IndexIVFFlat) Assign(vectors []float32) ([]int64, error)
Assign assigns vectors to their nearest cluster (inverted list)
This uses the quantizer to find the nearest centroid for each vector. The returned labels are cluster IDs in [0, nlist-1]. A label of -1 indicates that no cluster was found for that vector (vector too far from all centroids).
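Assign's labels can be used to inspect cluster balance before tuning nprobe. An illustrative sketch; clusterHistogram is not part of the package API:

```go
package main

import "fmt"

// clusterHistogram counts how many vectors Assign placed in each inverted
// list; labels of -1 (no cluster found) are tallied separately.
// Illustrative helper, not part of the package API.
func clusterHistogram(labels []int64, nlist int) (counts []int, unassigned int) {
	counts = make([]int, nlist)
	for _, l := range labels {
		if l < 0 || int(l) >= nlist {
			unassigned++
			continue
		}
		counts[l]++
	}
	return counts, unassigned
}

func main() {
	counts, missed := clusterHistogram([]int64{0, 1, 1, -1}, 2)
	fmt.Println(counts, missed) // per-cluster counts plus one unassigned vector
}
```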
func (*IndexIVFFlat) IsTrained ¶
func (idx *IndexIVFFlat) IsTrained() bool
IsTrained returns whether the index has been trained
func (*IndexIVFFlat) MetricType ¶
func (idx *IndexIVFFlat) MetricType() MetricType
MetricType returns the metric type
func (*IndexIVFFlat) Nlist ¶
func (idx *IndexIVFFlat) Nlist() int
Nlist returns the number of inverted lists
func (*IndexIVFFlat) Nprobe ¶
func (idx *IndexIVFFlat) Nprobe() int
Nprobe returns the number of lists to probe during search
func (*IndexIVFFlat) Ntotal ¶
func (idx *IndexIVFFlat) Ntotal() int64
Ntotal returns the total number of vectors
func (*IndexIVFFlat) RangeSearch ¶
func (idx *IndexIVFFlat) RangeSearch(queries []float32, radius float32) (*RangeSearchResult, error)
RangeSearch for IVF indexes
func (*IndexIVFFlat) Reconstruct ¶
func (idx *IndexIVFFlat) Reconstruct(key int64) ([]float32, error)
Reconstruction for IVF indexes
func (*IndexIVFFlat) ReconstructBatch ¶
func (idx *IndexIVFFlat) ReconstructBatch(keys []int64) ([]float32, error)
func (*IndexIVFFlat) ReconstructN ¶
func (idx *IndexIVFFlat) ReconstructN(i0, n int64) ([]float32, error)
func (*IndexIVFFlat) Reset ¶
func (idx *IndexIVFFlat) Reset() error
Reset removes all vectors from the index
func (*IndexIVFFlat) Search ¶
func (idx *IndexIVFFlat) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
Search searches for k nearest neighbors
func (*IndexIVFFlat) SetEfSearch ¶
func (idx *IndexIVFFlat) SetEfSearch(efSearch int) error
SetEfSearch is not supported for IVF indexes (not an HNSW index)
func (*IndexIVFFlat) SetNprobe ¶
func (idx *IndexIVFFlat) SetNprobe(nprobe int) error
SetNprobe sets the number of lists to probe during search. Higher values give better recall but slower search. The default is 1; typical values range from 1 to nlist.
func (*IndexIVFFlat) Train ¶
func (idx *IndexIVFFlat) Train(vectors []float32) error
Train trains the index on a representative set of vectors. Training is required before adding vectors to IVF indexes.
type IndexIVFScalarQuantizer ¶
type IndexIVFScalarQuantizer struct {
// contains filtered or unexported fields
}
IndexIVFScalarQuantizer combines IVF with scalar quantization
Python equivalent: faiss.IndexIVFScalarQuantizer
Example:
quantizer, _ := faiss.NewIndexFlatL2(128)
index, _ := faiss.NewIndexIVFScalarQuantizer(quantizer, 128, 100, faiss.QT_8bit, faiss.MetricL2)
index.Train(trainingVectors)
index.SetNprobe(10)
index.Add(vectors)
func NewIndexIVFScalarQuantizer ¶
func NewIndexIVFScalarQuantizer(quantizer Index, d, nlist int, qtype QuantizerType, metric MetricType) (*IndexIVFScalarQuantizer, error)
NewIndexIVFScalarQuantizer creates a new IVF scalar quantizer index
func (*IndexIVFScalarQuantizer) Add ¶
func (idx *IndexIVFScalarQuantizer) Add(vectors []float32) error
Add adds vectors to the index
func (*IndexIVFScalarQuantizer) Close ¶
func (idx *IndexIVFScalarQuantizer) Close() error
Close frees the index
func (*IndexIVFScalarQuantizer) CompressionRatio ¶
func (idx *IndexIVFScalarQuantizer) CompressionRatio() float64
CompressionRatio returns the compression ratio achieved by scalar quantization
func (*IndexIVFScalarQuantizer) D ¶
func (idx *IndexIVFScalarQuantizer) D() int
D returns the dimension of the index
func (*IndexIVFScalarQuantizer) IsTrained ¶
func (idx *IndexIVFScalarQuantizer) IsTrained() bool
IsTrained returns whether the index has been trained
func (*IndexIVFScalarQuantizer) MetricType ¶
func (idx *IndexIVFScalarQuantizer) MetricType() MetricType
MetricType returns the distance metric used
func (*IndexIVFScalarQuantizer) Nlist ¶
func (idx *IndexIVFScalarQuantizer) Nlist() int
Nlist returns the number of clusters
func (*IndexIVFScalarQuantizer) Nprobe ¶
func (idx *IndexIVFScalarQuantizer) Nprobe() int
Nprobe returns the number of clusters to probe during search
func (*IndexIVFScalarQuantizer) Ntotal ¶
func (idx *IndexIVFScalarQuantizer) Ntotal() int64
Ntotal returns the number of vectors in the index
func (*IndexIVFScalarQuantizer) QuantizerType ¶
func (idx *IndexIVFScalarQuantizer) QuantizerType() QuantizerType
QuantizerType returns the quantizer type
func (*IndexIVFScalarQuantizer) Reset ¶
func (idx *IndexIVFScalarQuantizer) Reset() error
Reset removes all vectors from the index
func (*IndexIVFScalarQuantizer) Search ¶
func (idx *IndexIVFScalarQuantizer) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
Search performs k-NN search
func (*IndexIVFScalarQuantizer) SetEfSearch ¶
func (idx *IndexIVFScalarQuantizer) SetEfSearch(efSearch int) error
SetEfSearch is not supported for IVF scalar quantizer indexes (not an HNSW index)
func (*IndexIVFScalarQuantizer) SetNprobe ¶
func (idx *IndexIVFScalarQuantizer) SetNprobe(nprobe int) error
SetNprobe sets the number of clusters to probe during search
func (*IndexIVFScalarQuantizer) Train ¶
func (idx *IndexIVFScalarQuantizer) Train(vectors []float32) error
Train trains the index on the given vectors
type IndexLSH ¶
type IndexLSH struct {
// contains filtered or unexported fields
}
IndexLSH performs locality-sensitive hashing on float vectors
Python equivalent: faiss.IndexLSH
Example:
// 128-dim vectors, 256 hash bits
index, _ := faiss.NewIndexLSH(128, 256)
index.Add(vectors)
distances, indices, _ := index.Search(queries, 10)
Note: LSH is an approximate method that's fast but less accurate than other methods. Best for very high-dimensional data or when speed is critical.
func NewIndexLSH ¶
NewIndexLSH creates a new LSH index. d is the vector dimension; nbits is the number of hash bits (higher = more accurate but slower).
func NewIndexLSHWithRotation ¶
NewIndexLSHWithRotation creates an LSH index with random rotation. Random rotation can improve hash quality for some datasets.
func (*IndexLSH) IsTrained ¶
IsTrained returns whether the index has been trained (always true for LSH)
func (*IndexLSH) MetricType ¶
func (idx *IndexLSH) MetricType() MetricType
MetricType returns the distance metric (LSH uses Hamming on hash codes)
func (*IndexLSH) Search ¶
func (idx *IndexLSH) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
Search performs k-NN search using LSH
func (*IndexLSH) SetEfSearch ¶
SetEfSearch is not supported for LSH indexes (not an HNSW index)
type IndexPreTransform ¶
type IndexPreTransform struct {
// contains filtered or unexported fields
}
IndexPreTransform applies a transformation before indexing
Python equivalent: faiss.IndexPreTransform
Example:
// PCA reduction then indexing
pca, _ := faiss.NewPCAMatrix(256, 64)
baseIndex, _ := faiss.NewIndexFlatL2(64)
index, _ := faiss.NewIndexPreTransform(pca, baseIndex)

// Training trains both PCA and index
index.Train(trainingVectors)

// Vectors are automatically reduced before indexing
index.Add(vectors)
func NewIndexPreTransform ¶
func NewIndexPreTransform(transform VectorTransform, index Index) (*IndexPreTransform, error)
NewIndexPreTransform creates a new index with preprocessing
func (*IndexPreTransform) Add ¶
func (idx *IndexPreTransform) Add(vectors []float32) error
Add adds vectors after applying transformation (FAISS handles transformation internally)
func (*IndexPreTransform) D ¶
func (idx *IndexPreTransform) D() int
D returns the input dimension (before transformation)
func (*IndexPreTransform) IsTrained ¶
func (idx *IndexPreTransform) IsTrained() bool
IsTrained returns whether both transform and index are trained
func (*IndexPreTransform) MetricType ¶
func (idx *IndexPreTransform) MetricType() MetricType
MetricType returns the distance metric used
func (*IndexPreTransform) Ntotal ¶
func (idx *IndexPreTransform) Ntotal() int64
Ntotal returns the number of vectors in the index
func (*IndexPreTransform) Reset ¶
func (idx *IndexPreTransform) Reset() error
Reset removes all vectors from the underlying index
func (*IndexPreTransform) Search ¶
func (idx *IndexPreTransform) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
Search searches after applying transformation to queries (FAISS handles transformation internally)
func (*IndexPreTransform) SetEfSearch ¶
func (idx *IndexPreTransform) SetEfSearch(efSearch int) error
SetEfSearch delegates to the underlying index if it supports it
func (*IndexPreTransform) SetNprobe ¶
func (idx *IndexPreTransform) SetNprobe(nprobe int) error
SetNprobe delegates to the underlying index if it supports it
func (*IndexPreTransform) Train ¶
func (idx *IndexPreTransform) Train(vectors []float32) error
Train trains the IndexPreTransform (which internally trains transform and index)
type IndexRefine ¶
type IndexRefine struct {
// contains filtered or unexported fields
}
IndexRefine performs a two-stage search: coarse with base_index, refine with refine_index
Python equivalent: faiss.IndexRefine
Example:
// Fast coarse search with IVF
baseIndex, _ := faiss.NewIndexIVFFlat(quantizer, 128, 100, faiss.MetricL2)

// Exact refinement with flat index
refineIndex, _ := faiss.NewIndexFlatL2(128)

// Combine
index, _ := faiss.NewIndexRefine(baseIndex, refineIndex)
index.SetK_factor(4) // search 4x candidates in base, refine to k
func NewIndexRefine ¶
func NewIndexRefine(baseIndex, refineIndex Index) (*IndexRefine, error)
NewIndexRefine creates a new refine index
func (*IndexRefine) Add ¶
func (idx *IndexRefine) Add(vectors []float32) error
Add adds vectors through IndexRefine (which delegates to both child indexes)
func (*IndexRefine) IsTrained ¶
func (idx *IndexRefine) IsTrained() bool
IsTrained returns whether the index has been trained
func (*IndexRefine) MetricType ¶
func (idx *IndexRefine) MetricType() MetricType
MetricType returns the distance metric used
func (*IndexRefine) Ntotal ¶
func (idx *IndexRefine) Ntotal() int64
Ntotal returns the number of vectors in the index
func (*IndexRefine) Reset ¶
func (idx *IndexRefine) Reset() error
Reset removes all vectors from both indexes
func (*IndexRefine) Search ¶
func (idx *IndexRefine) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
Search performs two-stage search
func (*IndexRefine) SetEfSearch ¶
func (idx *IndexRefine) SetEfSearch(efSearch int) error
SetEfSearch delegates to the base index if it supports it
func (*IndexRefine) SetK_factor ¶
func (idx *IndexRefine) SetK_factor(kFactor float32) error
SetK_factor sets the factor for base search candidates
func (*IndexRefine) SetNprobe ¶
func (idx *IndexRefine) SetNprobe(nprobe int) error
SetNprobe delegates to the base index if it supports it
func (*IndexRefine) Train ¶
func (idx *IndexRefine) Train(vectors []float32) error
Train trains the IndexRefine (which internally trains both base and refine indexes)
type IndexScalarQuantizer ¶
type IndexScalarQuantizer struct {
// contains filtered or unexported fields
}
IndexScalarQuantizer is a scalar quantization index. It quantizes each dimension independently using scalar quantization, trading accuracy for memory savings.
Python equivalent: faiss.IndexScalarQuantizer
Example:
index, _ := faiss.NewIndexScalarQuantizer(128, faiss.QT_8bit, faiss.MetricL2)
index.Train(trainingVectors)
index.Add(vectors)
func NewIndexScalarQuantizer ¶
func NewIndexScalarQuantizer(d int, qtype QuantizerType, metric MetricType) (*IndexScalarQuantizer, error)
NewIndexScalarQuantizer creates a new scalar quantizer index
func (*IndexScalarQuantizer) Add ¶
func (idx *IndexScalarQuantizer) Add(vectors []float32) error
Add adds vectors to the index
func (*IndexScalarQuantizer) Close ¶
func (idx *IndexScalarQuantizer) Close() error
Close frees the index
func (*IndexScalarQuantizer) CompressionRatio ¶
func (idx *IndexScalarQuantizer) CompressionRatio() float64
CompressionRatio returns the compression ratio achieved by scalar quantization
func (*IndexScalarQuantizer) D ¶
func (idx *IndexScalarQuantizer) D() int
D returns the dimension of the index
func (*IndexScalarQuantizer) IsTrained ¶
func (idx *IndexScalarQuantizer) IsTrained() bool
IsTrained returns whether the index has been trained
func (*IndexScalarQuantizer) MetricType ¶
func (idx *IndexScalarQuantizer) MetricType() MetricType
MetricType returns the distance metric used
func (*IndexScalarQuantizer) Ntotal ¶
func (idx *IndexScalarQuantizer) Ntotal() int64
Ntotal returns the number of vectors in the index
func (*IndexScalarQuantizer) QuantizerType ¶
func (idx *IndexScalarQuantizer) QuantizerType() QuantizerType
QuantizerType returns the quantizer type
func (*IndexScalarQuantizer) Reset ¶
func (idx *IndexScalarQuantizer) Reset() error
Reset removes all vectors from the index
func (*IndexScalarQuantizer) Search ¶
func (idx *IndexScalarQuantizer) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
Search performs k-NN search
func (*IndexScalarQuantizer) SetEfSearch ¶
func (idx *IndexScalarQuantizer) SetEfSearch(efSearch int) error
SetEfSearch is not supported for scalar quantizer indexes (not an HNSW index)
func (*IndexScalarQuantizer) SetNprobe ¶
func (idx *IndexScalarQuantizer) SetNprobe(nprobe int) error
SetNprobe is not supported for scalar quantizer indexes (not an IVF index)
func (*IndexScalarQuantizer) Train ¶
func (idx *IndexScalarQuantizer) Train(vectors []float32) error
Train trains the index on the given vectors
type IndexShards ¶
type IndexShards struct {
// contains filtered or unexported fields
}
IndexShards distributes vectors across multiple sub-indexes
Python equivalent: faiss.IndexShards
Example:
shards, _ := faiss.NewIndexShards(128, faiss.MetricL2)

// Add multiple sub-indexes
shard1, _ := faiss.NewIndexFlatL2(128)
shard2, _ := faiss.NewIndexFlatL2(128)
shards.AddShard(shard1)
shards.AddShard(shard2)

// Vectors distributed round-robin across shards
shards.Add(vectors)
func NewIndexShards ¶
func NewIndexShards(d int, metric MetricType) (*IndexShards, error)
NewIndexShards creates a new sharded index
func (*IndexShards) Add ¶
func (idx *IndexShards) Add(vectors []float32) error
Add distributes vectors across shards
func (*IndexShards) AddShard ¶
func (idx *IndexShards) AddShard(shard Index) error
AddShard adds a sub-index to the shards
func (*IndexShards) Close ¶
func (idx *IndexShards) Close() error
Close frees the index and all child shards
func (*IndexShards) IsTrained ¶
func (idx *IndexShards) IsTrained() bool
IsTrained returns whether all shards are trained
func (*IndexShards) MetricType ¶
func (idx *IndexShards) MetricType() MetricType
MetricType returns the distance metric
func (*IndexShards) Ntotal ¶
func (idx *IndexShards) Ntotal() int64
Ntotal returns the total number of vectors across all shards
func (*IndexShards) Reset ¶
func (idx *IndexShards) Reset() error
Reset removes all vectors from all shards
func (*IndexShards) Search ¶
func (idx *IndexShards) Search(queries []float32, k int) (distances []float32, indices []int64, err error)
Search searches across all shards
func (*IndexShards) SetEfSearch ¶
func (idx *IndexShards) SetEfSearch(efSearch int) error
SetEfSearch sets efSearch on all shards that support it
func (*IndexShards) SetNprobe ¶
func (idx *IndexShards) SetNprobe(nprobe int) error
SetNprobe sets nprobe on all shards that support it
func (*IndexShards) Train ¶
func (idx *IndexShards) Train(vectors []float32) error
Train trains all shards via the IndexShards composite index
type IndexWithAssign ¶
type IndexWithAssign interface {
Index
// Assign vectors to their nearest cluster
Assign(vectors []float32) ([]int64, error)
}
IndexWithAssign extends Index to support assignment (clustering)
type IndexWithIDs ¶
type IndexWithIDs interface {
Index
// Add vectors with custom IDs
AddWithIDs(vectors []float32, ids []int64) error
// Remove vectors by IDs
RemoveIDs(ids []int64) error
}
IndexWithIDs extends Index to support custom IDs
type IndexWithReconstruction ¶
type IndexWithReconstruction interface {
Index
// Reconstruct a single vector by its index
Reconstruct(id int64) ([]float32, error)
// Reconstruct multiple vectors
ReconstructN(start, n int64) ([]float32, error)
// Reconstruct a batch of vectors by their indices
ReconstructBatch(ids []int64) ([]float32, error)
}
IndexWithReconstruction extends Index to support vector reconstruction
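Callers typically hold the base Index and use a Go type assertion to discover optional capabilities such as reconstruction. A minimal sketch of that pattern, using a locally defined interface (mirroring the Reconstruct method above) and a stub index for illustration:

```go
package main

import "fmt"

// reconstructor mirrors the Reconstruct method of IndexWithReconstruction.
type reconstructor interface {
	Reconstruct(id int64) ([]float32, error)
}

// tryReconstruct returns the stored vector if the index supports
// reconstruction, and false otherwise. Illustrative helper.
func tryReconstruct(idx interface{}, id int64) ([]float32, bool) {
	r, ok := idx.(reconstructor)
	if !ok {
		return nil, false
	}
	v, err := r.Reconstruct(id)
	if err != nil {
		return nil, false
	}
	return v, true
}

// stubIndex pretends to store one vector per ID, for demonstration only.
type stubIndex struct{}

func (stubIndex) Reconstruct(id int64) ([]float32, error) {
	return []float32{float32(id), float32(id)}, nil
}

func main() {
	v, ok := tryReconstruct(stubIndex{}, 42)
	fmt.Println(ok, v)
}
```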
type Kmeans ¶
type Kmeans struct {
// contains filtered or unexported fields
}
Kmeans performs k-means clustering on vectors
Python equivalent: faiss.Kmeans
Example:
kmeans, _ := faiss.NewKmeans(128, 100) // d=128, k=100
kmeans.Train(trainingVectors)
centroids := kmeans.Centroids()
func NewKmeans ¶
NewKmeans creates a new k-means clustering object
Parameters:
- d: dimension of vectors
- k: number of clusters
func (*Kmeans) Assign ¶
Assign assigns vectors to their nearest cluster using L2 distance
Parameters:
- vectors: vectors to assign (n vectors of dimension d)
Returns: cluster assignments (n integers, each in range [0, k))
func (*Kmeans) Centroids ¶
Centroids returns the cluster centroids (k * d floats). Train must be called first.
type MetricType ¶
type MetricType int
MetricType defines the distance metric used by an index
const (
	// MetricInnerProduct uses inner product (higher is more similar)
	MetricInnerProduct MetricType = 0
	// MetricL2 uses L2 (Euclidean) distance (lower is more similar)
	MetricL2 MetricType = 1
)
func (MetricType) String ¶
func (m MetricType) String() string
String returns the string representation of the metric type
type Metrics ¶
type Metrics struct {
// contains filtered or unexported fields
}
Metrics provides performance tracking for FAISS operations. Enable with EnableMetrics() to start collecting data. Metrics collection has minimal overhead (~50ns per operation when enabled).
func (*Metrics) Snapshot ¶
func (m *Metrics) Snapshot() MetricsSnapshot
Snapshot returns a point-in-time copy of all metrics.
type MetricsSnapshot ¶
type MetricsSnapshot struct {
Train OperationStats
Add OperationStats
Search OperationStats
Reset int64 // Number of reset operations
// Aggregate stats
TotalOperations int64
TotalTime time.Duration
ResultsReturned int64 // Total search results returned
}
MetricsSnapshot contains a point-in-time snapshot of all metrics.
func GetMetrics ¶
func GetMetrics() MetricsSnapshot
GetMetrics returns a snapshot of the current metrics.
type OPQMatrix ¶
type OPQMatrix struct {
M int // number of subspaces
// contains filtered or unexported fields
}
OPQMatrix performs rotation for optimal product quantization
Python equivalent: faiss.OPQMatrix
Example:
// 128-dim vectors, 8 subspaces
opq, _ := faiss.NewOPQMatrix(128, 8)
opq.Train(trainingVectors)
rotated, _ := opq.Apply(vectors)
func NewOPQMatrix ¶
NewOPQMatrix creates a new OPQ transformation matrix
func (*OPQMatrix) ReverseTransform ¶
ReverseTransform reverses the OPQ rotation
type OperationStats ¶
type OperationStats struct {
Count int64 // Number of times operation was called
TotalTime time.Duration // Total time spent in operation
AvgTime time.Duration // Average time per operation
VectorCount int64 // Total vectors processed (for train/add) or queries (for search)
}
OperationStats contains statistics for a single operation type.
func (OperationStats) QPS ¶
func (s OperationStats) QPS() float64
QPS calculates queries per second from a search snapshot.
func (OperationStats) VectorsPerSecond ¶
func (s OperationStats) VectorsPerSecond() float64
VectorsPerSecond calculates throughput for add operations.
type PCAMatrix ¶
type PCAMatrix struct {
// contains filtered or unexported fields
}
PCAMatrix performs dimensionality reduction using PCA
Python equivalent: faiss.PCAMatrix
Example:
// Reduce from 256 to 64 dimensions
pca, _ := faiss.NewPCAMatrix(256, 64)
pca.Train(trainingVectors)

// Apply to vectors
reduced, _ := pca.Apply(vectors)

// Reconstruct (approximate) original
reconstructed, _ := pca.ReverseTransform(reduced)
func NewPCAMatrix ¶
NewPCAMatrix creates a new PCA transformation matrix
func NewPCAMatrixWithEigen ¶
NewPCAMatrixWithEigen creates a PCA matrix that also computes eigenvalues
func (*PCAMatrix) ReverseTransform ¶
ReverseTransform attempts to reverse the PCA transformation (approximate reconstruction)
type QuantizerType ¶
type QuantizerType int
QuantizerType specifies the type of scalar quantization
const (
	// QT_8bit uses 8 bits per dimension
	QT_8bit QuantizerType = 0
	// QT_4bit uses 4 bits per dimension
	QT_4bit QuantizerType = 1
	// QT_8bit_uniform uses uniform 8-bit quantization
	QT_8bit_uniform QuantizerType = 2
	// QT_4bit_uniform uses uniform 4-bit quantization
	QT_4bit_uniform QuantizerType = 3
	// QT_fp16 uses 16-bit floating point
	QT_fp16 QuantizerType = 4
	// QT_8bit_direct uses direct 8-bit mapping
	QT_8bit_direct QuantizerType = 5
	// QT_6bit uses 6 bits per dimension
	QT_6bit QuantizerType = 6
)
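The quantizer type determines bytes per vector: QT_8bit stores one byte per dimension versus four bytes for float32 (4x smaller), QT_4bit gives roughly 8x, QT_fp16 gives 2x. A quick arithmetic sketch (ignoring any per-vector header the index may add):

```go
package main

import "fmt"

// bytesPerVector estimates storage for one d-dimensional vector at
// the given number of bits per dimension, rounded up to whole bytes.
func bytesPerVector(d, bitsPerDim int) int {
	return (d*bitsPerDim + 7) / 8
}

func main() {
	d := 128
	fmt.Println(bytesPerVector(d, 32)) // float32 baseline: 512 bytes
	fmt.Println(bytesPerVector(d, 8))  // QT_8bit: 128 bytes (4x smaller)
	fmt.Println(bytesPerVector(d, 4))  // QT_4bit: 64 bytes (8x smaller)
	fmt.Println(bytesPerVector(d, 6))  // QT_6bit: 96 bytes
}
```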
type RandomRotationMatrix ¶
type RandomRotationMatrix struct {
// contains filtered or unexported fields
}
RandomRotationMatrix performs a random rotation of vectors
Python equivalent: faiss.RandomRotationMatrix
Example:
rr, _ := faiss.NewRandomRotationMatrix(128, 128)
rotated, _ := rr.Apply(vectors)
func NewRandomRotationMatrix ¶
func NewRandomRotationMatrix(dIn, dOut int) (*RandomRotationMatrix, error)
NewRandomRotationMatrix creates a new random rotation matrix
func (*RandomRotationMatrix) Apply ¶
func (rr *RandomRotationMatrix) Apply(vectors []float32) ([]float32, error)
Apply applies the random rotation to vectors
func (*RandomRotationMatrix) Close ¶
func (rr *RandomRotationMatrix) Close() error
Close frees the random rotation matrix
func (*RandomRotationMatrix) DIn ¶
func (rr *RandomRotationMatrix) DIn() int
DIn returns the input dimension
func (*RandomRotationMatrix) DOut ¶
func (rr *RandomRotationMatrix) DOut() int
DOut returns the output dimension
func (*RandomRotationMatrix) IsTrained ¶
func (rr *RandomRotationMatrix) IsTrained() bool
IsTrained returns whether the transform is ready (always true)
func (*RandomRotationMatrix) ReverseTransform ¶
func (rr *RandomRotationMatrix) ReverseTransform(vectors []float32) ([]float32, error)
ReverseTransform reverses the random rotation (if dIn == dOut)
func (*RandomRotationMatrix) Train ¶
func (rr *RandomRotationMatrix) Train(vectors []float32) error
Train initializes the random rotation matrix. Unlike PCA, RandomRotation doesn't learn from data, but the C library requires train() to be called to initialize the rotation matrix.
type RangeSearchResult ¶
type RangeSearchResult struct {
Nq int // Number of queries
Lims []int64 // Offsets: lims[i] to lims[i+1] are results for query i
Labels []int64 // Labels of results
Distances []float32 // Distances of results
}
RangeSearchResult contains the results of a range search. For each query, it returns all vectors within the specified radius.
func (*RangeSearchResult) GetResults ¶
func (rsr *RangeSearchResult) GetResults(queryIdx int) (labels []int64, distances []float32)
GetResults returns the results for a specific query
func (*RangeSearchResult) NumResults ¶
func (rsr *RangeSearchResult) NumResults(queryIdx int) int
NumResults returns the number of results for a specific query
func (*RangeSearchResult) TotalResults ¶
func (rsr *RangeSearchResult) TotalResults() int
TotalResults returns the total number of results across all queries
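The Lims offsets follow the standard CSR-style layout: results for query i occupy Labels[Lims[i]:Lims[i+1]] (and the matching span of Distances), which is presumably how GetResults slices them out. A pure-Go sketch of that indexing:

```go
package main

import "fmt"

// resultsFor slices out the labels for query i from a CSR-style
// (lims, labels) layout, as used by RangeSearchResult.
func resultsFor(lims, labels []int64, i int) []int64 {
	return labels[lims[i]:lims[i+1]]
}

func main() {
	// 3 queries with 2, 0, and 3 results respectively.
	lims := []int64{0, 2, 2, 5}
	labels := []int64{10, 11, 20, 21, 22}
	fmt.Println(resultsFor(lims, labels, 0)) // [10 11]
	fmt.Println(resultsFor(lims, labels, 1)) // []
	fmt.Println(resultsFor(lims, labels, 2)) // [20 21 22]
}
```

Note that unlike k-NN search, the number of results per query varies, so always use the offsets rather than assuming a fixed stride.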
type SearchResult ¶
type SearchResult struct {
Distances []float32
Labels []int64
Nq int // Number of queries
K int // Number of neighbors per query
}
SearchResult is a structured search result
func NewSearchResult ¶
func NewSearchResult(distances []float32, labels []int64, nq, k int) *SearchResult
NewSearchResult creates a SearchResult from raw arrays
func (*SearchResult) Get ¶
func (sr *SearchResult) Get(i, j int) (distance float32, label int64)
Get returns the distance and label for query i, neighbor j
func (*SearchResult) GetNeighbors ¶
func (sr *SearchResult) GetNeighbors(i int) (distances []float32, labels []int64)
GetNeighbors returns all neighbors for query i
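Distances and Labels are flat row-major arrays of length Nq*K, so query i, neighbor j lives at flat index i*K+j, which is presumably what Get computes. A sketch of the indexing:

```go
package main

import "fmt"

// at returns the flat index of (query i, neighbor j) in the
// row-major Nq x K result arrays used by SearchResult.
func at(i, j, k int) int {
	return i*k + j
}

func main() {
	k := 3 // neighbors per query
	distances := []float32{0.1, 0.2, 0.3, 0.4, 0.5, 0.6}
	labels := []int64{7, 8, 9, 4, 5, 6}
	// Second query (i=1), first neighbor (j=0):
	idx := at(1, 0, k)
	fmt.Println(labels[idx], distances[idx]) // 4 0.4
}
```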
type Timer ¶
type Timer struct {
// contains filtered or unexported fields
}
Timer is a helper for timing operations. Usage:
timer := StartTimer()
// ... do work ...
timer.RecordTrain(nVectors)
func (Timer) RecordSearch ¶
RecordSearch records the elapsed time as a search operation.
func (Timer) RecordTrain ¶
RecordTrain records the elapsed time as a training operation.
type VectorStats ¶
type VectorStats struct {
N int // Number of vectors
D int // Dimension
MinNorm float32 // Minimum L2 norm
MaxNorm float32 // Maximum L2 norm
MeanNorm float32 // Mean L2 norm
MinValue float32 // Minimum component value
MaxValue float32 // Maximum component value
MeanValue float32 // Mean component value
}
VectorStats computes statistics about a set of vectors
func ComputeVectorStats ¶
func ComputeVectorStats(vectors []float32, d int) (*VectorStats, error)
ComputeVectorStats computes statistics for a set of vectors
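The norm fields are per-vector L2 norms aggregated over the dataset. A sketch of computing min/max/mean norms for the row-major layout (mirroring what ComputeVectorStats presumably reports; useful, for example, to check whether vectors are already normalized before an inner-product search):

```go
package main

import (
	"fmt"
	"math"
)

// normStats computes min/max/mean L2 norm over the n = len(vecs)/d
// vectors, the same quantities reported by VectorStats.
func normStats(vecs []float32, d int) (min, max, mean float64) {
	n := len(vecs) / d
	min = math.Inf(1)
	var sum float64
	for i := 0; i < n; i++ {
		var s float64
		for _, v := range vecs[i*d : (i+1)*d] {
			s += float64(v) * float64(v)
		}
		norm := math.Sqrt(s)
		if norm < min {
			min = norm
		}
		if norm > max {
			max = norm
		}
		sum += norm
	}
	return min, max, sum / float64(n)
}

func main() {
	vecs := []float32{3, 4, 0, 1} // two 2-dim vectors, norms 5 and 1
	min, max, mean := normStats(vecs, 2)
	fmt.Println(min, max, mean) // 1 5 3
}
```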
type VectorTransform ¶
type VectorTransform interface {
DIn() int // input dimension
DOut() int // output dimension
IsTrained() bool
Train(vectors []float32) error
Apply(vectors []float32) ([]float32, error)
ReverseTransform(vectors []float32) ([]float32, error)
Close() error
}
VectorTransform is an interface for vector transformations (PCA, OPQ, etc.)
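Because every transform satisfies the same interface, transforms can be chained as a preprocessing pipeline (e.g. PCA followed by OPQ) before vectors reach an index. A sketch using a toy scaling transform as a stand-in for a real one (scaleTransform and applyChain are illustrative, not part of faiss-go):

```go
package main

import "fmt"

// VectorTransform mirrors the documented interface.
type VectorTransform interface {
	DIn() int
	DOut() int
	IsTrained() bool
	Train(vectors []float32) error
	Apply(vectors []float32) ([]float32, error)
	ReverseTransform(vectors []float32) ([]float32, error)
	Close() error
}

// scaleTransform multiplies every component by a constant; it exists
// only to demonstrate chaining through the interface.
type scaleTransform struct {
	d      int
	factor float32
}

func (t *scaleTransform) DIn() int              { return t.d }
func (t *scaleTransform) DOut() int             { return t.d }
func (t *scaleTransform) IsTrained() bool       { return true }
func (t *scaleTransform) Train([]float32) error { return nil }
func (t *scaleTransform) Close() error          { return nil }

func (t *scaleTransform) Apply(v []float32) ([]float32, error) {
	out := make([]float32, len(v))
	for i, x := range v {
		out[i] = x * t.factor
	}
	return out, nil
}

func (t *scaleTransform) ReverseTransform(v []float32) ([]float32, error) {
	out := make([]float32, len(v))
	for i, x := range v {
		out[i] = x / t.factor
	}
	return out, nil
}

// applyChain runs vectors through each transform in order.
func applyChain(vecs []float32, chain ...VectorTransform) ([]float32, error) {
	var err error
	for _, tr := range chain {
		if vecs, err = tr.Apply(vecs); err != nil {
			return nil, err
		}
	}
	return vecs, nil
}

func main() {
	a := &scaleTransform{d: 2, factor: 2}
	b := &scaleTransform{d: 2, factor: 3}
	out, _ := applyChain([]float32{1, 2}, a, b)
	fmt.Println(out) // [6 12]
}
```

In a real pipeline, check DOut of each stage against DIn of the next, and call Train on each untrained transform before Apply.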
Source Files
¶
Directories
¶
| Path | Synopsis |
|---|---|
| examples | |
| examples/01_basic_search | command |
| examples/02_ivf_clustering | command |
| examples/03_hnsw_graph | command |
| examples/04_pq_compression | command |
| examples/06_pretransform | command |
| test | |