Gorilla 🦍

Gorilla is a high-performance, in-memory DataFrame library for Go. Inspired by libraries like Pandas and Polars, it is designed for fast and efficient data manipulation.

At its core, Gorilla is built on Apache Arrow. This allows it to provide excellent performance and memory efficiency by leveraging Arrow's columnar data format.

✨ Key Features

  • Built on Apache Arrow: Utilizes the Arrow columnar memory format for zero-copy data access and efficient computation.
  • Lazy Evaluation & Query Optimization: Operations aren't executed immediately. Gorilla builds an optimized query plan with predicate pushdown, filter fusion, and join optimization.
  • Automatic Parallelization: Intelligent parallel processing for datasets >1000 rows with adaptive worker pools and thread-safe operations.
  • Comprehensive I/O Operations: Native CSV read/write with automatic type inference, configurable delimiters, and robust error handling.
  • Advanced Join Operations: Inner, left, right, and full outer joins with multi-key support and automatic optimization strategies.
  • GroupBy & Aggregations: Efficient grouping with Sum, Count, Mean, Min, Max aggregations and parallel execution for large datasets.
  • Streaming & Large Dataset Processing: Handle datasets larger than memory with chunk-based processing and automatic disk spilling.
  • Debug & Profiling: Built-in execution plan visualization, performance profiling, and bottleneck identification with .Debug() mode.
  • Flexible Configuration: Multi-format config support (JSON/YAML/env) with performance tuning, memory management, and optimization controls.
  • Rich Data Types: Support for string, bool, int64, int32, int16, int8, uint64, uint32, uint16, uint8, float64, float32 with automatic type coercion.
  • Expressive API: Chainable operations for filtering, selecting, transforming, sorting, and complex data manipulation.
  • Core Data Structures:
    • Series: A 1-dimensional array of data, representing a single column.
    • DataFrame: A 2-dimensional table-like structure, composed of multiple Series.
  • CLI Tool: gorilla-cli for benchmarking, demos, and performance testing.

🚀 Getting Started

Installation

To add Gorilla to your project, use go get:

go get github.com/paveg/gorilla

Memory Management

Gorilla is built on Apache Arrow, which requires explicit memory management. We strongly recommend using the defer pattern for most use cases as it provides better readability and prevents memory leaks.

// Create resources and immediately defer their cleanup
mem := memory.NewGoAllocator()
df := gorilla.NewDataFrame(series1, series2)
defer df.Release() // ← Clear resource lifecycle

result, err := df.Lazy().Filter(...).Collect()
if err != nil {
    log.Fatal(err)
}
defer result.Release() // ← Always clean up results

📋 When to Use MemoryManager

For complex scenarios with many short-lived resources, you can use MemoryManager:

err := gorilla.WithMemoryManager(mem, func(manager *gorilla.MemoryManager) error {
    // Create multiple temporary resources
    for i := 0; i < 100; i++ {
        temp := createTempDataFrame(i)
        manager.Track(temp) // Bulk cleanup at end
    }
    return processData()
})
// All tracked resources automatically released

Quick Example

Here is a quick example to demonstrate the basic usage of Gorilla.

package main

import (
	"fmt"
	"log"

	"github.com/apache/arrow-go/v18/arrow/memory"
	"github.com/paveg/gorilla"
)

func main() {
	// Gorilla uses Apache Arrow for memory management.
	// Always start by creating an allocator.
	mem := memory.NewGoAllocator()

	// 1. Create some Series (columns)
	names := []string{"Alice", "Bob", "Charlie", "Diana", "Eve"}
	ages := []int64{25, 30, 35, 28, 32}
	salaries := []float64{50000.0, 60000.0, 75000.0, 55000.0, 65000.0}
	nameSeries := gorilla.NewSeries("name", names, mem)
	ageSeries := gorilla.NewSeries("age", ages, mem)
	salarySeries := gorilla.NewSeries("salary", salaries, mem)

	// Remember to release the memory when you're done.
	defer nameSeries.Release()
	defer ageSeries.Release()
	defer salarySeries.Release()

	// 2. Create a DataFrame from the Series
	df := gorilla.NewDataFrame(nameSeries, ageSeries, salarySeries)
	defer df.Release()

	fmt.Println("Original DataFrame:")
	fmt.Println(df)

	// 3. Use Lazy Evaluation for powerful transformations
	lazyDf := df.Lazy().
		// Filter for rows where age is greater than 30
		Filter(gorilla.Col("age").Gt(gorilla.Lit(int64(30)))).
		// Add a new column 'bonus'
		WithColumn("bonus", gorilla.Col("salary").Mul(gorilla.Lit(0.1))).
		// Select the columns we want in the final result
		Select("name", "age", "bonus")

	// 4. Execute the plan and collect the results
	result, err := lazyDf.Collect()
	if err != nil {
		log.Fatal(err)
	}
	defer result.Release()
	defer lazyDf.Release()

	fmt.Println("Result after lazy evaluation:")
	fmt.Println(result)
}

📊 Advanced Features

I/O Operations

Gorilla provides comprehensive CSV I/O capabilities with automatic type inference:

import "github.com/paveg/gorilla/internal/io"

// Reading CSV with automatic type inference
mem := memory.NewGoAllocator()
csvData := `name,age,salary,active
Alice,25,50000.5,true
Bob,30,60000.0,false`

reader := io.NewCSVReader(strings.NewReader(csvData), io.DefaultCSVOptions(), mem)
df, err := reader.Read()
defer df.Release()

// Writing DataFrame to CSV
var output strings.Builder
writer := io.NewCSVWriter(&output, io.DefaultCSVOptions())
err = writer.Write(df)
Join Operations

Perform SQL-like joins between DataFrames:

// Inner join on multiple keys; Join takes a *JoinOptions
result, err := left.Join(right, &gorilla.JoinOptions{
    Type:      gorilla.InnerJoin,
    LeftKeys:  []string{"id", "dept"},
    RightKeys: []string{"user_id", "department"},
})
defer result.Release()

GroupBy and Aggregations

Efficient grouping and aggregation operations:

// Group by department and calculate aggregations
result, err := df.Lazy().
    GroupBy("department").
    Agg(
        gorilla.Sum(gorilla.Col("salary")).As("total_salary"),
        gorilla.Count(gorilla.Col("*")).As("employee_count"),
        gorilla.Mean(gorilla.Col("age")).As("avg_age"),
    ).
    Collect()
defer result.Release()

Debug and Profiling

Monitor performance and analyze query execution:

// Enable debug mode for detailed execution analysis
debugDf := df.Debug()
result, err := debugDf.Lazy().
    Filter(gorilla.Col("salary").Gt(gorilla.Lit(50000.0))).
    Sort("name", true).
    Collect()
if err != nil {
    log.Fatal(err)
}
defer result.Release()

// Get execution plan and performance metrics
plan := debugDf.GetExecutionPlan()
fmt.Println("Execution plan:", plan)

Configuration

Customize performance and behavior with flexible configuration:

// Load configuration from file or environment
config, err := gorilla.LoadConfig("gorilla.yaml")
if err != nil {
    log.Fatal(err)
}
df := gorilla.NewDataFrame(series...).WithConfig(config.Operations)

// Or configure programmatically
opConfig := gorilla.OperationConfig{
    ParallelThreshold:       2000,
    WorkerPoolSize:          8,
    EnableQueryOptimization: true,
    TrackMemory:             true,
}

Example configuration file (gorilla.yaml):

# Parallel Processing
parallel_threshold: 2000
worker_pool_size: 8
chunk_size: 1000

# Memory Management  
memory_threshold: 1073741824  # 1GB
gc_pressure_threshold: 0.75

# Query Optimization
filter_fusion: true
predicate_pushdown: true
join_optimization: true

# Debugging
enable_profiling: false
verbose_logging: false
metrics_collection: true

Streaming Large Datasets

Process datasets larger than memory:

processor := gorilla.NewStreamingProcessor(gorilla.StreamingConfig{
    ChunkSize: 10000,
    MaxMemoryMB: 1024,
})

err := processor.ProcessLargeCSV("large_dataset.csv", func(chunk *gorilla.DataFrame) error {
    // Process each chunk and release the intermediate result
    result := chunk.Filter(gorilla.Col("score").Gt(gorilla.Lit(0.8)))
    defer result.Release()
    return saveResults(result)
})

🔧 CLI Tool

Gorilla includes a command-line tool for benchmarking and demonstrations:

# Build the CLI
make build

# Run interactive demo
./bin/gorilla-cli demo

# Run benchmarks
./bin/gorilla-cli benchmark --rows 100000

⚡ Performance

Gorilla is designed for high-performance data processing with several optimization features:

Benchmarks

Recent benchmarks show strong performance characteristics:

  • CSV I/O: ~52μs for 100 rows, ~5.3ms for 10,000 rows
  • Parallel Processing: Automatic scaling with adaptive worker pools
  • Memory Efficiency: Zero-copy operations with Apache Arrow columnar format
  • Query Optimization: 20-50% performance improvement with predicate pushdown and filter fusion

Performance Features

  • Lazy Evaluation: Build optimized execution plans before processing
  • Automatic Parallelization: Intelligent parallel processing for large datasets
  • Memory Management: Efficient Arrow-based columnar storage with GC pressure monitoring
  • Query Optimization: Predicate pushdown, filter fusion, and join optimization
  • Streaming: Process datasets larger than memory with configurable chunk sizes

Run benchmarks locally:

# Build and run performance tests
make build
./bin/gorilla-cli benchmark

# Run specific package benchmarks
go test -bench=. ./internal/dataframe
go test -bench=. ./internal/io

🤝 Contributing

Contributions are welcome! Please feel free to open an issue or submit a pull request.

To run the test suite, use the standard Go command:

go test ./...

📄 License

This project is licensed under the MIT License.

Documentation

Overview

Package gorilla provides a high-performance, in-memory DataFrame library for Go.

Gorilla is built on Apache Arrow and provides fast, efficient data manipulation with lazy evaluation and automatic parallelization. It offers a clear, intuitive API for filtering, selecting, transforming, and analyzing tabular data.

Core Concepts

DataFrame: A 2-dimensional table of data with named columns (Series).
Series: A 1-dimensional array of homogeneous data representing a single column.
LazyFrame: Deferred computation that builds an optimized query plan.
Expression: Type-safe operations for filtering, transforming, and aggregating data.

Memory Management

Gorilla uses Apache Arrow's memory management. Always call Release() on DataFrames and Series to prevent memory leaks. The recommended pattern is:

df := gorilla.NewDataFrame(series1, series2)
defer df.Release() // Essential for proper cleanup

Basic Usage

package main

import (
	"fmt"
	"github.com/apache/arrow-go/v18/arrow/memory"
	"github.com/paveg/gorilla"
)

func main() {
	mem := memory.NewGoAllocator()

	// Create Series (columns)
	names := gorilla.NewSeries("name", []string{"Alice", "Bob", "Charlie"}, mem)
	ages := gorilla.NewSeries("age", []int64{25, 30, 35}, mem)
	defer names.Release()
	defer ages.Release()

	// Create DataFrame
	df := gorilla.NewDataFrame(names, ages)
	defer df.Release()

	// Lazy evaluation with method chaining
	result, err := df.Lazy().
		Filter(gorilla.Col("age").Gt(gorilla.Lit(int64(30)))).
		Select("name").
		Collect()
	if err != nil {
		panic(err)
	}
	defer result.Release()

	fmt.Println(result)
}

Performance Features

- Zero-copy operations using Apache Arrow columnar format
- Automatic parallelization for DataFrames with 1000+ rows
- Lazy evaluation with query optimization
- Efficient join algorithms with automatic strategy selection
- Memory-efficient aggregations and transformations

This package is the sole public API for the library.

Index

Constants

const (
	// DefaultCheckInterval is the default interval for memory monitoring
	DefaultCheckInterval = 5 * time.Second
	// HighMemoryPressureThreshold is the threshold for triggering cleanup
	HighMemoryPressureThreshold = 0.8
)
const (
	// DefaultChunkSize is the default size for processing chunks
	DefaultChunkSize = 1000
	// BytesPerValue is the estimated bytes per DataFrame value
	BytesPerValue = 8
)

Variables

This section is empty.

Functions

func Case

func Case() *expr.CaseExpr

Case starts a case expression.

Returns a concrete *expr.CaseExpr for optimal performance and type safety.

func Coalesce

func Coalesce(exprs ...expr.Expr) *expr.FunctionExpr

Coalesce returns the first non-null expression.

Returns a concrete *expr.FunctionExpr for optimal performance and type safety.
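
A minimal usage sketch; the "nickname" and "name" columns are assumptions, not part of the package:

// Fill a display name from the first non-null of two assumed columns.
result, err := df.Lazy().
	WithColumn("display_name", gorilla.Coalesce(gorilla.Col("nickname"), gorilla.Col("name"))).
	Collect()
if err != nil {
	panic(err)
}
defer result.Release()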

func Col

func Col(name string) *expr.ColumnExpr

Col creates a column expression that references a column by name.

This is the primary way to reference columns in filters, selections, and other DataFrame operations. The column name must exist in the DataFrame when the expression is evaluated.

Returns a concrete *expr.ColumnExpr for optimal performance and type safety.

Example:

// Reference the "age" column
ageCol := gorilla.Col("age")

// Use in filters and operations
result, err := df.Lazy().
	Filter(gorilla.Col("age").Gt(gorilla.Lit(int64(30)))).
	Select("name", "age").
	Collect()

func Concat

func Concat(exprs ...expr.Expr) *expr.FunctionExpr

Concat concatenates string expressions.

Returns a concrete *expr.FunctionExpr for optimal performance and type safety.
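
A minimal usage sketch, assuming string columns "first" and "last":

// Build "first last" from two assumed string columns and a literal separator.
fullName := gorilla.Concat(gorilla.Col("first"), gorilla.Lit(" "), gorilla.Col("last"))

result, err := df.Lazy().
	WithColumn("full_name", fullName).
	Collect()
if err != nil {
	panic(err)
}
defer result.Release()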

func Count

func Count(column expr.Expr) *expr.AggregationExpr

Count returns an aggregation expression for count.

Returns a concrete *expr.AggregationExpr for optimal performance and type safety.

func If

func If(condition, thenValue, elseValue expr.Expr) *expr.FunctionExpr

If returns a conditional expression.

Returns a concrete *expr.FunctionExpr for optimal performance and type safety.
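
A minimal usage sketch, assuming an int64 "age" column:

// Label rows based on a condition over the assumed "age" column.
status := gorilla.If(
	gorilla.Col("age").Gt(gorilla.Lit(int64(17))),
	gorilla.Lit("adult"),
	gorilla.Lit("minor"),
)

result, err := df.Lazy().
	WithColumn("status", status).
	Collect()
if err != nil {
	panic(err)
}
defer result.Release()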

func Lit

func Lit(value interface{}) *expr.LiteralExpr

Lit creates a literal expression that represents a constant value.

This is used to create expressions with constant values for comparisons, arithmetic operations, and other transformations. The value type should match the expected operation type.

Supported types: string, int64, int32, float64, float32, bool

Returns a concrete *expr.LiteralExpr for optimal performance and type safety.

Example:

// Literal values for comparisons
age30 := gorilla.Lit(int64(30))
name := gorilla.Lit("Alice")
score := gorilla.Lit(95.5)
active := gorilla.Lit(true)

// Use in operations
result, err := df.Lazy().
	Filter(gorilla.Col("age").Gt(gorilla.Lit(int64(25)))).
	WithColumn("bonus", gorilla.Col("salary").Mul(gorilla.Lit(0.1))).
	Collect()

func Max

func Max(column expr.Expr) *expr.AggregationExpr

Max returns an aggregation expression for max.

Returns a concrete *expr.AggregationExpr for optimal performance and type safety.

func Mean

func Mean(column expr.Expr) *expr.AggregationExpr

Mean returns an aggregation expression for mean.

Returns a concrete *expr.AggregationExpr for optimal performance and type safety.

func Min

func Min(column expr.Expr) *expr.AggregationExpr

Min returns an aggregation expression for min.

Returns a concrete *expr.AggregationExpr for optimal performance and type safety.

func Sum

func Sum(column expr.Expr) *expr.AggregationExpr

Sum returns an aggregation expression for sum.

Returns a concrete *expr.AggregationExpr for optimal performance and type safety.

func WithDataFrame

func WithDataFrame(factory func() *DataFrame, fn func(*DataFrame) error) error

WithDataFrame provides automatic resource management for DataFrame operations.

This helper function creates a DataFrame using the provided factory function, executes the given operation, and automatically releases the DataFrame when done. This pattern is useful for operations where you want guaranteed cleanup.

The factory function should create and return a DataFrame. The operation function receives the DataFrame and performs the desired operations. Any error from the operation function is returned to the caller.

Example:

err := gorilla.WithDataFrame(func() *gorilla.DataFrame {
	mem := memory.NewGoAllocator()
	series1 := gorilla.NewSeries("name", []string{"Alice", "Bob"}, mem)
	series2 := gorilla.NewSeries("age", []int64{25, 30}, mem)
	return gorilla.NewDataFrame(series1, series2)
}, func(df *gorilla.DataFrame) error {
	result, err := df.Lazy().Filter(gorilla.Col("age").Gt(gorilla.Lit(int64(25)))).Collect()
	if err != nil {
		return err
	}
	defer result.Release()
	fmt.Println(result)
	return nil
})
// DataFrame is automatically released here

func WithMemoryManager

func WithMemoryManager(allocator memory.Allocator, fn func(*MemoryManager) error) error

WithMemoryManager creates a memory manager, executes a function with it, and releases all tracked resources

func WithSeries

func WithSeries(factory func() ISeries, fn func(ISeries) error) error

WithSeries creates a Series, executes a function with it, and automatically releases it
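
A minimal sketch mirroring the WithDataFrame example:

err := gorilla.WithSeries(func() gorilla.ISeries {
	mem := memory.NewGoAllocator()
	return gorilla.NewSeries("age", []int64{25, 30, 35}, mem)
}, func(s gorilla.ISeries) error {
	fmt.Println(s.Name(), s.Len()) // use the Series here
	return nil
})
// Series is automatically released here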

Types

type BatchManager

type BatchManager struct {
	// contains filtered or unexported fields
}

BatchManager manages a collection of spillable batches

func NewBatchManager

func NewBatchManager(monitor *MemoryUsageMonitor) *BatchManager

NewBatchManager creates a new batch manager

func (*BatchManager) AddBatch

func (bm *BatchManager) AddBatch(data *DataFrame)

AddBatch adds a new batch to the manager

func (*BatchManager) ReleaseAll

func (bm *BatchManager) ReleaseAll()

ReleaseAll releases all batches and clears the manager

func (*BatchManager) SpillLRU

func (bm *BatchManager) SpillLRU() error

SpillLRU spills the least recently used batch to free memory
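
A usage sketch combining the documented constructors; df1 and df2 are assumed DataFrames:

monitor := gorilla.NewMemoryUsageMonitor(512 << 20) // spill threshold: 512 MiB
bm := gorilla.NewBatchManager(monitor)
defer bm.ReleaseAll()

bm.AddBatch(df1) // df1 and df2 are assumed to exist
bm.AddBatch(df2)

// Under memory pressure, spill the least recently used batch.
if err := bm.SpillLRU(); err != nil {
	panic(err)
}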

type ChunkReader

type ChunkReader interface {
	// ReadChunk reads the next chunk of data
	ReadChunk() (*DataFrame, error)
	// HasNext returns true if there are more chunks to read
	HasNext() bool
	// Close closes the chunk reader and releases resources
	Close() error
}

ChunkReader represents a source of data chunks for streaming processing
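
As an illustration, a minimal in-memory implementation that slices an existing DataFrame into fixed-size chunks; sliceChunkReader is a hypothetical type, not part of the package:

type sliceChunkReader struct {
	df     *gorilla.DataFrame
	offset int
	size   int
}

func (r *sliceChunkReader) ReadChunk() (*gorilla.DataFrame, error) {
	end := r.offset + r.size
	if end > r.df.Len() {
		end = r.df.Len()
	}
	chunk := r.df.Slice(r.offset, end) // Slice is end-exclusive
	r.offset = end
	return chunk, nil
}

func (r *sliceChunkReader) HasNext() bool { return r.offset < r.df.Len() }

func (r *sliceChunkReader) Close() error { return nil }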

type ChunkWriter

type ChunkWriter interface {
	// WriteChunk writes a processed chunk of data
	WriteChunk(*DataFrame) error
	// Close closes the chunk writer and releases resources
	Close() error
}

ChunkWriter represents a destination for processed data chunks

type DataFrame

type DataFrame struct {
	// contains filtered or unexported fields
}

DataFrame represents a 2-dimensional table of data with named columns.

A DataFrame is composed of multiple Series (columns) and provides operations for filtering, selecting, transforming, joining, and aggregating data. It supports both eager and lazy evaluation patterns.

Key features:

  • Zero-copy operations using Apache Arrow columnar format
  • Automatic parallelization for large datasets (1000+ rows)
  • Type-safe operations through the Expression system
  • Memory-efficient storage and computation

Memory management: DataFrames must be released to prevent memory leaks:

df := gorilla.NewDataFrame(series1, series2)
defer df.Release()

Operations can be performed eagerly or using lazy evaluation:

// Eager: operations execute immediately
filtered := df.Filter(gorilla.Col("age").Gt(gorilla.Lit(int64(30))))

// Lazy: operations build a query plan, execute on Collect()
result, err := df.Lazy().
	Filter(gorilla.Col("age").Gt(gorilla.Lit(int64(30)))).
	Select("name", "age").
	Collect()

func NewDataFrame

func NewDataFrame(series ...ISeries) *DataFrame

NewDataFrame creates a new DataFrame from one or more Series.

A DataFrame is a 2-dimensional table with named columns. Each Series becomes a column in the DataFrame. All Series must have the same length.

Memory management: The returned DataFrame must be released by calling Release() to prevent memory leaks. Use defer for automatic cleanup:

df := gorilla.NewDataFrame(series1, series2)
defer df.Release()

Example:

mem := memory.NewGoAllocator()
names := gorilla.NewSeries("name", []string{"Alice", "Bob"}, mem)
ages := gorilla.NewSeries("age", []int64{25, 30}, mem)
defer names.Release()
defer ages.Release()

df := gorilla.NewDataFrame(names, ages)
defer df.Release()

fmt.Println(df) // Displays the DataFrame

func (*DataFrame) Column

func (d *DataFrame) Column(name string) (ISeries, bool)

Column returns the column with the given name.

func (*DataFrame) Columns

func (d *DataFrame) Columns() []string

Columns returns the column names in order.

func (*DataFrame) Concat

func (d *DataFrame) Concat(others ...*DataFrame) *DataFrame

Concat concatenates this DataFrame with others.

func (*DataFrame) Drop

func (d *DataFrame) Drop(names ...string) *DataFrame

Drop returns a new DataFrame without the specified columns.

func (*DataFrame) GroupBy

func (d *DataFrame) GroupBy(columns ...string) *GroupBy

GroupBy creates a GroupBy operation for eager evaluation.

func (*DataFrame) HasColumn

func (d *DataFrame) HasColumn(name string) bool

HasColumn returns true if the DataFrame has the given column.

func (*DataFrame) Join

func (d *DataFrame) Join(right *DataFrame, options *JoinOptions) (*DataFrame, error)

Join performs a join operation with another DataFrame.

func (*DataFrame) Lazy

func (d *DataFrame) Lazy() *LazyFrame

Lazy initiates a lazy operation on a DataFrame.

func (*DataFrame) Len

func (d *DataFrame) Len() int

Len returns the number of rows.

func (*DataFrame) Release

func (d *DataFrame) Release()

Release frees the memory used by the DataFrame.

func (*DataFrame) Select

func (d *DataFrame) Select(names ...string) *DataFrame

Select returns a new DataFrame with only the specified columns.

func (*DataFrame) Slice

func (d *DataFrame) Slice(start, end int) *DataFrame

Slice returns a new DataFrame with rows from start to end (exclusive).

func (*DataFrame) Sort

func (d *DataFrame) Sort(column string, ascending bool) (*DataFrame, error)

Sort returns a new DataFrame sorted by the specified column.

func (*DataFrame) SortBy

func (d *DataFrame) SortBy(columns []string, ascending []bool) (*DataFrame, error)

SortBy returns a new DataFrame sorted by multiple columns.
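
For example, to sort by department ascending and then salary descending (column names are assumptions):

sorted, err := df.SortBy([]string{"department", "salary"}, []bool{true, false})
if err != nil {
	panic(err)
}
defer sorted.Release()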

func (*DataFrame) String

func (d *DataFrame) String() string

String returns a string representation of the DataFrame.

func (*DataFrame) Width

func (d *DataFrame) Width() int

Width returns the number of columns.

type GroupBy

type GroupBy struct {
	// contains filtered or unexported fields
}

GroupBy is the public type for eager group by operations.

func (*GroupBy) Agg

func (gb *GroupBy) Agg(aggregations ...*expr.AggregationExpr) *DataFrame

Agg performs aggregation operations on the GroupBy.

Takes aggregation expressions directly without wrapper overhead.
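
A minimal sketch, assuming "department" and "salary" columns:

gb := df.GroupBy("department")
summary := gb.Agg(
	gorilla.Sum(gorilla.Col("salary")).As("total_salary"),
	gorilla.Count(gorilla.Col("salary")).As("n"),
)
defer summary.Release()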

type ISeries

type ISeries interface {
	Name() string             // Returns the name of the Series
	Len() int                 // Returns the number of elements
	DataType() arrow.DataType // Returns the Apache Arrow data type
	IsNull(index int) bool    // Checks if the value at index is null
	String() string           // Returns a string representation
	Array() arrow.Array       // Returns the underlying Arrow array
	Release()                 // Releases memory resources
}

ISeries provides a type-erased interface for Series of any supported type.

ISeries allows different typed Series to be used together in DataFrames and operations. It wraps Apache Arrow arrays and provides common operations for all Series types.

Supported data types: string, int64, int32, float64, float32, bool

Memory management: Series implement the Release() method and must be released to prevent memory leaks:

series := gorilla.NewSeries("name", []string{"Alice", "Bob"}, mem)
defer series.Release()

The interface provides:

  • Name() - column name for use in DataFrames
  • Len() - number of elements in the Series
  • DataType() - Apache Arrow data type information
  • IsNull(index) - null value checking
  • String() - human-readable representation
  • Array() - access to underlying Arrow array
  • Release() - memory cleanup

func NewSeries

func NewSeries[T any](name string, values []T, mem memory.Allocator) ISeries

NewSeries creates a new typed Series from a slice of values.

A Series is a 1-dimensional array of homogeneous data that represents a single column in a DataFrame. The type parameter T determines the data type of the Series.

Supported types: string, int64, int32, float64, float32, bool

Parameters:

  • name: The name of the Series (becomes the column name in a DataFrame)
  • values: Slice of values to populate the Series
  • mem: Apache Arrow memory allocator for managing the underlying storage

Memory management: The returned Series must be released by calling Release() to prevent memory leaks. Use defer for automatic cleanup:

series := gorilla.NewSeries("age", []int64{25, 30, 35}, mem)
defer series.Release()

Example:

mem := memory.NewGoAllocator()

// Create different types of Series
names := gorilla.NewSeries("name", []string{"Alice", "Bob", "Charlie"}, mem)
ages := gorilla.NewSeries("age", []int64{25, 30, 35}, mem)
scores := gorilla.NewSeries("score", []float64{95.5, 87.2, 92.1}, mem)
active := gorilla.NewSeries("active", []bool{true, false, true}, mem)

defer names.Release()
defer ages.Release()
defer scores.Release()
defer active.Release()

type JoinOptions

type JoinOptions struct {
	Type      JoinType
	LeftKey   string   // Single join key for left DataFrame
	RightKey  string   // Single join key for right DataFrame
	LeftKeys  []string // Multiple join keys for left DataFrame
	RightKeys []string // Multiple join keys for right DataFrame
}

JoinOptions specifies parameters for join operations

type JoinType

type JoinType int

JoinType represents the type of join operation

const (
	InnerJoin JoinType = iota
	LeftJoin
	RightJoin
	FullOuterJoin
)
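
A sketch of a single-key left join using these options; users and orders are assumed DataFrames:

opts := &gorilla.JoinOptions{
	Type:     gorilla.LeftJoin,
	LeftKey:  "id",      // column in users (assumed)
	RightKey: "user_id", // column in orders (assumed)
}

joined, err := users.Join(orders, opts)
if err != nil {
	panic(err)
}
defer joined.Release()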

type LazyFrame

type LazyFrame struct {
	// contains filtered or unexported fields
}

LazyFrame provides deferred computation with query optimization.

LazyFrame builds an Abstract Syntax Tree (AST) of operations without executing them immediately. This allows for query optimization, operation fusion, and efficient memory usage. Operations are only executed when Collect() is called.

Benefits of lazy evaluation:

  • Query optimization (predicate pushdown, operation fusion)
  • Memory efficiency (only final results allocated)
  • Parallel execution planning
  • Operation chaining with method syntax

Example:

lazyResult := df.Lazy().
	Filter(gorilla.Col("age").Gt(gorilla.Lit(int64(25)))).
	WithColumn("bonus", gorilla.Col("salary").Mul(gorilla.Lit(0.1))).
	GroupBy("department").
	Agg(gorilla.Sum(gorilla.Col("salary")).As("total_salary")).
	Select("department", "total_salary")

// Execute the entire plan efficiently
result, err := lazyResult.Collect()
defer result.Release()

func (*LazyFrame) Collect

func (lf *LazyFrame) Collect() (*DataFrame, error)

Collect executes all pending operations and returns the final DataFrame.

func (*LazyFrame) Filter

func (lf *LazyFrame) Filter(predicate expr.Expr) *LazyFrame

Filter adds a filter operation to the LazyFrame.

The predicate can be any expression that implements expr.Expr interface.

func (*LazyFrame) GroupBy

func (lf *LazyFrame) GroupBy(columns ...string) *LazyGroupBy

GroupBy adds a group by operation to the LazyFrame.

func (*LazyFrame) Join

func (lf *LazyFrame) Join(right *LazyFrame, options *JoinOptions) *LazyFrame

Join adds a join operation to the LazyFrame.

func (*LazyFrame) Release

func (lf *LazyFrame) Release()

Release frees the memory used by the LazyFrame.

func (*LazyFrame) Select

func (lf *LazyFrame) Select(columns ...string) *LazyFrame

Select adds a select operation to the LazyFrame.

func (*LazyFrame) Sort

func (lf *LazyFrame) Sort(column string, ascending bool) *LazyFrame

Sort adds a sort operation to the LazyFrame.

func (*LazyFrame) SortBy

func (lf *LazyFrame) SortBy(columns []string, ascending []bool) *LazyFrame

SortBy adds a multi-column sort operation to the LazyFrame.

func (*LazyFrame) String

func (lf *LazyFrame) String() string

String returns a string representation of the LazyFrame operations.

func (*LazyFrame) WithColumn

func (lf *LazyFrame) WithColumn(name string, expression expr.Expr) *LazyFrame

WithColumn adds a new column to the LazyFrame.

The expression can be any expression that implements expr.Expr interface.

type LazyGroupBy

type LazyGroupBy struct {
	// contains filtered or unexported fields
}

LazyGroupBy is the public type for lazy group by operations.

func (*LazyGroupBy) Agg

func (lgb *LazyGroupBy) Agg(aggregations ...*expr.AggregationExpr) *LazyFrame

Agg adds aggregation operations to the LazyGroupBy.

Takes aggregation expressions directly without wrapper overhead.

func (*LazyGroupBy) Count

func (lgb *LazyGroupBy) Count(column string) *LazyFrame

Count adds a count aggregation for the specified column.

func (*LazyGroupBy) Max

func (lgb *LazyGroupBy) Max(column string) *LazyFrame

Max adds a max aggregation for the specified column.

func (*LazyGroupBy) Mean

func (lgb *LazyGroupBy) Mean(column string) *LazyFrame

Mean adds a mean aggregation for the specified column.

func (*LazyGroupBy) Min

func (lgb *LazyGroupBy) Min(column string) *LazyFrame

Min adds a min aggregation for the specified column.

func (*LazyGroupBy) Sum

func (lgb *LazyGroupBy) Sum(column string) *LazyFrame

Sum adds a sum aggregation for the specified column.
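
These helpers make single-aggregation queries compact; a sketch assuming "department" and "salary" columns:

result, err := df.Lazy().
	GroupBy("department").
	Mean("salary").
	Collect()
if err != nil {
	panic(err)
}
defer result.Release()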

type MemoryAwareChunkReader

type MemoryAwareChunkReader struct {
	// contains filtered or unexported fields
}

MemoryAwareChunkReader wraps a ChunkReader with memory monitoring

func NewMemoryAwareChunkReader

func NewMemoryAwareChunkReader(reader ChunkReader, monitor *MemoryUsageMonitor) *MemoryAwareChunkReader

NewMemoryAwareChunkReader creates a new memory-aware chunk reader

func (*MemoryAwareChunkReader) Close

func (mr *MemoryAwareChunkReader) Close() error

Close closes the underlying reader

func (*MemoryAwareChunkReader) HasNext

func (mr *MemoryAwareChunkReader) HasNext() bool

HasNext returns true if there are more chunks to read

func (*MemoryAwareChunkReader) ReadChunk

func (mr *MemoryAwareChunkReader) ReadChunk() (*DataFrame, error)

ReadChunk reads the next chunk and records memory usage

type MemoryManager

type MemoryManager struct {
	// contains filtered or unexported fields
}

MemoryManager helps track and release multiple resources automatically.

MemoryManager is useful for complex scenarios where many short-lived resources are created and need bulk cleanup. For most use cases, prefer the defer pattern with individual Release() calls for better readability.

Use MemoryManager when:

  • Creating many temporary resources in loops
  • Complex operations with unpredictable resource lifetimes
  • Bulk operations where individual defer statements are impractical

The MemoryManager is safe for concurrent use from multiple goroutines.

Example:

err := gorilla.WithMemoryManager(mem, func(manager *gorilla.MemoryManager) error {
	for i := 0; i < 1000; i++ {
		temp := createTempDataFrame(i)
		manager.Track(temp) // Will be released automatically
	}
	return processData()
})
// All tracked resources are released here

func NewMemoryManager

func NewMemoryManager(allocator memory.Allocator) *MemoryManager

NewMemoryManager creates a new memory manager with the given allocator

func (*MemoryManager) Count

func (m *MemoryManager) Count() int

Count returns the number of tracked resources

func (*MemoryManager) ReleaseAll

func (m *MemoryManager) ReleaseAll()

ReleaseAll releases all tracked resources and clears the tracking list

func (*MemoryManager) Track

func (m *MemoryManager) Track(resource Releasable)

Track adds a resource to be managed and automatically released

type MemoryStats

type MemoryStats struct {
	// Total allocated memory in bytes
	AllocatedBytes int64
	// Peak allocated memory in bytes
	PeakAllocatedBytes int64
	// Number of active allocations
	ActiveAllocations int64
	// Number of garbage collections triggered
	GCCount int64
	// Last garbage collection time
	LastGCTime time.Time
	// Memory pressure level (0.0 to 1.0)
	MemoryPressure float64
}

MemoryStats provides detailed memory usage statistics

type MemoryUsageMonitor

type MemoryUsageMonitor struct {
	// contains filtered or unexported fields
}

MemoryUsageMonitor provides real-time memory usage monitoring and automatic spilling

func NewMemoryUsageMonitor

func NewMemoryUsageMonitor(spillThreshold int64) *MemoryUsageMonitor

NewMemoryUsageMonitor creates a new memory usage monitor with the specified spill threshold

func (*MemoryUsageMonitor) CurrentUsage

func (m *MemoryUsageMonitor) CurrentUsage() int64

CurrentUsage returns the current memory usage in bytes

func (*MemoryUsageMonitor) GetStats

func (m *MemoryUsageMonitor) GetStats() MemoryStats

GetStats returns current memory usage statistics

func (*MemoryUsageMonitor) PeakUsage

func (m *MemoryUsageMonitor) PeakUsage() int64

PeakUsage returns the peak memory usage in bytes

func (*MemoryUsageMonitor) RecordAllocation

func (m *MemoryUsageMonitor) RecordAllocation(bytes int64)

RecordAllocation records a new memory allocation

func (*MemoryUsageMonitor) RecordDeallocation

func (m *MemoryUsageMonitor) RecordDeallocation(bytes int64)

RecordDeallocation records a memory deallocation

func (*MemoryUsageMonitor) SetCleanupCallback

func (m *MemoryUsageMonitor) SetCleanupCallback(callback func() error)

SetCleanupCallback sets the callback function to be called for cleanup operations

func (*MemoryUsageMonitor) SetSpillCallback

func (m *MemoryUsageMonitor) SetSpillCallback(callback func() error)

SetSpillCallback sets the callback function to be called when memory usage exceeds threshold

func (*MemoryUsageMonitor) SpillCount

func (m *MemoryUsageMonitor) SpillCount() int64

SpillCount returns the number of times spilling was triggered

func (*MemoryUsageMonitor) StartMonitoring

func (m *MemoryUsageMonitor) StartMonitoring()

StartMonitoring starts the background memory monitoring goroutine

func (*MemoryUsageMonitor) StopMonitoring

func (m *MemoryUsageMonitor) StopMonitoring()

StopMonitoring stops the background memory monitoring goroutine
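
A usage sketch tying the monitor methods together; the callback body is an assumption:

monitor := gorilla.NewMemoryUsageMonitor(1 << 30) // spill threshold: 1 GiB
monitor.SetSpillCallback(func() error {
	// Free or spill application-held batches here (assumed behavior).
	return nil
})
monitor.StartMonitoring()
defer monitor.StopMonitoring()

monitor.RecordAllocation(4096)
stats := monitor.GetStats()
fmt.Printf("current=%d peak=%d pressure=%.2f\n",
	monitor.CurrentUsage(), monitor.PeakUsage(), stats.MemoryPressure)
monitor.RecordDeallocation(4096)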

type Releasable

type Releasable interface {
	Release()
}

Releasable represents any resource that can be released to free memory.

This interface is implemented by DataFrames, Series, and other resources that use Apache Arrow memory management. Always call Release() when done with a resource to prevent memory leaks.

The recommended pattern is to use defer for automatic cleanup:

df := gorilla.NewDataFrame(series1, series2)
defer df.Release() // Automatic cleanup

type SpillableBatch

type SpillableBatch struct {
	// contains filtered or unexported fields
}

SpillableBatch represents a batch of data that can be spilled to disk

func NewSpillableBatch

func NewSpillableBatch(data *DataFrame) *SpillableBatch

NewSpillableBatch creates a new spillable batch

func (*SpillableBatch) GetData

func (sb *SpillableBatch) GetData() (*DataFrame, error)

GetData returns the data, loading from spill if necessary. If the data has been spilled to disk, this method attempts to reload it. Returns an error if the data cannot be loaded from spill storage.

func (*SpillableBatch) Release

func (sb *SpillableBatch) Release()

Release releases the batch resources

func (*SpillableBatch) Spill

func (sb *SpillableBatch) Spill() error

Spill spills the batch to disk and releases memory
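
A minimal sketch of the spill-and-reload cycle; df is an assumed DataFrame:

batch := gorilla.NewSpillableBatch(df)
defer batch.Release()

// Move the batch out of memory ...
if err := batch.Spill(); err != nil {
	panic(err)
}

// ... and reload it on demand.
restored, err := batch.GetData()
if err != nil {
	panic(err)
}
_ = restored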

type StreamingOperation

type StreamingOperation interface {
	// Apply applies the operation to a data chunk
	Apply(*DataFrame) (*DataFrame, error)
	// Release releases any resources held by the operation
	Release()
}

StreamingOperation represents an operation that can be applied to data chunks
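
A sketch of a per-chunk filter implementing this interface; filterOp is a hypothetical type, and the "score" column is an assumption:

type filterOp struct{ minScore float64 }

func (f *filterOp) Apply(chunk *gorilla.DataFrame) (*gorilla.DataFrame, error) {
	// Keep only rows whose assumed "score" column exceeds the threshold.
	return chunk.Lazy().
		Filter(gorilla.Col("score").Gt(gorilla.Lit(f.minScore))).
		Collect()
}

func (f *filterOp) Release() {}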

type StreamingProcessor

type StreamingProcessor struct {
	// contains filtered or unexported fields
}

StreamingProcessor handles processing of datasets larger than memory

func NewStreamingProcessor

func NewStreamingProcessor(chunkSize int, allocator memory.Allocator, monitor *MemoryUsageMonitor) *StreamingProcessor

NewStreamingProcessor creates a new streaming processor with specified chunk size

func (*StreamingProcessor) Close

func (sp *StreamingProcessor) Close() error

Close closes the streaming processor and releases resources

func (*StreamingProcessor) ProcessStreaming

func (sp *StreamingProcessor) ProcessStreaming(
	reader ChunkReader, writer ChunkWriter, operations []StreamingOperation,
) error

ProcessStreaming processes data in streaming fashion using chunks
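
A wiring sketch using the documented constructor; reader, writer, and ops are assumed implementations of the interfaces above:

mem := memory.NewGoAllocator()
monitor := gorilla.NewMemoryUsageMonitor(1 << 30)
sp := gorilla.NewStreamingProcessor(10000, mem, monitor)
defer sp.Close()

// reader (ChunkReader), writer (ChunkWriter), and ops ([]StreamingOperation)
// are assumed to be provided by the caller.
if err := sp.ProcessStreaming(reader, writer, ops); err != nil {
	panic(err)
}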

Directories

Path	Synopsis
cmd
gorilla-cli	command
config	command
Package main demonstrates how to use Gorilla's configurable processing parameters
debug	command
io	command
internal
config
Package config provides configuration management for Gorilla DataFrame operations
dataframe
Package dataframe provides high-performance DataFrame operations
errors
Package errors provides standardized error types for DataFrame operations.
expr
Package expr provides expression evaluation for DataFrame operations
io
Package io provides data input/output operations for DataFrames.
parallel
Package parallel provides concurrent processing utilities
series
Package series provides data structures for column operations
validation
Package validation provides input validation utilities for DataFrame operations.
