bigquery

package
v0.21.0 Latest

Published: Jan 20, 2026 License: MIT Imports: 12 Imported by: 0

Documentation

Overview

Package bigquery provides an implementation of the BigQuery data warehouse.

Index

Constants

const MapperName = "bigquery"

MapperName is the name of the BigQuery type mapper.

Variables

This section is empty.

Functions

func NewBigQueryTableDriver

func NewBigQueryTableDriver(
	db *bigquery.Client,
	dataset string,
	writer Writer,
	opts ...BigQueryTableDriverOption,
) warehouse.Driver

NewBigQueryTableDriver creates a new BigQuery table driver.
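
A minimal construction sketch, assuming a placeholder import path for this module (aliased bq below; substitute the real path) and using only the signatures documented on this page. The Google client package is aliased gbq to avoid clashing with this package's name:

package main

import (
	"context"
	"log"
	"time"

	gbq "cloud.google.com/go/bigquery"

	// Placeholder import path; substitute this module's real path.
	bq "example.com/warehouse/bigquery"
)

func main() {
	ctx := context.Background()

	// Standard client from cloud.google.com/go/bigquery.
	client, err := gbq.NewClient(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Free-tier-compatible writer backed by load jobs (see NewLoadJobWriter below).
	writer := bq.NewLoadJobWriter(client, "my_dataset", 30*time.Second, bq.NewFieldTypeMapper())

	// Driver partitioned by a top-level TIMESTAMP/DATE field with the default DAY interval.
	driver := bq.NewBigQueryTableDriver(
		client,
		"my_dataset",
		writer,
		bq.WithPartitionByField("event_time"),
	)
	_ = driver // used through the warehouse.Driver interface
}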

func NewFieldTypeMapper

func NewFieldTypeMapper() warehouse.FieldTypeMapper[SpecificBigQueryType]

NewFieldTypeMapper creates a mapper that supports BigQuery types.

Types

type BigQueryTableDriverOption added in v0.21.0

type BigQueryTableDriverOption func(*bigQueryTableDriver)

BigQueryTableDriverOption configures a BigQuery table driver.

func WithPartitionBy added in v0.21.0

func WithPartitionBy(cfg PartitioningConfig) BigQueryTableDriverOption

WithPartitionBy enables partitioning with the provided configuration. Panics if cfg.Field is empty.

func WithPartitionByField added in v0.21.0

func WithPartitionByField(field string) BigQueryTableDriverOption

WithPartitionByField enables partitioning by the specified field with default DAY interval and no expiration. Panics if field is empty.

func WithQueryTimeout added in v0.21.0

func WithQueryTimeout(timeout time.Duration) BigQueryTableDriverOption

WithQueryTimeout sets the timeout used for BigQuery metadata operations performed by the driver.

func WithTableCreationTimeout added in v0.21.0

func WithTableCreationTimeout(timeout time.Duration) BigQueryTableDriverOption

WithTableCreationTimeout sets how long the driver waits for a created table to become queryable.
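
Continuing the construction sketch above (client and writer as before, bq the placeholder alias for this package), the timeout options can be combined with any partitioning option; the durations here are arbitrary:

driver := bq.NewBigQueryTableDriver(
	client,
	"my_dataset",
	writer,
	bq.WithQueryTimeout(15*time.Second),        // metadata operations
	bq.WithTableCreationTimeout(2*time.Minute), // wait for a new table to become queryable
)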

type PartitionInterval added in v0.21.0

type PartitionInterval string

PartitionInterval represents the time-based partition interval type for BigQuery tables.

const (
	// PartitionIntervalHour partitions by hour.
	PartitionIntervalHour PartitionInterval = "HOUR"
	// PartitionIntervalDay partitions by day (default).
	PartitionIntervalDay PartitionInterval = "DAY"
	// PartitionIntervalMonth partitions by month.
	PartitionIntervalMonth PartitionInterval = "MONTH"
	// PartitionIntervalYear partitions by year.
	PartitionIntervalYear PartitionInterval = "YEAR"
)

type PartitioningConfig added in v0.21.0

type PartitioningConfig struct {
	// Interval is the partition interval type (HOUR, DAY, MONTH, YEAR).
	Interval PartitionInterval
	// Field is the name of the field to partition by. Must be a top-level TIMESTAMP or DATE field.
	// If empty, partitioning uses the pseudo column _PARTITIONTIME.
	Field string
	// ExpirationDays is the number of days to keep the storage for a partition.
	// If 0, the data in the partitions does not expire.
	ExpirationDays int
}

PartitioningConfig represents the configuration for BigQuery table partitioning.
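
For example, a hypothetical configuration that partitions by month on an ingestion timestamp field and expires partitions after 90 days could be passed through WithPartitionBy (bq, client, and writer as in the sketches above):

cfg := bq.PartitioningConfig{
	Interval:       bq.PartitionIntervalMonth,
	Field:          "ingested_at", // must be a top-level TIMESTAMP or DATE field
	ExpirationDays: 90,
}
driver := bq.NewBigQueryTableDriver(client, "my_dataset", writer, bq.WithPartitionBy(cfg))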

type SpecificBigQueryType

type SpecificBigQueryType struct {
	FieldType  bigquery.FieldType
	Required   bool
	Repeated   bool
	FormatFunc func(SpecificBigQueryType) func(i any, m arrow.Metadata) (any, error)
	// Schema holds the nested schema for RECORD types, nil for primitive types
	Schema *bigquery.Schema
}

SpecificBigQueryType represents a BigQuery data type with its string representation and formatting function.

func (SpecificBigQueryType) Format

func (t SpecificBigQueryType) Format(i any, m arrow.Metadata) (any, error)

Format formats a value according to the BigQuery type's formatting function.
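
As a sketch of the FormatFunc/Format contract only (the built-in mapper's formatters may behave differently), the following defines a STRING type whose formatter stringifies any value. gbq and bq are the import aliases from the sketches above, and arrow is the arrow package from the Apache Arrow Go module that also provides arrow.Metadata in the signatures on this page:

t := bq.SpecificBigQueryType{
	FieldType: gbq.StringFieldType, // FieldType constant from cloud.google.com/go/bigquery
	Required:  true,
	FormatFunc: func(bq.SpecificBigQueryType) func(i any, m arrow.Metadata) (any, error) {
		return func(i any, _ arrow.Metadata) (any, error) {
			return fmt.Sprintf("%v", i), nil
		}
	},
}

v, err := t.Format(42, arrow.Metadata{})
// Assuming Format dispatches to the type's FormatFunc, v is "42" and err is nil.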

type Writer

type Writer interface {
	Write(ctx context.Context, tableName string, schema *arrow.Schema, rows []map[string]any) error
}

Writer is the interface implemented by components that write rows to a BigQuery table.
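
Because the interface is small, a stub implementation is straightforward. The logWriter below is hypothetical and only logs what it would write (imports needed: context, log, and the Arrow Go module's arrow package); it can stand in for the real writers in tests:

// logWriter satisfies the Writer interface but performs no BigQuery calls.
type logWriter struct{}

func (logWriter) Write(ctx context.Context, tableName string, schema *arrow.Schema, rows []map[string]any) error {
	log.Printf("would write %d rows to %s (%d schema fields)", len(rows), tableName, len(schema.Fields()))
	return nil
}

A logWriter value can then be passed as the writer argument of NewBigQueryTableDriver.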

func NewLoadJobWriter

func NewLoadJobWriter(
	db *bigquery.Client,
	dataset string,
	queryTimeout time.Duration,
	fieldTypeMapper warehouse.FieldTypeMapper[SpecificBigQueryType],
) Writer

NewLoadJobWriter creates a new free-tier-compatible writer that uses load jobs with NDJSON.

func NewStreamingWriter

func NewStreamingWriter(
	db *bigquery.Client,
	dataset string,
	queryTimeout time.Duration,
	fieldTypeMapper warehouse.FieldTypeMapper[SpecificBigQueryType],
) Writer

NewStreamingWriter creates a writer that uses streaming inserts, which are not available on the BigQuery free tier.
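
The two writers are interchangeable behind the Writer interface. A hypothetical helper that picks one at construction time might look like this (gbq and bq are the import aliases from the sketches above; the 30-second timeout is arbitrary):

// chooseWriter picks the free-tier-compatible load-job writer or the
// streaming writer; both constructors are as documented above.
func chooseWriter(client *gbq.Client, dataset string, freeTier bool) bq.Writer {
	mapper := bq.NewFieldTypeMapper()
	if freeTier {
		// Load jobs with NDJSON work on the free tier.
		return bq.NewLoadJobWriter(client, dataset, 30*time.Second, mapper)
	}
	// Streaming inserts require a billed project.
	return bq.NewStreamingWriter(client, dataset, 30*time.Second, mapper)
}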
