Documentation ¶
Index ¶
- Constants
- func CancelAsyncQuery(db *sql.DB, token rows.AsyncResult) error
- func ExecAsync(db *sql.DB, query string, args ...any) (rows.AsyncResult, error)
- func ExecAsyncContext(ctx context.Context, db *sql.DB, query string, args ...any) (rows.AsyncResult, error)
- func IsAsyncQueryRunning(db *sql.DB, token rows.AsyncResult) (bool, error)
- func IsAsyncQuerySuccessful(db *sql.DB, token rows.AsyncResult) (bool, error)
- func NoError(option driverOption) driverOptionWithError
- func ParseDSNString(dsn string) (*types.FireboltSettings, error)
- func WithAccountID(accountID string) driverOption
- func WithAccountName(accountName string) driverOptionWithError
- func WithClientParams(accountID string, token string, userAgent string) driverOption
- func WithDatabaseAndEngineName(databaseName, engineName string) driverOptionWithError
- func WithDatabaseName(databaseName string) driverOption
- func WithDefaultQueryParams(params map[string]string) driverOption
- func WithEngineUrl(engineUrl string) driverOption
- func WithToken(token string) driverOption
- func WithTransport(transport http.RoundTripper) driverOption
- func WithUserAgent(userAgent string) driverOption
- type Batch
- type BatchColumn
- type BatchConnection
- type BatchMetric
- type BatchOption
- type CompressionCodec
- type DescribeConnection
- type FireboltConnector
- type FireboltDriver
- type QueryStatus
- type QueryStatusResponse
- type SerializationFormat
Constants ¶
const DefaultBufferSize int64 = 16384
DefaultBufferSize is the default number of rows buffered before the serialiser flushes to the underlying writer, enabling true streaming: compressed data flows to the HTTP transport incrementally instead of buffering the entire file in memory. Override via WithBufferSize.
Variables ¶
This section is empty.
Functions ¶
func CancelAsyncQuery ¶ added in v1.11.0
func CancelAsyncQuery(db *sql.DB, token rows.AsyncResult) error
func ExecAsync ¶ added in v1.11.0
func ExecAsync(db *sql.DB, query string, args ...any) (rows.AsyncResult, error)
func ExecAsyncContext ¶ added in v1.11.0
func ExecAsyncContext(ctx context.Context, db *sql.DB, query string, args ...any) (rows.AsyncResult, error)
func IsAsyncQueryRunning ¶ added in v1.11.0
func IsAsyncQueryRunning(db *sql.DB, token rows.AsyncResult) (bool, error)
func IsAsyncQuerySuccessful ¶ added in v1.11.0
func IsAsyncQuerySuccessful(db *sql.DB, token rows.AsyncResult) (bool, error)
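A minimal usage sketch of the async functions, assuming the package is imported as firebolt, db is a *sql.DB opened with this driver, and the table names are hypothetical:

```go
// Submit the query asynchronously; the token identifies it for later polling.
token, err := firebolt.ExecAsyncContext(ctx, db, "INSERT INTO events SELECT * FROM staging")
if err != nil {
	return err
}
// Poll until the query stops running.
for {
	running, err := firebolt.IsAsyncQueryRunning(db, token)
	if err != nil {
		return err
	}
	if !running {
		break
	}
	time.Sleep(time.Second)
}
// Distinguish success from cancellation/failure.
ok, err := firebolt.IsAsyncQuerySuccessful(db, token)
if err != nil {
	return err
}
if !ok {
	return fmt.Errorf("async query did not complete successfully")
}
```

CancelAsyncQuery accepts the same token and can be called instead of polling when the result is no longer needed.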
func ParseDSNString ¶
func ParseDSNString(dsn string) (*types.FireboltSettings, error)
ParseDSNString parses a DSN in the format: firebolt://username:password@db_name[/engine_name][?account_name=organization]. It returns a settings object with all parsed values populated, or an error if required fields couldn't be parsed or if some characters were left unparsed after parsing.
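A brief sketch, assuming the package is imported as firebolt and using placeholder credentials:

```go
// Parse a DSN with an explicit engine and account name.
settings, err := firebolt.ParseDSNString("firebolt://user:secret@my_db/my_engine?account_name=my_org")
if err != nil {
	log.Fatal(err)
}
// settings is a *types.FireboltSettings holding the parsed values.
_ = settings
```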
func WithAccountID ¶ added in v1.6.1
func WithAccountID(accountID string) driverOption
WithAccountID defines account ID for the driver
func WithAccountName ¶ added in v1.8.0
func WithAccountName(accountName string) driverOptionWithError
WithAccountName defines account name for the driver
func WithClientParams ¶ added in v1.1.0
func WithClientParams(accountID string, token string, userAgent string) driverOption
WithClientParams defines client parameters for the driver
func WithDatabaseAndEngineName ¶ added in v1.8.0
func WithDatabaseAndEngineName(databaseName, engineName string) driverOptionWithError
WithDatabaseAndEngineName defines database name and engine name for the driver
func WithDatabaseName ¶ added in v1.1.0
func WithDatabaseName(databaseName string) driverOption
WithDatabaseName defines database name for the driver
func WithDefaultQueryParams ¶ added in v1.14.0
func WithDefaultQueryParams(params map[string]string) driverOption
WithDefaultQueryParams defines default query parameters that will be seeded into the connection. These parameters are included in all HTTP requests and can be overridden by SET statements.
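A short sketch, assuming the package is imported as firebolt; the parameter name shown is a hypothetical example, not a documented parameter:

```go
// Seed a default query parameter into every request made on this connection.
connector := firebolt.FireboltConnectorWithOptions(
	firebolt.WithDefaultQueryParams(map[string]string{
		"my_param": "my_value", // hypothetical parameter
	}),
	// ...other options such as engine URL, database name, and token...
)
db := sql.OpenDB(connector)
```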
func WithEngineUrl ¶ added in v1.1.0
func WithEngineUrl(engineUrl string) driverOption
WithEngineUrl defines engine url for the driver
func WithToken ¶ added in v1.6.1
func WithToken(token string) driverOption
WithToken defines token for the driver
func WithTransport ¶ added in v1.18.0
func WithTransport(transport http.RoundTripper) driverOption
WithTransport sets a custom http.RoundTripper for all HTTP requests made by the SDK (queries, batch uploads, authentication). Use client.DefaultTransport() as a starting point and override specific fields, or wrap it with middleware (e.g. otelhttp for tracing):
transport := client.DefaultTransport()
transport.DialContext = (&net.Dialer{Timeout: 60*time.Second}).DialContext
connector, _ := firebolt.OpenConnectorWithDSN(dsn, firebolt.WithTransport(transport))
db := sql.OpenDB(connector)
func WithUserAgent ¶ added in v1.6.1
func WithUserAgent(userAgent string) driverOption
WithUserAgent defines user agent for the driver
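The options above can be combined via FireboltConnectorWithOptions. A sketch, assuming the package is imported as firebolt and using placeholder values for the engine URL, database name, and token:

```go
// Build a connector from individual driver options instead of a DSN.
connector := firebolt.FireboltConnectorWithOptions(
	firebolt.WithEngineUrl("https://my-engine.example.com"), // placeholder
	firebolt.WithDatabaseName("my_db"),
	firebolt.WithToken(token),
)
db := sql.OpenDB(connector)
defer db.Close()
```

Options returning driverOptionWithError (e.g. WithAccountName) go through FireboltConnectorWithOptionsWithErrors instead, which reports configuration errors up front.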
Types ¶
type Batch ¶ added in v1.16.0
type Batch interface {
// Append buffers a single row. The number of arguments must match the
// column count, and each value must be convertible to the column's type.
Append(v ...interface{}) error
// Column returns a handle for columnar appends to the column at the
// given index. The returned BatchColumn is valid for the lifetime of
// the batch.
Column(index int) BatchColumn
// Send serialises all buffered rows and uploads them to the engine.
// The batch is reset after a successful send and can be reused.
Send(ctx context.Context) error
// Abort discards all buffered rows without sending.
Abort() error
// GetMetrics returns timing metrics for each Send() call made on this
// batch (one entry per call, in chronological order). Returns an error
// if metrics collection was not enabled via WithBatchMetrics.
GetMetrics() ([]BatchMetric, error)
}
Batch represents an in-progress batch insert operation. Rows are buffered client-side and serialised when Send is called. The serialised payload is uploaded to the engine via a multipart form POST.
Two insertion modes are supported:
Row-wise — call Append once per row:
batch.Append(col1Val, col2Val, col3Val)
Columnar — obtain a column handle and append an entire typed slice at once:
batch.Column(0).Append([]int32{1, 2, 3})
batch.Column(1).Append([]string{"a", "b", "c"})
Both modes can be mixed freely; the only requirement is that all columns have the same number of rows by the time Send is called.
type BatchColumn ¶ added in v1.16.0
type BatchColumn interface {
// Append appends all values in the given slice to this column.
// The slice element type must be compatible with the column's Firebolt
// type (e.g. []int32 for an "int" column, []string for "text").
Append(v interface{}) error
}
BatchColumn is returned by Batch.Column and supports appending an entire typed slice of values to a single column (columnar insertion).
type BatchConnection ¶ added in v1.16.0
type BatchConnection interface {
PrepareBatch(ctx context.Context, query string, opts ...BatchOption) (Batch, error)
}
BatchConnection provides access to batch insert functionality. Obtain it via database/sql (*sql.Conn).Raw:
conn.Raw(func(driverConn interface{}) error {
batch, err := driverConn.(fireboltgosdk.BatchConnection).PrepareBatch(
ctx, "INSERT INTO my_table (col1, col2, col3)")
if err != nil { return err }
for _, row := range rows {
if err := batch.Append(row.Col1, row.Col2, row.Col3); err != nil {
return err
}
}
return batch.Send(ctx)
})
type BatchMetric ¶ added in v1.20.0
type BatchMetric struct {
SerializeStart time.Time
SerializeSeconds float64
UploadStart time.Time
UploadSeconds float64
}
BatchMetric records timing for one Send() call, split into the serialisation phase and the network upload phase.
type BatchOption ¶ added in v1.20.0
type BatchOption func(*batchConfig)
BatchOption configures batch behaviour. Pass to PrepareBatch.
func WithBatchMetrics ¶ added in v1.20.1
func WithBatchMetrics() BatchOption
WithBatchMetrics enables per-Send() timing metrics collection. When enabled, GetMetrics returns the recorded metrics. When disabled (the default), GetMetrics returns an error.
func WithBufferSize ¶ added in v1.20.0
func WithBufferSize(n int64) BatchOption
WithBufferSize sets the maximum number of rows buffered before the serialiser flushes to the underlying writer. Smaller values produce more incremental streaming (less peak memory) at a small metadata cost. n must be positive; passing n <= 0 causes PrepareBatch to return an error.
The default is DefaultBufferSize (16384).
func WithCompression ¶ added in v1.20.1
func WithCompression(c CompressionCodec) BatchOption
WithCompression selects the compression codec used inside the serialised file. For Parquet this controls page-level compression. The default is CompressSnappy.
func WithCompressionLevel ¶ added in v1.20.1
func WithCompressionLevel(level int) BatchOption
WithCompressionLevel sets the compression level passed to the underlying codec. The meaning is codec-specific:
- Gzip / Deflate: 0 (no compression) – 9 (best), as defined by compress/flate.
- Zstd: encoder level (e.g. 1 = fastest, 3 = default, 11 = best).
- LZ4: 1–9 (Parquet only).
- Brotli: quality 0–11 (Parquet only).
- Snappy / Uncompressed: ignored (no tuneable level).
When this option is not used, each codec applies its own built-in default.
func WithQueryLabel ¶ added in v1.20.1
func WithQueryLabel(label string) BatchOption
WithQueryLabel sets the query label sent with the batch upload request. This is safe to use when multiple batches share the same connection, as it is stored per-batch rather than mutating shared connection state.
func WithSerialization ¶ added in v1.20.1
func WithSerialization(f SerializationFormat) BatchOption
WithSerialization selects the wire format for batch uploads. The default is FormatParquet.
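The batch options above can be combined in a single PrepareBatch call. A sketch, assuming conn is a *sql.Conn backed by this driver; the table and column names are hypothetical:

```go
err := conn.Raw(func(driverConn interface{}) error {
	// Configure compression, buffering, and metrics for this batch.
	batch, err := driverConn.(fireboltgosdk.BatchConnection).PrepareBatch(
		ctx, "INSERT INTO my_table (id, name)",
		fireboltgosdk.WithCompression(fireboltgosdk.CompressZstd),
		fireboltgosdk.WithCompressionLevel(3),
		fireboltgosdk.WithBufferSize(4096),
		fireboltgosdk.WithBatchMetrics(),
	)
	if err != nil {
		return err
	}
	if err := batch.Append(int32(1), "a"); err != nil {
		return err
	}
	if err := batch.Send(ctx); err != nil {
		return err
	}
	// One BatchMetric per Send call, since metrics were enabled above.
	metrics, err := batch.GetMetrics()
	if err != nil {
		return err
	}
	log.Printf("serialise %.3fs, upload %.3fs",
		metrics[0].SerializeSeconds, metrics[0].UploadSeconds)
	return nil
})
```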
type CompressionCodec ¶ added in v1.20.1
type CompressionCodec int
CompressionCodec selects the compression algorithm applied within the serialised file (e.g. Parquet page compression).
const (
	// CompressSnappy uses Snappy compression. This is the default.
	CompressSnappy CompressionCodec = iota
	// CompressZstd uses Zstandard compression.
	CompressZstd
	// CompressGzip uses gzip compression.
	CompressGzip
	// CompressUncompressed disables compression entirely.
	CompressUncompressed
	// CompressLZ4 uses LZ4 compression.
	CompressLZ4
	// CompressBrotli uses Brotli compression.
	CompressBrotli
)
type DescribeConnection ¶ added in v1.13.0
type DescribeConnection interface {
Describe(ctx context.Context, query string, args ...interface{}) (*types.DescribeResult, error)
}
DescribeConnection provides access to query description functionality.
type FireboltConnector ¶ added in v1.1.0
type FireboltConnector struct {
// contains filtered or unexported fields
}
FireboltConnector is an intermediate type between a Connection and a Driver which stores session data
func FireboltConnectorWithOptions ¶ added in v1.1.0
func FireboltConnectorWithOptions(opts ...driverOption) *FireboltConnector
FireboltConnectorWithOptions builds a custom connector
func FireboltConnectorWithOptionsWithErrors ¶ added in v1.8.0
func FireboltConnectorWithOptionsWithErrors(opts ...driverOptionWithError) (*FireboltConnector, error)
FireboltConnectorWithOptionsWithErrors builds a custom connector with error handling
func OpenConnectorWithDSN ¶ added in v1.18.0
func OpenConnectorWithDSN(dsn string, opts ...driverOption) (*FireboltConnector, error)
OpenConnectorWithDSN parses a DSN string and applies the given driver options (e.g. WithTransport), returning a connector suitable for sql.OpenDB. This is the recommended way to customize transport settings that cannot be expressed in a DSN string:
transport := client.DefaultTransport()
transport.IdleConnTimeout = 2 * time.Minute
connector, err := firebolt.OpenConnectorWithDSN(dsn, firebolt.WithTransport(transport))
db := sql.OpenDB(connector)
func (*FireboltConnector) Driver ¶ added in v1.1.0
func (c *FireboltConnector) Driver() driver.Driver
Driver returns the underlying driver of the Connector
type FireboltDriver ¶
type FireboltDriver struct {
// contains filtered or unexported fields
}
func (*FireboltDriver) Open ¶
func (d *FireboltDriver) Open(dsn string) (driver.Conn, error)
Open parses the DSN string and, if it is valid, tries to establish a connection.
func (*FireboltDriver) OpenConnector ¶ added in v1.1.0
func (d *FireboltDriver) OpenConnector(dsn string) (driver.Connector, error)
type QueryStatus ¶ added in v1.11.0
type QueryStatus string
const (
	QueryStatusRunning    QueryStatus = "RUNNING"
	QueryStatusSuccessful QueryStatus = "ENDED_SUCCESSFULLY"
	QueryStatusCanceled   QueryStatus = "CANCELED_EXECUTION"
)
type QueryStatusResponse ¶ added in v1.11.0
type QueryStatusResponse struct {
// contains filtered or unexported fields
}
type SerializationFormat ¶ added in v1.20.1
type SerializationFormat int
SerializationFormat selects the wire format used to encode batch data.
const (
	// FormatParquet uses Parquet (columnar).
	// This is the default when no format is specified.
	FormatParquet SerializationFormat = iota
)