Documentation
Index
- func AppendToDefaultStream(w io.Writer, projectID, datasetID, tableID string) error
- func AppendToPendingStream(w io.Writer, projectID, datasetID, tableID string) error
- func BuildAppendRowsRequest(data [][]byte) *storagepb.AppendRowsRequest
- func GoTypeToArrowType(goType reflect.Type) arrow.DataType
- func NormalizeDescriptor(in protoreflect.MessageDescriptor) (*descriptorpb.DescriptorProto, error)
- func StorageSchemaToProto2Descriptor(inSchema *storagepb.TableSchema, scope string) (protoreflect.Descriptor, error)
- func StorageSchemaToProto3Descriptor(inSchema *storagepb.TableSchema, scope string) (protoreflect.Descriptor, error)
- func UniqueBQName(prefix string) (string, error)
- func UniqueBucketName(prefix, projectID string) (string, error)
- type ParquetRows
- func NewParquetRowsReader(ctx context.Context, filePath string) (*ParquetRows, error)
- func (p *ParquetRows) Close() error
- func (p *ParquetRows) ColumnTypeDatabaseTypeName(index int) string
- func (p *ParquetRows) ColumnTypeNullable(index int) (nullable, ok bool)
- func (p *ParquetRows) ColumnTypePrecisionScale(index int) (precision, scale int64, ok bool)
- func (p *ParquetRows) ColumnTypeScanType(index int) reflect.Type
- func (p *ParquetRows) Columns() []string
- func (p *ParquetRows) Next(dest []driver.Value) error
Constants
This section is empty.
Variables
This section is empty.
Functions
func AppendToDefaultStream
func AppendToDefaultStream(w io.Writer, projectID, datasetID, tableID string) error
AppendToDefaultStream demonstrates using the managedwriter package to write some example data to a default stream.
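A minimal sketch of the pattern this function demonstrates, using the public managedwriter and adapt packages; the generated message mypb.SampleRow and its fields are hypothetical stand-ins for a message matching the destination table's schema:

package snippets

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery/storage/managedwriter"
	"cloud.google.com/go/bigquery/storage/managedwriter/adapt"
	"google.golang.org/protobuf/proto"

	mypb "example.com/project/mypb" // hypothetical generated proto package
)

// appendDefault writes serialized proto rows to the table's default stream,
// which provides at-least-once delivery without explicit stream management.
func appendDefault(ctx context.Context, projectID, datasetID, tableID string) error {
	client, err := managedwriter.NewClient(ctx, projectID)
	if err != nil {
		return err
	}
	defer client.Close()

	// Build a self-contained descriptor from the compiled message.
	dp, err := adapt.NormalizeDescriptor((&mypb.SampleRow{}).ProtoReflect().Descriptor())
	if err != nil {
		return err
	}

	ms, err := client.NewManagedStream(ctx,
		managedwriter.WithDestinationTable(managedwriter.TableParentFromParts(projectID, datasetID, tableID)),
		managedwriter.WithType(managedwriter.DefaultStream),
		managedwriter.WithSchemaDescriptor(dp),
	)
	if err != nil {
		return err
	}
	defer ms.Close()

	row, err := proto.Marshal(&mypb.SampleRow{Name: proto.String("alice")})
	if err != nil {
		return err
	}
	result, err := ms.AppendRows(ctx, [][]byte{row})
	if err != nil {
		return err
	}
	// Block until the server acknowledges the append.
	if _, err := result.GetResult(ctx); err != nil {
		return err
	}
	fmt.Println("append acknowledged")
	return nil
}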
func AppendToPendingStream
func AppendToPendingStream(w io.Writer, projectID, datasetID, tableID string) error
AppendToPendingStream demonstrates using the managedwriter package to write some example data to a pending stream, which is then finalized and committed.
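A hedged sketch of the pending-stream flow with managedwriter: rows are buffered in an explicitly created stream, then finalized and committed so they become visible atomically. The client, descriptor, and serialized rows are assumed to be set up as in the default-stream sketch above:

package snippets

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery/storage/apiv1/storagepb"
	"cloud.google.com/go/bigquery/storage/managedwriter"
	"google.golang.org/protobuf/types/descriptorpb"
)

// appendPending buffers pre-serialized rows in a pending stream, then
// finalizes the stream and commits it in a single batch.
func appendPending(ctx context.Context, client *managedwriter.Client, projectID, datasetID, tableID string, dp *descriptorpb.DescriptorProto, rows [][]byte) error {
	table := managedwriter.TableParentFromParts(projectID, datasetID, tableID)
	ms, err := client.NewManagedStream(ctx,
		managedwriter.WithDestinationTable(table),
		managedwriter.WithType(managedwriter.PendingStream),
		managedwriter.WithSchemaDescriptor(dp),
	)
	if err != nil {
		return err
	}
	defer ms.Close()

	result, err := ms.AppendRows(ctx, rows)
	if err != nil {
		return err
	}
	if _, err := result.GetResult(ctx); err != nil {
		return err
	}

	// Finalize: the stream accepts no further appends.
	if _, err := ms.Finalize(ctx); err != nil {
		return err
	}
	// Commit: atomically publish the buffered rows to the table.
	resp, err := client.BatchCommitWriteStreams(ctx, &storagepb.BatchCommitWriteStreamsRequest{
		Parent:       table,
		WriteStreams: []string{ms.StreamName()},
	})
	if err != nil {
		return err
	}
	fmt.Printf("committed at %v\n", resp.GetCommitTime().AsTime())
	return nil
}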
func BuildAppendRowsRequest
func BuildAppendRowsRequest(data [][]byte) *storagepb.AppendRowsRequest
BuildAppendRowsRequest constructs an AppendRowsRequest that wraps the given pre-serialized protocol buffer rows.
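The signature suggests a small helper around the request proto's nested oneof plumbing. A plausible sketch of what a helper with this shape does; the package's actual implementation is not shown here and may also set fields such as the writer schema or stream name:

package snippets

import (
	"cloud.google.com/go/bigquery/storage/apiv1/storagepb"
)

// buildAppendRowsRequest wraps already-serialized proto rows in the nested
// oneof structure the AppendRows RPC expects.
func buildAppendRowsRequest(data [][]byte) *storagepb.AppendRowsRequest {
	return &storagepb.AppendRowsRequest{
		Rows: &storagepb.AppendRowsRequest_ProtoRows{
			ProtoRows: &storagepb.AppendRowsRequest_ProtoData{
				Rows: &storagepb.ProtoRows{
					SerializedRows: data,
				},
			},
		},
	}
}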
func GoTypeToArrowType
func GoTypeToArrowType(goType reflect.Type) arrow.DataType
GoTypeToArrowType maps a Go reflect.Type to a corresponding Arrow data type.
func NormalizeDescriptor
func NormalizeDescriptor(in protoreflect.MessageDescriptor) (*descriptorpb.DescriptorProto, error)
NormalizeDescriptor builds a self-contained DescriptorProto suitable for communicating schema information with the BigQuery Storage write API. It's primarily used for cases where users are interested in sending data using a predefined protocol buffer message.
The storage API accepts a single DescriptorProto for decoding message data. In many cases, a message is composed of multiple independent messages, from the same .proto file or from multiple sources. Rather than requiring all of these messages to be communicated independently, this function rewrites the DescriptorProto to inline them all as nested submessages. Because the backend only cares about the types, not the namespaces, when decoding, this is sufficient for the needs of the API's representation.
In addition to nesting messages, this method also handles some encapsulation of enum types to avoid possible conflicts due to ambiguities, and clears oneof indices as oneof isn't a concept that maps into BigQuery schemas.
To enable proto3 usage, this function will also rewrite proto3 descriptors into equivalent proto2 form. Such rewrites include setting the appropriate default values for proto3 fields.
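For example, given a compiled message whose descriptor references nested messages or enums from other .proto files, a single normalized descriptor can be derived directly; mypb.SampleRow is a hypothetical generated message:

package snippets

import (
	"google.golang.org/protobuf/types/descriptorpb"

	mypb "example.com/project/mypb" // hypothetical generated proto package
)

// exampleNormalize flattens a compiled message's descriptor, together with
// all of its dependencies, into one self-contained DescriptorProto that can
// be handed to the write API.
func exampleNormalize() (*descriptorpb.DescriptorProto, error) {
	m := &mypb.SampleRow{} // hypothetical message with nested and enum fields
	return NormalizeDescriptor(m.ProtoReflect().Descriptor())
}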
func StorageSchemaToProto2Descriptor
func StorageSchemaToProto2Descriptor(inSchema *storagepb.TableSchema, scope string) (protoreflect.Descriptor, error)
StorageSchemaToProto2Descriptor builds a protoreflect.Descriptor for a given table schema using proto2 syntax.
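A sketch of driving this adapter end to end, on the assumption that the returned descriptor is a message descriptor usable with dynamicpb: build a descriptor from a storage TableSchema, then populate and marshal a row dynamically with no generated .pb.go code:

package snippets

import (
	"fmt"

	"cloud.google.com/go/bigquery/storage/apiv1/storagepb"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/types/dynamicpb"
)

// exampleDynamicRow builds a proto2 descriptor from a table schema and uses
// it to serialize one row suitable for an AppendRows payload.
func exampleDynamicRow() ([]byte, error) {
	schema := &storagepb.TableSchema{
		Fields: []*storagepb.TableFieldSchema{
			{Name: "name", Type: storagepb.TableFieldSchema_STRING, Mode: storagepb.TableFieldSchema_REQUIRED},
			{Name: "age", Type: storagepb.TableFieldSchema_INT64, Mode: storagepb.TableFieldSchema_NULLABLE},
		},
	}
	descriptor, err := StorageSchemaToProto2Descriptor(schema, "root")
	if err != nil {
		return nil, err
	}
	md, ok := descriptor.(protoreflect.MessageDescriptor)
	if !ok {
		return nil, fmt.Errorf("adapted descriptor is not a message descriptor")
	}
	// Populate a dynamic message and marshal it.
	msg := dynamicpb.NewMessage(md)
	msg.Set(md.Fields().ByName("name"), protoreflect.ValueOfString("alice"))
	msg.Set(md.Fields().ByName("age"), protoreflect.ValueOfInt64(30))
	return proto.Marshal(msg)
}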
func StorageSchemaToProto3Descriptor
func StorageSchemaToProto3Descriptor(inSchema *storagepb.TableSchema, scope string) (protoreflect.Descriptor, error)
StorageSchemaToProto3Descriptor builds a protoreflect.Descriptor for a given table schema using proto3 syntax.
NOTE: Currently the write API doesn't yet support proto3 behaviors (default value, wrapper types, etc), but this is provided for completeness.
func UniqueBQName
func UniqueBQName(prefix string) (string, error)
UniqueBQName returns a more unique name for a BigQuery resource.
func UniqueBucketName
func UniqueBucketName(prefix, projectID string) (string, error)
UniqueBucketName returns a more unique name for a Cloud Storage bucket.
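The implementations of these helpers aren't shown here; a minimal sketch of the pattern their names suggest, using a timestamp plus a random suffix (the exact format is an assumption, and the real helpers may also enforce BigQuery and bucket naming rules such as length limits and allowed characters):

package snippets

import (
	"crypto/rand"
	"fmt"
	"time"
)

// uniqueName appends a timestamp and random suffix to a prefix, a common way
// to generate collision-resistant names for test resources. Hypothetical
// sketch; not the package's actual scheme.
func uniqueName(prefix string) (string, error) {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return fmt.Sprintf("%s_%d_%x", prefix, time.Now().UnixNano(), b), nil
}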
Types
type ParquetRows
type ParquetRows struct {
// contains filtered or unexported fields
}
ParquetRows represents a result set that reads from a Parquet file using Apache Arrow.
func NewParquetRowsReader
func NewParquetRowsReader(ctx context.Context, filePath string) (*ParquetRows, error)
NewParquetRowsReader initializes a new ParquetRows reader for the Parquet file at the given path.
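ParquetRows follows the database/sql/driver row-scanning conventions, so a caller drains it with Next; a usage sketch, assuming Next reports end of data with io.EOF per the driver.Rows contract:

package snippets

import (
	"context"
	"database/sql/driver"
	"errors"
	"fmt"
	"io"
)

// dumpParquet prints every row of a Parquet file using the driver.Rows-style
// iteration that ParquetRows exposes.
func dumpParquet(ctx context.Context, path string) error {
	rows, err := NewParquetRowsReader(ctx, path)
	if err != nil {
		return err
	}
	defer rows.Close()

	cols := rows.Columns()
	dest := make([]driver.Value, len(cols))
	for {
		if err := rows.Next(dest); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // all rows consumed
			}
			return err
		}
		for i, c := range cols {
			fmt.Printf("%s=%v ", c, dest[i])
		}
		fmt.Println()
	}
}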
func (*ParquetRows) Close
func (p *ParquetRows) Close() error
Close releases all resources associated with the reader.
func (*ParquetRows) ColumnTypeDatabaseTypeName
func (p *ParquetRows) ColumnTypeDatabaseTypeName(index int) string
ColumnTypeDatabaseTypeName returns the database type name of the column at the specified index.
func (*ParquetRows) ColumnTypeNullable
func (p *ParquetRows) ColumnTypeNullable(index int) (nullable, ok bool)
ColumnTypeNullable returns whether the column at the specified index is nullable.
func (*ParquetRows) ColumnTypePrecisionScale
func (p *ParquetRows) ColumnTypePrecisionScale(index int) (precision, scale int64, ok bool)
ColumnTypePrecisionScale returns the precision and scale for the column at the specified index.
func (*ParquetRows) ColumnTypeScanType
func (p *ParquetRows) ColumnTypeScanType(index int) reflect.Type
ColumnTypeScanType returns the Go type suitable for scanning values from the column at the specified index.
func (*ParquetRows) Columns
func (p *ParquetRows) Columns() []string
Columns returns the column names of the Parquet file.