Documentation
Index
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type Scanner
type Scanner struct {
// contains filtered or unexported fields
}
Scanner provides reflection-based row scanning into Go structs via the lake-orm tag trio (`lake`, `lakeorm`, `spark`). It is the Driver-agnostic counterpart to what every Driver's underlying protocol produces (Spark Connect Rows, DuckDB vectors, future Arrow Flight batches): each Driver converts its native rows into lakeorm.Row, and the Scanner takes over.
Two design decisions worth calling out:
- sqlx/reflectx for field resolution (embedded structs, dot-notation, pointer-to-struct auto-init). Rolling our own would diverge from every other sqlx-using Go service on edge cases.
- scannerTarget (defined below) for nullable custom types that implement sql.Scanner — SortableID, Location, etc. The wrapper tracks NULL separately from the underlying Scan so pointer fields can be set to nil rather than zero-valued.
sqlx/reflectx binds one tag key per mapper, so we carry three in priority order (lake > lakeorm > spark) and consult them in turn when resolving a column. Same precedence as tag.go's effectiveTag.
func NewScanner
func NewScanner() *Scanner
NewScanner creates a Scanner backed by sqlx's reflectx field mapper. One mapper per accepted tag key — lookups walk them in priority order.
func (*Scanner) ScanRow
ScanRow scans one row's columns and values into dest (a *T). Pure data in, struct out: no dependency on any Row-shaped interface, so callers hand in whatever they already have from their native row source.