conn

package
v0.1.0 Latest
Published: Apr 24, 2026 License: Apache-2.0 Imports: 13 Imported by: 0

Documentation

Overview

Package conn manages pgwire connections to CockroachDB clusters.

The Manager holds a DSN and establishes a connection lazily on the first call that requires cluster access (currently Ping). It is the single point of contact between crdb-sql and the cluster; all SQL execution flows through it, and it enforces the invariant that credentials are never included in error messages or log output.

Lifecycle: callers create a Manager with NewManager, invoke methods that may trigger a lazy connect, and defer Close. The Manager is not safe for concurrent use; the CLI creates one per command invocation.

Index

Constants

View Source
const DefaultStatementTimeout = 30 * time.Second

DefaultStatementTimeout is the per-call statement_timeout applied inside the transaction wrapper used by Explain, ExplainDDL, and Execute when the caller does not override it via WithStatementTimeout. 30s is generous enough for EXPLAIN and EXPLAIN (DDL, SHAPE) on large schemas, and for typical interactive queries via exec, while still preventing a runaway statement from hanging an agent indefinitely.

Variables

View Source
var ErrAmbiguousTable = errors.New("table name is ambiguous across schemas")

ErrAmbiguousTable is returned by DescribeTableFromCluster when an unqualified table name resolves in more than one non-system schema. Callers should render the candidate list (carried in AmbiguousTableError) and prompt the user to qualify with schema.table.

View Source
var ErrTableNotFound = errors.New("table not found")

ErrTableNotFound is returned by DescribeTableFromCluster when no non-system schema in the connection's current database holds a table with the requested name. CLI and MCP layers map it to the existing "table %q not found" diagnostic so the live and schema-file paths surface the same shape.

Functions

func MergeTLSParams

func MergeTLSParams(dsn string, p TLSParams) (string, error)

MergeTLSParams returns dsn with non-empty TLSParams fields applied as URI query parameters. Returns dsn unchanged when p is zero.

Pairing policy: SSLCert and SSLKey must be supplied together. A half-pair (only one of the two) is a libpq misconfiguration that otherwise surfaces as an opaque pgx connect error several frames later, so the merge fails up front instead.

Conflict policy: it is an error for the same parameter to be supplied via both the DSN's query string and TLSParams when the DSN's value is non-empty, so a typo cannot silently win. An empty DSN-side value (e.g. `?sslmode=`) is treated as absent.

Form policy: when any TLS field is set, the DSN must be in URI form (`postgres://` or `postgresql://`). pgx also accepts the keyword/value form ("host=... user=..."), but splicing flag-sourced URI params into it would require non-trivial reconstruction; the keyword form is rejected with a clear error so the user can switch forms or move the TLS knobs into the DSN itself.

Types

type AmbiguousTableError

type AmbiguousTableError struct {
	TableName string
	Schemas   []string
}

AmbiguousTableError wraps ErrAmbiguousTable with the candidate schema list so callers can render a helpful "exists in: a, b" hint without re-querying. errors.Is(err, ErrAmbiguousTable) holds.

func (*AmbiguousTableError) Error

func (e *AmbiguousTableError) Error() string

func (*AmbiguousTableError) Unwrap

func (e *AmbiguousTableError) Unwrap() error

type ClusterInfo

type ClusterInfo struct {
	ClusterID string `json:"cluster_id"`
	Version   string `json:"version"`
}

ClusterInfo holds the metadata returned by a successful Ping.

type ColumnMeta

type ColumnMeta struct {
	Name string `json:"name"`
	Type string `json:"type,omitempty"`
}

ColumnMeta names a single column in an Execute result. Type is the pgtype.Type.Name reported by the driver — best-effort, because the pgx type map does not have a registered name for every CRDB-specific OID. When the lookup misses, Type is left empty rather than guessed.

type DDLExplainResult

type DDLExplainResult struct {
	Statement  string         `json:"statement"`
	Operations []DDLOperation `json:"operations"`
	RawText    string         `json:"raw_text"`
}

DDLExplainResult is the structured form of an `EXPLAIN (DDL, SHAPE)` result.

Statement is the SQL statement that the schema changer would execute, extracted from the leading "Schema change plan for <stmt>;" header. Operations is the parsed flat list of operations in execution order. RawText is the original multi-line `info` string the cluster returned, retained verbatim so the CLI text mode can render output exactly as `cockroach sql` would and so agents can re-parse if they need detail the structured form drops (e.g. tree connector positions).

Like ExplainResult, DDLExplainResult is only constructed on the success path; any failure (query, scan, parse) returns the zero value plus an error.

type DDLOperation

type DDLOperation struct {
	Op      string   `json:"op"`
	Targets []string `json:"targets,omitempty"`
}

DDLOperation is one operation in a CockroachDB declarative schema changer plan, as rendered by `EXPLAIN (DDL, SHAPE)`. Op is the operation description with the leading tree connector stripped (e.g. "execute 4 system table mutations transactions", "backfill using primary index users_pkey- in relation users"). Targets holds the indented sub-lines that follow the operation, describing the index or constraint the operation acts on (e.g. "into users_pkey+ (id; name)", "from users@[5] into users_pkey+"). Most operations have no targets; the slice is nil in that case.

CRDB conventions in these strings: a trailing `-` on an index name (e.g. `users_pkey-`) marks the version being torn down; a trailing `+` marks the version being built up. `t@[N]` references a table's internal index by ID. These are passed through verbatim because they are part of CRDB's diagnostic vocabulary.

SHAPE output presents operations as a flat sequence under the statement root rather than as named lifecycle phases (statement / pre-commit / post-commit). The phase boundary is implicit in the ordering of `execute N system table mutations transactions` markers.

type ExecuteOptions

type ExecuteOptions struct {
	Mode    safety.Mode
	MaxRows int
}

ExecuteOptions configures a single Execute call. Mode determines the transaction shape (read-only wrapper, sql_safe_updates, etc.); the AST allowlist gate in internal/safety must already have admitted the statement under this mode before Execute is called.

MaxRows caps the number of rows scanned into ExecuteResult.Rows. Hitting the cap sets Truncated=true and stops the scan early; the statement still runs to completion on the cluster (so any side effects of a DML … RETURNING are not undone). A zero or negative value disables the cap entirely.
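The cap semantics can be sketched with a slice standing in for the driver's row iterator (a minimal sketch, not the package's actual scan loop):

```go
package main

import "fmt"

// scanRows is a sketch of the MaxRows cap: it copies rows from src until the
// cap is hit, reporting whether the result was truncated. A non-positive cap
// disables truncation, matching the documented ExecuteOptions behavior.
func scanRows(src [][]any, maxRows int) (rows [][]any, truncated bool) {
	for _, r := range src {
		if maxRows > 0 && len(rows) == maxRows {
			return rows, true // cap hit: stop scanning early
		}
		rows = append(rows, r)
	}
	return rows, false
}

func main() {
	src := [][]any{{1}, {2}, {3}}
	rows, truncated := scanRows(src, 2)
	fmt.Println(len(rows), truncated) // 2 true
}
```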

type ExecuteResult

type ExecuteResult struct {
	Columns       []ColumnMeta `json:"columns,omitempty"`
	Rows          [][]any      `json:"rows,omitempty"`
	RowsReturned  int          `json:"rows_returned"`
	RowsAffected  int64        `json:"rows_affected"`
	CommandTag    string       `json:"command_tag,omitempty"`
	LimitInjected *int         `json:"limit_injected,omitempty"`
	Truncated     bool         `json:"truncated,omitempty"`
}

ExecuteResult is the structured payload from Manager.Execute.

Lifecycle: built once per call by runExecute on the success path; consumed once by cmd/exec.go (or the MCP handler) to populate Envelope.Data. The fields split along three axes:

  • Result-set shape: Columns + Rows + RowsReturned describe what the cluster handed back. For DML without RETURNING, Columns is empty and Rows is nil — callers should check len(Columns) to decide between "tabular" and "command" rendering.

  • Side-effect summary: RowsAffected and CommandTag mirror the pgwire CommandTag (e.g. "INSERT 0 5"). Always populated, even for SELECTs (RowsAffected matches RowsReturned in that case) and even on the truncation path (runExecute closes the rows handle before reading the tag, so the cluster reports the authoritative count regardless of how many rows we scanned).

  • Guardrail telemetry: LimitInjected is non-nil when the caller ran safety.MaybeInjectLimit and a LIMIT was added; Truncated is true when row scanning hit MaxRows and stopped early. Both let an agent reason about whether the response is complete.
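A consumer's branching logic over these axes can be sketched with a stand-in struct carrying only the fields the decision needs (the real type has more):

```go
package main

import "fmt"

// executeResult carries just the fields this sketch needs; a stand-in for
// conn.ExecuteResult.
type executeResult struct {
	Columns       []string
	LimitInjected *int
	Truncated     bool
}

// renderMode applies the documented rule: DML without RETURNING has no
// columns, so callers branch on len(Columns), not on the row count.
func renderMode(res executeResult) string {
	if len(res.Columns) == 0 {
		return "command"
	}
	return "tabular"
}

// isComplete reads the guardrail telemetry: the response covers the full
// result set only when no LIMIT was injected and scanning was not truncated.
func isComplete(res executeResult) bool {
	return res.LimitInjected == nil && !res.Truncated
}

func main() {
	res := executeResult{Columns: []string{"id"}, Truncated: true}
	fmt.Println(renderMode(res), isComplete(res)) // tabular false
}
```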

type ExplainAnyResult

type ExplainAnyResult struct {
	Strategy Strategy          `json:"strategy"`
	Plan     *ExplainResult    `json:"plan,omitempty"`
	DDLPlan  *DDLExplainResult `json:"ddl_plan,omitempty"`
}

ExplainAnyResult is the discriminated union returned by ExplainAny. Strategy names which EXPLAIN flavor the dispatcher chose; on a successful call exactly one populated pointer is returned (mirroring SimulateStep's per-step result shape so agents that already parse simulate output know how to read this).

strategy = "explain"     → Plan is set    (ExplainResult), DDLPlan is nil.
strategy = "explain_ddl" → DDLPlan is set (DDLExplainResult), Plan is nil.

On failure ExplainAny returns the zero value plus a non-nil error, so renderers should only read Plan / DDLPlan after checking err.
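The consumer-side branch on Strategy can be sketched with local stand-in types mirroring the union (an illustration, not the package's own code):

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in types mirroring the discriminated union described above.
type explainResult struct{ RawRows []string }
type ddlExplainResult struct{ RawText string }

type explainAnyResult struct {
	Strategy string
	Plan     *explainResult
	DDLPlan  *ddlExplainResult
}

// describe branches on Strategy to pick the populated payload, the pattern an
// agent consuming ExplainAny output would follow.
func describe(r explainAnyResult) (string, error) {
	switch r.Strategy {
	case "explain":
		if r.Plan == nil {
			return "", errors.New("strategy=explain but Plan is nil")
		}
		return fmt.Sprintf("plan with %d raw rows", len(r.Plan.RawRows)), nil
	case "explain_ddl":
		if r.DDLPlan == nil {
			return "", errors.New("strategy=explain_ddl but DDLPlan is nil")
		}
		return "ddl plan", nil
	default:
		return "", fmt.Errorf("unknown strategy %q", r.Strategy)
	}
}

func main() {
	out, _ := describe(explainAnyResult{Strategy: "explain", Plan: &explainResult{RawRows: []string{"scan"}}})
	fmt.Println(out)
}
```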

type ExplainResult

type ExplainResult struct {
	Header  map[string]string `json:"header,omitempty"`
	Plan    []PlanNode        `json:"plan"`
	RawRows []string          `json:"raw_rows"`
}

ExplainResult is the structured form of a default `EXPLAIN <stmt>`.

Header captures the leading `key: value` rows that appear before the operator tree (typically distribution and vectorized). Plan is the parsed operator forest. RawRows is the original tabular output the cluster returned, retained so the CLI text mode can render the plan exactly as `cockroach sql` would and so agents can re-parse if they need to. ExplainResult is only constructed on the success path; any failure (query, scan, parse) returns the zero value plus an error.

type ListOptions

type ListOptions struct {
	IncludeSystem bool
}

ListOptions controls which schemas ListTablesFromCluster returns. The zero value excludes the system schemas listed in systemSchemas (pg_catalog, crdb_internal, information_schema, system), which is the right default for an agent enumerating a user database. Setting IncludeSystem=true returns every schema, intended as an escape hatch for users debugging catalog visibility.

type Manager

type Manager struct {
	// contains filtered or unexported fields
}

Manager manages a lazy pgwire connection to a CockroachDB cluster. It stores a DSN at construction time and defers the actual TCP handshake until the first method that needs a live connection.

The dsn field is unexported and the type intentionally has no Stringer or GoStringer implementation, so accidental logging via %v or %+v cannot leak credentials.

The stmtTimeout field is the SET LOCAL statement_timeout applied inside the transaction wrapper used by every Explain / ExplainDDL / Execute call. It is set once at construction (default DefaultStatementTimeout) via WithStatementTimeout; the Manager is not safe for concurrent use, so the field never needs synchronisation.

func NewManager

func NewManager(dsn string, opts ...Option) *Manager

NewManager creates a Manager that will connect to the cluster identified by dsn on first use. It does not validate or parse the DSN — invalid values surface as connection errors on first use.

Options are applied in order; later options override earlier ones. Callers that pass no options get DefaultStatementTimeout for the txn-wrapper guardrail used by Explain, ExplainDDL, and Execute.

func (*Manager) Close

func (m *Manager) Close(ctx context.Context) error

Close closes the underlying connection if one was established. It is safe to call on a Manager that never connected.

func (*Manager) DescribeTableFromCluster

func (m *Manager) DescribeTableFromCluster(ctx context.Context, tableName string) (catalog.Table, error)

DescribeTableFromCluster fetches and parses the CREATE statement for a single table. tableName may be unqualified ("users") or qualified ("public.users"); a three-part `db.schema.table` is rejected because the Manager intentionally only sees its DSN's database.

Resolution always goes through information_schema first so the cluster-stored case for the schema and table names is used in the subsequent SHOW CREATE TABLE — this is what makes lookups case-insensitive even when CRDB stored the identifier in mixed case (e.g. via CREATE TABLE "Users"). Zero matches returns ErrTableNotFound; for unqualified names, multiple matches return *AmbiguousTableError (which unwraps to ErrAmbiguousTable) so the caller can render the candidate schema list.

SHOW CREATE TABLE then returns the reconstructed DDL; that DDL is fed back through catalog.Load so the returned catalog.Table has the same shape as the schema-file path produces, and the existing CLI/MCP renderers consume both paths uniformly.

Recovery contract: on query/scan/parse failures *other than* ErrTableNotFound and *AmbiguousTableError, the underlying connection is closed and the Manager reverts to its pre-connect state (mirroring Explain's recovery contract). The two resolution-result errors are deliberately exempt so a "did you mean?" retry — which is a normal user flow on the CLI — does not have to pay for a re-dial.

func (*Manager) Execute

func (m *Manager) Execute(ctx context.Context, sql string, opts ExecuteOptions) (ExecuteResult, error)

Execute runs sql against the cluster and returns its rows + command tag wrapped in an ExecuteResult. The mode in opts selects the txn shape; the safety package is the only acceptable source of truth for "is this statement permitted under this mode" — Execute does not re-classify, it only enforces the cluster-side guardrails (read-only txn for read_only, sql_safe_updates for safe_write, statement timeout for all modes).

On any begin/exec/query/scan failure after a successful connect, the underlying connection is closed and the Manager reverts to its pre-connect state, mirroring Explain's recovery contract.

func (*Manager) Explain

func (m *Manager) Explain(ctx context.Context, sql string) (ExplainResult, error)

Explain runs `EXPLAIN <sql>` against the cluster and returns the parsed plan tree alongside the raw tabular output.

EXPLAIN (without ANALYZE) does not execute the wrapped statement, but as defense-in-depth the call still runs inside a BEGIN READ ONLY transaction with SET LOCAL statement_timeout = m.stmtTimeout. The txn guarantees that any future shape change (e.g. an EXPLAIN flavor that does write) cannot escape the read-only sandbox at this layer. The companion AST allowlist in internal/safety is the first line of defense and runs before this method is reached. Cluster errors (syntax in the wrapped statement, perm denied, etc.) are returned wrapped; callers surface them as generic envelope errors today. SQLSTATE-aware enrichment for pgwire errors is a future enhancement, not a current contract.

On any begin/exec/query/scan/parse failure after a successful connect, the underlying connection is closed and the Manager reverts to its pre-connect state, mirroring Ping's recovery contract.

func (*Manager) ExplainAnalyze

func (m *Manager) ExplainAnalyze(ctx context.Context, sql string) (ExplainResult, error)

ExplainAnalyze runs `EXPLAIN ANALYZE <sql>` against the cluster and returns the parsed plan tree alongside the raw tabular output. EXPLAIN ANALYZE physically executes the wrapped statement, so the returned Plan carries measured runtime stats (rows read, network bytes, execution time) rather than the optimizer's estimates.

Caller contract: sql must be a SELECT (or other read-only DML shape — VALUES, WITH, SHOW). The dispatcher in Simulate enforces this by routing CanWriteData statements to plain Explain instead. As defense in depth, the call still runs inside BEGIN READ ONLY with a SET LOCAL statement_timeout, so any write that reaches this method is rejected by the cluster with SQLSTATE 25006 and the timeout caps slow plans.

On any begin/exec/query/scan/parse failure after a successful connect, the underlying connection is closed and the Manager reverts to its pre-connect state, mirroring Explain's recovery contract.

func (*Manager) ExplainAny

func (m *Manager) ExplainAny(ctx context.Context, sql string) (ExplainAnyResult, error)

ExplainAny dispatches a single SQL statement to the right EXPLAIN flavor and returns the result keyed by Strategy. SELECT and DML route to plain `EXPLAIN <stmt>` (Manager.Explain); DDL routes to `EXPLAIN (DDL, SHAPE) <stmt>` (Manager.ExplainDDL). Neither path executes the wrapped statement.

Caller contract:

  • sql must contain exactly one statement. Multi-statement input is rejected with a clear error so the caller migrates to Simulate instead of silently dropping all but the first.
  • safety.Check(safety.OpExplain, ...) must have run upstream. ExplainAny does not re-validate statement classes; a TCL/DCL input that bypasses the safety gate surfaces here as a "no route" error, which matches simulate's defense-in-depth posture rather than misrouting to one of the two backend methods.

On any error ExplainAny returns the zero result; the underlying Manager.Explain / Manager.ExplainDDL handle their own connection-recovery sequence so a per-call failure does not leave the Manager in a half-open state.

func (*Manager) ExplainDDL

func (m *Manager) ExplainDDL(ctx context.Context, sql string) (DDLExplainResult, error)

ExplainDDL runs `EXPLAIN (DDL, SHAPE) <sql>` against the cluster and returns the parsed schema-change plan alongside the raw text the cluster returned.

EXPLAIN (DDL, SHAPE) does not execute the wrapped DDL — it only asks the declarative schema changer to compile a plan. The call runs inside a transaction so SET LOCAL statement_timeout = m.stmtTimeout applies, but the txn is NOT opened in pgx.ReadOnly mode: the cluster rejects `EXPLAIN (DDL, SHAPE) <ddl>` inside a read-only txn with SQLSTATE 25006 ("cannot execute <ddl-tag> in a read-only transaction") because the txn-mode check fires on the inner stmt type before the SHAPE-only flag is consulted. The AST-layer allowlist in internal/safety is the first line of defense for rejecting unwanted DDL; statement_timeout is the second.

On any begin/exec/query/scan/parse failure after a successful connect, the underlying connection is closed and the Manager reverts to its pre-connect state, mirroring Explain's recovery contract.

func (*Manager) GetTableStats

func (m *Manager) GetTableStats(ctx context.Context, schema, table string) (TableStat, error)

GetTableStats returns the most recent row-count statistic for the table at (schema, table). The source is SHOW STATISTICS, which CRDB auto-collects: typically there is one row per (schema, table, column-set) and the row with the latest `created` timestamp wins. We pick the highest row_count across all column sets — every column set's stats sample the same physical table, so they should agree, and picking the max is a forgiving choice when a single set is briefly stale.

An empty schema is allowed and means "resolve against the connection's search_path." That matches how the same SQL would behave if the user ran the DDL in this connection, which is the resolution the simulation should report against. Callers that need a deterministic schema should pass it explicitly.

Returns a zero TableStat (no error) when stats have not been collected yet — that is the common case for freshly created tables and is not an error condition the caller needs to distinguish from a missing table. A non-nil error is reserved for actual cluster failures.
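The "max across column sets" policy can be sketched over stand-in statistics rows (a simplified model of SHOW STATISTICS output, not the package's query):

```go
package main

import "fmt"

// statRow is one SHOW STATISTICS row for this sketch: a column set plus its
// sampled row count.
type statRow struct {
	Columns  string
	RowCount int64
}

// latestRowCount applies the documented policy: across all column sets, take
// the highest row_count. An empty input yields the zero value, which is the
// "stats not collected yet" signal rather than an error.
func latestRowCount(rows []statRow) int64 {
	var max int64
	for _, r := range rows {
		if r.RowCount > max {
			max = r.RowCount
		}
	}
	return max
}

func main() {
	rows := []statRow{{"{id}", 1200}, {"{id,name}", 1187}}
	fmt.Println(latestRowCount(rows)) // 1200
}
```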

func (*Manager) ListTablesFromCluster

func (m *Manager) ListTablesFromCluster(ctx context.Context, opts ListOptions) ([]TableRef, error)

ListTablesFromCluster returns user tables in the connection's current database, ordered by (schema, name). The slice is always non-nil so the JSON encoder emits `[]` rather than `null`. Whether system schemas are included is controlled by opts.IncludeSystem.

On any query/scan failure after a successful connect, the underlying connection is closed and the Manager reverts to its pre-connect state, mirroring the recovery contract documented on Ping/Explain.

func (*Manager) Ping

func (m *Manager) Ping(ctx context.Context) (ClusterInfo, error)

Ping connects to the cluster (if not already connected) and returns the cluster ID and CockroachDB version. It is the primary entry point for the lazy-connect lifecycle: callers that only need to verify connectivity call Ping and inspect the returned ClusterInfo.

If the query fails after a successful connect, the connection is closed and the Manager reverts to its pre-connect state. Callers do not need to distinguish partial failures from connection failures — either way, the next Ping will attempt a fresh connect.

func (*Manager) Simulate

func (m *Manager) Simulate(ctx context.Context, sql string) (SimulateResult, error)

Simulate parses sql, dispatches each statement to the appropriate non-executing EXPLAIN flavor, and returns the per-statement outcomes. The dispatcher is what makes simulate side-effect free at the cluster level: SELECT runs through EXPLAIN ANALYZE (read only by construction), DML writes through plain EXPLAIN (no execution), and DDL through EXPLAIN (DDL, SHAPE) (no execution plus an optional row-count annotation from SHOW STATISTICS).

Per-statement errors land on the step rather than the method: plan failures populate step.Error, stats-only failures populate step.StatsError. Subsequent steps still run regardless. Errors that abort the whole call (parse failure, initial connect) are returned as the method-level error.

Caller contract: safety.Check(safety.OpSimulate, ...) must have been called before Simulate. Simulate does not re-validate statement classes. Nested EXPLAIN must be rejected upstream because tree.CanWriteData does not descend into Explain wrappers — the dispatcher would misclassify a wrapped write as a SELECT and route it to EXPLAIN ANALYZE. TCL and DCL would surface as a per-step "no route" error in the default branch, which is actionable but not the intended path. Callers must walk Steps for per-step Error/StatsError values; a nil method-level error does not mean every statement succeeded.
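The step-walking obligation can be sketched with a stand-in step type carrying only the error fields (the real SimulateStep has more fields):

```go
package main

import "fmt"

// simulateStep carries only the error fields a caller must inspect; a
// stand-in for conn.SimulateStep.
type simulateStep struct {
	Error      string
	StatsError string
}

// stepFailures walks every step, mirroring the documented contract that a nil
// method-level error does not mean every statement succeeded.
func stepFailures(steps []simulateStep) (planFails, statsFails []int) {
	for i, s := range steps {
		if s.Error != "" {
			planFails = append(planFails, i)
		}
		if s.StatsError != "" {
			statsFails = append(statsFails, i)
		}
	}
	return planFails, statsFails
}

func main() {
	steps := []simulateStep{{}, {Error: "explain failed"}, {StatsError: "no stats"}}
	plan, stats := stepFailures(steps)
	fmt.Println(plan, stats) // [1] [2]
}
```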

type Option

type Option func(*Manager)

Option configures a Manager at construction time. Implemented via the functional-options pattern so future knobs (e.g. application_name, retry budget) extend the API without breaking call sites.

func WithStatementTimeout

func WithStatementTimeout(d time.Duration) Option

WithStatementTimeout overrides the per-call statement_timeout applied inside the transaction wrapper used by Explain, ExplainDDL, and Execute. A non-positive value falls back to DefaultStatementTimeout so callers cannot accidentally disable the guardrail by passing a zero duration.

type PlanNode

type PlanNode struct {
	Op       string            `json:"op"`
	Attrs    map[string]string `json:"attrs,omitempty"`
	Children []PlanNode        `json:"children,omitempty"`
}

PlanNode is one operator in a CockroachDB EXPLAIN tree. Op is the operator name with the leading bullet stripped (e.g. "scan", "filter", "render"). Attrs holds the per-operator key/value attributes the planner emits underneath the operator (e.g. "table": "t@primary", "spans": "FULL SCAN"). Children are the direct child operators in execution order, following EXPLAIN's tree-drawing glyphs.

Both Attrs and Children use omitempty so leaf nodes and attribute-free operators serialize to a compact JSON shape.

type SimulateResult

type SimulateResult struct {
	Steps []SimulateStep `json:"steps"`
}

SimulateResult is the JSON-serialisable payload returned by Manager.Simulate. One Steps entry per parsed statement, in statement order. Errors that scope to a single statement live on the step (Step.Error / Step.StatsError); any failure that aborts the whole simulation (parse error, connect error) is returned as the method-level error instead.

func (SimulateResult) StepFailureSummary

func (r SimulateResult) StepFailureSummary() (msg string, planFails, statsFails []int, ok bool)

StepFailureSummary scans every Step and returns a one-line summary of any per-step failures plus the indices that carried each failure class. Returns ok=false when every step succeeded — both surfaces (CLI and MCP) use that to decide whether to promote the failure into an envelope-level entry. Keeping the summary close to the data types means new step-level error classes can be added in one place rather than duplicated across surfaces.

type SimulateStep

type SimulateStep struct {
	StatementIndex int      `json:"statement_index"`
	Tag            string   `json:"tag"`
	Strategy       Strategy `json:"strategy"`
	SQL            string   `json:"sql"`

	Plan       *ExplainResult    `json:"plan,omitempty"`
	DDLPlan    *DDLExplainResult `json:"ddl_plan,omitempty"`
	TableStats []TableStat       `json:"table_stats,omitempty"`

	Error      string `json:"error,omitempty"`
	StatsError string `json:"stats_error,omitempty"`
}

SimulateStep records the simulated outcome for a single statement. Exactly one of Plan / DDLPlan is non-nil on success, selected by Strategy:

explain_analyze | explain → Plan is set, DDLPlan is nil.
explain_ddl              → DDLPlan is set, Plan is nil.

On a plan failure (cluster rejected EXPLAIN, statement timeout, connection drop after the dispatch began), Plan and DDLPlan are both nil and Error carries the message. On a stats-only failure (DDL plan succeeded but SHOW STATISTICS errored for one of the affected tables), DDLPlan stays populated and StatsError carries the lookup failure — keeping Error reserved for plan-blocking problems lets renderers surface the plan and the stats failure independently. Subsequent steps still run regardless of which field is set; failures do not abort the simulation.

type Strategy

type Strategy string

Strategy names the EXPLAIN flavor a SimulateStep used to render its no-execute view of a single statement. The values are wire-stable tokens — agents branch on them to decide which payload field to read (Plan vs DDLPlan) and how to interpret the numbers (estimates vs measured stats).

const (
	// StrategyExplainAnalyze runs `EXPLAIN ANALYZE <stmt>`. Used for
	// SELECT, where actual execution is harmless and the runtime
	// stats (rows read, network bytes, time) are far more useful
	// than the optimizer's estimates.
	StrategyExplainAnalyze Strategy = "explain_analyze"

	// StrategyExplain runs plain `EXPLAIN <stmt>`. Used for DML
	// writes (INSERT/UPDATE/DELETE/UPSERT) where ANALYZE would
	// persist data. Returns the optimizer's estimated plan only —
	// no execution, no side effects.
	StrategyExplain Strategy = "explain"

	// StrategyExplainDDL runs `EXPLAIN (DDL, SHAPE) <stmt>`. Used
	// for DDL. Returns the declarative schema changer's compiled
	// plan; the cluster does not execute the schema change.
	StrategyExplainDDL Strategy = "explain_ddl"
)

Strategy values.

type TLSParams

type TLSParams struct {
	SSLMode     string
	SSLRootCert string
	SSLCert     string
	SSLKey      string
}

TLSParams holds the four libpq TLS knobs that callers may supply through CLI flags or MCP tool inputs separately from the DSN. All fields are optional; an empty TLSParams is a no-op when merged.

The field names shadow libpq URI parameters of the same spelling (sslmode / sslrootcert / sslcert / sslkey); pgx honors them via the standard `?sslmode=...&sslrootcert=...` query syntax, so MergeTLSParams' only job is to splice non-empty fields into the DSN's query string and detect conflicts with values already present.

func (TLSParams) IsZero

func (p TLSParams) IsZero() bool

IsZero reports whether p has no fields set. Provided so callers can skip MergeTLSParams' parse step when no TLS knobs were supplied without duplicating the field-by-field check.
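Because every field is a comparable string, the zero check reduces to a single struct comparison; a sketch with a local stand-in for TLSParams:

```go
package main

import "fmt"

// tlsParams mirrors conn.TLSParams; with only comparable fields, the zero
// check is one struct comparison rather than a field-by-field test.
type tlsParams struct {
	SSLMode, SSLRootCert, SSLCert, SSLKey string
}

func (p tlsParams) isZero() bool { return p == (tlsParams{}) }

func main() {
	fmt.Println((tlsParams{}).isZero())                   // true
	fmt.Println((tlsParams{SSLMode: "require"}).isZero()) // false
}
```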

type TableRef

type TableRef struct {
	Schema string `json:"schema"`
	Name   string `json:"name"`
}

TableRef identifies a table by its schema and name. Returned by ListTablesFromCluster so callers can render qualified names without re-querying when the listing spans multiple schemas in the same database.

type TableStat

type TableStat struct {
	Schema      string `json:"schema"`
	Table       string `json:"table"`
	RowCount    int64  `json:"row_count"`
	Source      string `json:"source"`
	CollectedAt string `json:"collected_at,omitempty"`
}

TableStat is a row-count estimate for a table touched by a simulated DDL. Sourced from `SHOW STATISTICS` (auto-collected by CRDB), so the numbers reflect the most recent stats refresh rather than a live count. CollectedAt lets callers reason about staleness; an empty CollectedAt means stats have never been collected for this table — RowCount in that case is the zero value and is not meaningful.

The slice on SimulateStep is omitted from the JSON payload (via `omitempty`) when the DDL has no extractable target table or when stats lookup failed for every target. Callers should treat both "absent slice" and "non-empty slice with empty CollectedAt" as "no information available," not as "row count = 0".
