Documentation ¶
Overview ¶
Package sqlr is designed to reduce the effort required to implement common operations performed with SQL databases. It is intended for programmers who are comfortable with writing SQL, but would like assistance with the sometimes tedious process of preparing SQL queries for tables that have a large number of columns, or have a variable number of input parameters.
This GoDoc summary provides an overview of how to use this package. For more detailed documentation, see https://jjeffery.github.io/sqlr.
Prepare SQL queries based on row structures ¶
Preparing SQL queries with many placeholder arguments is tedious and error-prone. The following insert query has a dozen placeholders, and it is difficult to match up the columns with the placeholders. It is not uncommon to have tables with many more columns than this example, and the level of difficulty increases with the number of columns in the table.
insert into users(id,given_name,family_name,dob,ssn,street,locality,postcode, country,phone,mobile,fax) values(?,?,?,?,?,?,?,?,?,?,?,?)
This package uses reflection to simplify the construction of SQL queries. Supplementary information about each database column is stored in the structure tag of the associated field.
type User struct {
	ID         int `sql:"primary key"`
	GivenName  string
	FamilyName string
	DOB        time.Time
	SSN        string
	Street     string
	Locality   string
	Postcode   string
	Country    string
	Phone      string
	Mobile     string
	Facsimile  string `sql:"fax"` // "fax" overrides the column name
}
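The mechanism behind this can be sketched with the standard reflect package: walk the struct's fields and read each field's "sql" tag. This is a minimal illustration of the technique, not the package's actual implementation:

```go
package main

import (
	"fmt"
	"reflect"
)

// User carries column information in the "sql" struct tag,
// as in the example above (trimmed to three fields).
type User struct {
	ID        int `sql:"primary key"`
	GivenName string
	Facsimile string `sql:"fax"`
}

// sqlTags walks the fields of a struct and returns each field's
// "sql" tag, which is how supplementary column information can be
// discovered at runtime.
func sqlTags(v interface{}) map[string]string {
	tags := make(map[string]string)
	t := reflect.TypeOf(v)
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		tags[f.Name] = f.Tag.Get("sql")
	}
	return tags
}

func main() {
	tags := sqlTags(User{})
	fmt.Println(tags["Facsimile"]) // fax
	fmt.Println(tags["GivenName"]) // empty: column name comes from the naming convention
}
```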
The calling program creates a schema, which describes rules for generating SQL statements. These rules include the SQL dialect (eg MySQL, Postgres, SQLite) and the naming convention used to convert Go struct field names into column names (eg "GivenName" => "given_name"). The schema is usually created during program initialization. Once created, a schema is immutable, and its methods can be called concurrently from multiple goroutines.
schema := NewSchema(
	WithDialect(MySQL),
	WithNamingConvention(SnakeCase),
)
Once the schema has been defined and a database handle is available (eg *sql.DB, *sql.Tx), it is possible to create simple row insert/update/delete statements with minimal effort.
var row User
// ... populate row with data here and then ...
// generates the correct SQL to insert a row into the users table
rowsAffected, err := schema.Exec(db, row, "insert into users({}) values({})")
// ... and then later on ...
// generates the correct SQL to update the matching row in the users table
rowsAffected, err = schema.Exec(db, row, "update users set {} where {}")
The Exec method parses the SQL query and replaces occurrences of "{}" with the column names or placeholders that make sense for the SQL clause in which they occur. In the example above, the insert and update statements would look like:
insert into users(`id`,`given_name`,`family_name`,`dob`,`ssn`,`street`,
`locality`,`postcode`,`country`,`phone`,`mobile`,`fax`)
values(?,?,?,?,?,?,?,?,?,?,?,?)

update users set `given_name`=?,`family_name`=?,`dob`=?,`ssn`=?,`street`=?,
`locality`=?,`postcode`=?,`country`=?,`phone`=?,`mobile`=?,`fax`=?
where `id`=?
If the schema is created with a different dialect, the generated SQL will differ. For example, if the Postgres dialect had been used, the insert and update queries would look more like:
insert into users("id","given_name","family_name","dob","ssn","street","locality",
"postcode","country","phone","mobile","fax") values($1,$2,$3,$4,$5,$6,$7,$8,$9,
$10,$11,$12)
update users set "given_name"=$1,"family_name"=$2,"dob"=$3,"ssn"=$4,"street"=$5,
"locality"=$6,"postcode"=$7,"country"=$8,"phone"=$9,"mobile"=$10,"fax"=$11
where "id"=$12
Select queries are handled in a similar fashion:
var rows []*User
// will populate rows slice with the results of the query
rowCount, err := schema.Select(db, &rows, "select {} from users where postcode = ?", postcode)
var row User
// will populate row with the first row returned by the query
rowCount, err = schema.Select(db, &row, "select {} from users where {}", userID)
// more complex query involving joins and aliases
rowCount, err = schema.Select(db, &rows, `
	select {alias u}
	from users u
	inner join user_search_terms ust on ust.user_id = u.id
	where ust.search_term like ?
	order by {alias u}`, searchTermText+"%")
The SQL queries prepared in the above example would look like the following:
select `id`,`given_name`,`family_name`,`dob`,`ssn`,`street`,`locality`,
`postcode`,`country`,`phone`,`mobile`,`fax`
from users where postcode=?

select `id`,`given_name`,`family_name`,`dob`,`ssn`,`street`,`locality`,
`postcode`,`country`,`phone`,`mobile`,`fax`
from users where `id`=?

select u.`id`,u.`given_name`,u.`family_name`,u.`dob`,u.`ssn`,u.`street`,
u.`locality`,u.`postcode`,u.`country`,u.`phone`,u.`mobile`,u.`fax`
from users u
inner join user_search_terms ust on ust.user_id = u.id
where ust.search_term like ?
order by u.`id`
The examples above use the MySQL dialect. If the schema had been set up for, say, the Postgres dialect, a generated query would look more like:
select "id","given_name","family_name","dob","ssn","street","locality", "postcode","country","phone","mobile","fax" from users where postcode=$1
It is important to note that this feature is not about writing the SQL for the programmer. Rather, it is about "filling in the blanks": allowing the programmer to specify as much of the SQL query as they want without having to write the tiresome bits.
Autoincrement Column Values ¶
When inserting rows, if a column is defined as an autoincrement column, then the generated value will be retrieved from the database server, and the corresponding field in the row structure will be updated.
type Row struct {
	ID   int `sql:"primary key autoincrement"`
	Name string
}

row := &Row{Name: "some name"}
_, err := schema.Exec(db, row, "insert into table_name({}) values({})")
if err != nil {
	log.Fatal(err)
}

// row.ID will contain the auto-generated value
fmt.Println(row.ID)
This feature only works with database drivers that support autoincrement columns. The Postgres driver ("github.com/lib/pq"), in particular, does not support this feature.
Null Columns ¶
Most SQL database tables have columns that are nullable, and it can be tiresome to always map to pointer types or special nullable types such as sql.NullString. In many cases it is acceptable to map a database NULL value to the empty value for the corresponding Go struct field. It is not always acceptable, but experience has shown that it is a common enough situation.
Where it is acceptable to map a NULL value to an empty value and vice-versa, the Go struct field can be marked with the "null" keyword in the field's struct tag.
type Employee struct {
	ID        int    `sql:"primary key"`
	Name      string
	ManagerID int    `sql:"null"`
	Phone     string `sql:"null"`
}
In the above example the `manager_id` column can be null, but if all valid IDs are non-zero, it is unambiguous to map a database NULL to the zero value. Similarly, if the `phone` column is null it will be mapped to an empty string. An empty string in the Go struct field will be mapped to NULL in the database.
Care should be taken, because there are cases where an empty value and a database NULL do not represent the same thing. There are many cases, however, where this feature can be applied, and the result is simpler code that is easier to read.
JSON Columns ¶
It is not uncommon to serialize complex objects as JSON text for storage in an SQL database. Native support for JSON is available in some database servers: in particular Postgres has excellent support for JSON.
It is straightforward to use this package to serialize a structure field to JSON:
type SomethingComplex struct {
	Name       string
	Values     []int
	MoreValues map[string]float64
	// ... and more fields here ...
}

type Row struct {
	ID    int `sql:"primary key"`
	Name  string
	Cmplx *SomethingComplex `sql:"json"`
}
In the example above the `Cmplx` field will be marshaled as JSON text when writing to the database, and unmarshaled into the struct when reading from the database.
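What the "json" tag implies can be sketched with the standard encoding/json package: serialize the field on write, deserialize on read. The helpers below are hypothetical; the real work happens inside the package:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SomethingComplex as defined above (trimmed).
type SomethingComplex struct {
	Name   string
	Values []int
}

// marshalColumn converts the field value to JSON text, as happens
// when writing a "json"-tagged field to the database.
func marshalColumn(v *SomethingComplex) (string, error) {
	b, err := json.Marshal(v)
	return string(b), err
}

// unmarshalColumn converts JSON text back into the struct, as
// happens when reading the column from the database.
func unmarshalColumn(text string) (*SomethingComplex, error) {
	v := &SomethingComplex{}
	err := json.Unmarshal([]byte(text), v)
	return v, err
}

func main() {
	text, _ := marshalColumn(&SomethingComplex{Name: "a", Values: []int{1, 2}})
	fmt.Println(text) // {"Name":"a","Values":[1,2]}
	out, _ := unmarshalColumn(text)
	fmt.Println(out.Name, out.Values) // a [1 2]
}
```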
WHERE IN Clauses with Multiple Values ¶
Most SQL queries accept a fixed number of parameters, but a query containing a `WHERE IN` clause needs its number of placeholders to match the number of supplied arguments, which normally requires additional string manipulation.
This package simplifies queries with a variable number of arguments. When processing an SQL query, it detects if any of the arguments are slices:
// GetWidgets returns all the widgets associated with the supplied IDs.
func GetWidgets(db *sql.DB, ids ...int) ([]*Widget, error) {
	var rows []*Widget
	_, err := schema.Select(db, &rows, `select {} from widgets where id in (?)`, ids)
	if err != nil {
		return nil, err
	}
	return rows, nil
}
In the above example, the number of placeholders ("?") in the query will be increased to match the number of values in the `ids` slice. The expansion logic can handle any mix of slice and scalar arguments.
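The expansion can be sketched as follows. This simplified version assumes the query contains exactly one "(?)" group; the package's own logic handles any mix of scalar and slice arguments:

```go
package main

import (
	"fmt"
	"strings"
)

// expandIn repeats the placeholder once per slice element and
// flattens the slice into the argument list. (Hypothetical helper
// illustrating the technique, not the package API.)
func expandIn(query string, ids []int) (string, []interface{}) {
	placeholders := strings.TrimSuffix(strings.Repeat("?,", len(ids)), ",")
	args := make([]interface{}, len(ids))
	for i, id := range ids {
		args[i] = id
	}
	return strings.Replace(query, "(?)", "("+placeholders+")", 1), args
}

func main() {
	q, args := expandIn("select {} from widgets where id in (?)", []int{101, 102, 103})
	fmt.Println(q)         // select {} from widgets where id in (?,?,?)
	fmt.Println(len(args)) // 3
}
```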
Code Generation ¶
This package contains a code generation tool in the "./cmd/sqlr-gen" directory. It can be quite useful to reduce the amount of code required. Refer to the detailed documentation at https://jjeffery.github.io/sqlr for more information on this feature.
Performance and Caching ¶
This package makes use of reflection to build the SQL that is sent to the database server, and this imposes a performance penalty. To reduce this overhead, each schema instance caches the queries it generates. The goal is for queries generated by this package to perform as close as possible to equivalent hand-constructed SQL queries that call package "database/sql" directly.
Source Code ¶
More information about this package can be found at https://github.com/jjeffery/sqlr.
Example ¶
package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	_ "github.com/mattn/go-sqlite3"
)

// The UserRow struct represents a single row in the users table.
// Note that this package becomes more useful when tables
// have many more columns than shown in this example.
type UserRow struct {
	ID         int64 `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	exitIfError(err)
	setupSchema(db)

	tx, err := db.Begin()
	exitIfError(err)
	defer tx.Rollback()

	schema := NewSchema(ForDB(db))

	// insert three rows, IDs are automatically generated (1, 2, 3)
	for _, givenName := range []string{"John", "Jane", "Joan"} {
		u := &UserRow{
			GivenName:  givenName,
			FamilyName: "Citizen",
		}
		_, err = schema.Exec(tx, u, `insert into users`)
		exitIfError(err)
	}

	// get user with ID of 3 and then delete it
	{
		var u UserRow
		_, err = schema.Select(tx, &u, `select users`, 3)
		exitIfError(err)
		_, err = schema.Exec(tx, u, `delete from users where {}`)
		exitIfError(err)
	}

	// update family name for user with ID of 2
	{
		var u UserRow
		_, err = schema.Select(tx, &u, `select users`, 2)
		exitIfError(err)
		u.FamilyName = "Doe"
		_, err = schema.Exec(tx, u, `update users`)
		exitIfError(err)
	}

	// select rows from table and print
	{
		var users []*UserRow
		_, err = schema.Select(tx, &users, `
			select {}
			from users
			order by id
			limit ? offset ?`, 100, 0)
		exitIfError(err)
		for _, u := range users {
			fmt.Printf("User %d: %s, %s\n", u.ID, u.FamilyName, u.GivenName)
		}
	}
}

func exitIfError(err error) {
	if err != nil {
		log.Output(2, err.Error())
		os.Exit(1)
	}
}

func init() {
	log.SetFlags(log.Lshortfile)
}

func setupSchema(db *sql.DB) {
	_, err := db.Exec(`
		create table users(
			id integer primary key autoincrement,
			given_name text,
			family_name text
		)
	`)
	exitIfError(err)
}

Output:

User 1: Citizen, John
User 2: Doe, Jane
Index ¶
- type DB (deprecated)
- type Dialect
- type NamingConvention
- type Querier
- type Schema
- func NewSchema(opts ...SchemaOption) *Schema
- func (s *Schema) Clone(opts ...SchemaOption) *Schema
- func (s *Schema) Exec(db Querier, row interface{}, sql string, args ...interface{}) (int, error)
- func (s *Schema) Key() string
- func (s *Schema) Prepare(row interface{}, query string) (*Stmt, error)
- func (s *Schema) Select(db Querier, rows interface{}, sql string, args ...interface{}) (int, error)
- type SchemaOption
- func ForDB(db *sql.DB) SchemaOption
- func WithDialect(dialect Dialect) SchemaOption
- func WithField(fieldName string, columnName string) SchemaOption
- func WithIdentifier(identifier string, meaning string) SchemaOption
- func WithKey(key string) SchemaOption
- func WithNamingConvention(convention NamingConvention) SchemaOption
- type Stmt
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Dialect ¶
type Dialect interface {
	// Quote a table name or column name so that it does
	// not clash with any reserved words. The SQL-99 standard
	// specifies double quotes (eg "table_name"), but many
	// dialects, including MySQL, use the backtick (eg `table_name`).
	// SQL Server uses square brackets (eg [table_name]).
	Quote(column string) string

	// Return the placeholder for binding a variable value.
	// Most SQL dialects support a single question mark (?), but
	// PostgreSQL uses numbered placeholders (eg $1).
	Placeholder(n int) string
}
Dialect is an interface used to handle differences in SQL dialects.
var (
	Postgres Dialect // Quote: "column_name", Placeholders: $1, $2, $3
	MySQL    Dialect // Quote: `column_name`, Placeholders: ?, ?, ?
	MSSQL    Dialect // Quote: [column_name], Placeholders: ?, ?, ?
	SQLite   Dialect // Quote: `column_name`, Placeholders: ?, ?, ?
	ANSISQL  Dialect // Quote: "column_name", Placeholders: ?, ?, ?
)
Pre-defined dialects
var DefaultDialect Dialect // Depends on the DB driver loaded.
DefaultDialect is the dialect used by a schema if none is specified. It is chosen from the first driver in the list of drivers returned by the sql.Drivers() function.
Many programs only load one database driver, and in this case the default dialect should be the correct choice.
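What an implementation of the two Dialect methods looks like can be sketched for the Postgres style. The package already ships a Postgres dialect; the type below is purely illustrative:

```go
package main

import "fmt"

// postgresStyle is a minimal, hypothetical Dialect implementation:
// double-quoted identifiers and numbered placeholders.
type postgresStyle struct{}

// Quote wraps an identifier in SQL-99 double quotes.
func (postgresStyle) Quote(column string) string {
	return `"` + column + `"`
}

// Placeholder returns the numbered placeholder for the nth value.
func (postgresStyle) Placeholder(n int) string {
	return fmt.Sprintf("$%d", n)
}

func main() {
	var d postgresStyle
	fmt.Println(d.Quote("given_name")) // "given_name"
	fmt.Println(d.Placeholder(2))      // $2
}
```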
type NamingConvention ¶
type NamingConvention interface {
	// Convert converts a Go struct field name according to the naming convention.
	Convert(fieldName string) string

	// Join joins two or more converted names to form a column name.
	// Used for naming columns based on fields within embedded
	// structures.
	Join(names []string) string
}
The NamingConvention interface provides methods that are used to infer a database column name from its associated Go struct field.
var (
	SnakeCase NamingConvention // eg "FieldName" -> "field_name"
	SameCase  NamingConvention // eg "FieldName" -> "FieldName"
	LowerCase NamingConvention // eg "FieldName" -> "fieldname"
)
Pre-defined naming conventions. If a naming convention is not specified for a schema, it defaults to snake_case.
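A NamingConvention implementation along the lines of SnakeCase can be sketched as below. It is simplified: every upper-case letter is treated as a word boundary, so acronyms like "DOB" are not handled as cleverly as a production implementation would:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// simpleSnake is a hypothetical NamingConvention implementation.
type simpleSnake struct{}

// Convert lower-cases the field name, inserting an underscore
// before each upper-case letter after the first.
func (simpleSnake) Convert(fieldName string) string {
	var b strings.Builder
	for i, r := range fieldName {
		if unicode.IsUpper(r) {
			if i > 0 {
				b.WriteByte('_')
			}
			r = unicode.ToLower(r)
		}
		b.WriteRune(r)
	}
	return b.String()
}

// Join combines converted names for fields in embedded structs,
// eg HomeAddress.Locality -> "home_address_locality".
func (simpleSnake) Join(names []string) string {
	return strings.Join(names, "_")
}

func main() {
	var c simpleSnake
	fmt.Println(c.Convert("GivenName"))                       // given_name
	fmt.Println(c.Join([]string{"home_address", "locality"})) // home_address_locality
}
```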
type Querier ¶
type Querier interface {
	// Exec executes a query without returning any rows.
	// The args are for any placeholder parameters in the query.
	Exec(query string, args ...interface{}) (sql.Result, error)

	// Query executes a query that returns rows, typically a SELECT.
	// The args are for any placeholder parameters in the query.
	Query(query string, args ...interface{}) (*sql.Rows, error)
}
The Querier interface defines the SQL database access methods used by this package.
The *DB and *Tx types in the standard library package "database/sql" both implement this interface.
type Schema ¶
type Schema struct {
	// contains filtered or unexported fields
}
Schema contains information about the database that is used when generating SQL statements.
Information stored in the schema includes the SQL dialect, and the naming convention used to convert Go struct field names into database column names.
Although the zero value schema can be used and represents a database schema with default values, it is more common to use the NewSchema function to create a schema with one or more options.
A schema maintains an internal cache, which is used to store details of frequently called SQL commands for improved performance.
A schema can be inexpensively cloned to provide a deep copy. This can occasionally be useful to define a common schema for a database, and then create copies to handle naming rules that are specific to a particular table, or a particular group of tables.
func NewSchema ¶
func NewSchema(opts ...SchemaOption) *Schema
NewSchema creates a schema with options.
func (*Schema) Clone ¶
func (s *Schema) Clone(opts ...SchemaOption) *Schema
Clone creates a copy of the schema, with options applied.
func (*Schema) Exec ¶
func (s *Schema) Exec(db Querier, row interface{}, sql string, args ...interface{}) (int, error)
Exec executes the query with the given row and optional arguments. It returns the number of rows affected by the statement.
If the statement is an INSERT statement and the row has an auto-increment field, then the row is updated with the value of the auto-increment column, as long as the SQL driver supports this functionality.
func (*Schema) Prepare ¶
func (s *Schema) Prepare(row interface{}, query string) (*Stmt, error)
Prepare creates a prepared statement for later queries or executions. Multiple queries or executions may be run concurrently from the returned statement.
Example ¶
type UserRow struct {
	ID         int `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

// Define different schemas for different dialects and naming conventions
mssql := NewSchema(
	WithDialect(MSSQL),
	WithNamingConvention(SameCase),
)
mysql := NewSchema(
	WithDialect(MySQL),
	WithNamingConvention(LowerCase),
)
postgres := NewSchema(
	WithDialect(Postgres),
	WithNamingConvention(SnakeCase),
)

// for each schema, print the SQL generated for each statement
for _, schema := range []*Schema{mssql, mysql, postgres} {
	stmt, err := schema.Prepare(UserRow{}, `insert into users({}) values({})`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(stmt)
}

Output:

insert into users([GivenName],[FamilyName]) values(?,?)
insert into users(`givenname`,`familyname`) values(?,?)
insert into users("given_name","family_name") values($1,$2)
func (*Schema) Select ¶
func (s *Schema) Select(db Querier, rows interface{}, sql string, args ...interface{}) (int, error)
Select executes a SELECT query and stores the result in rows. The argument passed to rows can be one of the following:
A pointer to a slice of structs; or a pointer to a slice of struct pointers; or a pointer to a struct.

When rows is a pointer to a slice, it is populated with one item for each row returned by the SELECT query.
When rows is a pointer to a struct, it is populated with the first row returned from the query. This is a good option when the query will only return one row.
Select returns the number of rows returned by the SELECT query.
Example (MultipleRows) ¶
type UserRow struct {
	ID         int `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

// Schema for an MSSQL database, where column names
// are the same as the Go struct field names.
mssql := NewSchema(
	WithDialect(MSSQL),
	WithNamingConvention(SameCase),
)

// find users with search terms
var rows []UserRow
n, err := mssql.Select(db, &rows, `
	select {alias u}
	from [Users] u
	inner join [UserSearchTerms] t on t.UserID = u.ID
	where t.SearchTerm like ?
	limit ? offset ?`, "smith%", 100, 0)
if err != nil {
	log.Fatal(err)
}
if n > 0 {
	for i, row := range rows {
		log.Printf("row %d: %v", i, row)
	}
} else {
	log.Printf("not found")
}
Example (OneRow) ¶
type UserRow struct {
	ID         int `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

// Schema for an MSSQL database, where column names
// are the same as the Go struct field names.
mssql := NewSchema(
	WithDialect(MSSQL),
	WithNamingConvention(SameCase),
)

// find user with ID=42
var row UserRow
n, err := mssql.Select(db, &row, `select {} from [Users] where ID=?`, 42)
if err != nil {
	log.Fatal(err)
}
if n > 0 {
	log.Printf("found: %v", row)
} else {
	log.Printf("not found")
}
type SchemaOption ¶
type SchemaOption func(schema *Schema)
A SchemaOption provides optional configuration and is supplied when creating a new Schema, or cloning a Schema.
func ForDB ¶
func ForDB(db *sql.DB) SchemaOption
ForDB creates an option that sets the dialect for the open DB handle.
func WithDialect ¶
func WithDialect(dialect Dialect) SchemaOption
WithDialect provides an option that sets the schema's dialect.
func WithField ¶
func WithField(fieldName string, columnName string) SchemaOption
WithField creates an option that maps a Go field name to a database column name.
It is more common to override column names in the struct tag of the field, but there are some cases where it makes sense to declare column name overrides directly with the schema. One situation is with fields within embedded structures. For example, with the following structures:
type UserRow struct {
	Name        string
	HomeAddress Address
	WorkAddress Address
}

type Address struct {
	Street   string
	Locality string
	State    string
}
If the column name for HomeAddress.Locality is called "home_suburb" for historical reasons, then it is not possible to specify a rename in the structure tag without also affecting the WorkAddress.Locality field. In this situation it is only possible to specify the column name override using the WithField option:
schema := NewSchema(
	WithField("HomeAddress.Locality", "home_suburb"),
)
func WithIdentifier ¶
func WithIdentifier(identifier string, meaning string) SchemaOption
WithIdentifier creates an option that performs a global rename of an identifier when preparing SQL queries. This option is not needed very often: its main purpose is for helping a program operate against two different database schemas where table and column names follow a different naming convention.
The example shows a situation where a program operates against an SQL Server database where a table is named "[User]", but the same table is named "users" in the Postgres schema.
Example ¶
// Take an example of a program that operates against an SQL Server
// database where a table is named "[User]", but the same table is
// named "users" in the Postgres schema.
mssql := NewSchema(
	WithDialect(MSSQL),
	WithNamingConvention(SameCase),
	WithIdentifier("[User]", "users"),
	WithIdentifier("UserId", "user_id"),
	WithIdentifier("[Name]", "name"),
)
postgres := NewSchema(
	WithDialect(Postgres),
	WithNamingConvention(SnakeCase),
)

type User struct {
	UserId int `sql:"primary key"`
	Name   string
}

// The same query text can be prepared against both schemas,
// and each schema generates SQL in its own dialect.
const query = "select {} from users where user_id = ?"

mssqlStmt, err := mssql.Prepare(User{}, query)
if err != nil {
	log.Fatal(err)
}
fmt.Println(mssqlStmt)

postgresStmt, err := postgres.Prepare(User{}, query)
if err != nil {
	log.Fatal(err)
}
fmt.Println(postgresStmt)

Output:

select [UserId],[Name] from [User] where UserId = ?
select "user_id","name" from users where user_id = $1
func WithKey ¶
func WithKey(key string) SchemaOption
WithKey creates an option that associates the schema with a key in struct field tags. This option is not needed very often: its main purpose is for helping a program operate against two different database schemas.
func WithNamingConvention ¶
func WithNamingConvention(convention NamingConvention) SchemaOption
WithNamingConvention creates an option that sets the schema's naming convention.
type Stmt ¶
type Stmt struct {
	// contains filtered or unexported fields
}
Stmt is a prepared statement. A Stmt is safe for concurrent use by multiple goroutines.
func (*Stmt) Exec ¶
Exec executes the prepared statement with the given row and optional arguments. It returns the number of rows affected by the statement.
If the statement is an INSERT statement and the row has an auto-increment field, then the row is updated with the value of the auto-increment column as long as the SQL driver supports this functionality.
Example (Delete) ¶
type UserRow struct {
	ID         int `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

schema := NewSchema()

stmt, err := schema.Prepare(UserRow{}, `delete from users where {}`)
if err != nil {
	log.Fatal(err)
}

// ... later on ...

row := UserRow{
	ID:         42,
	GivenName:  "John",
	FamilyName: "Citizen",
}
_, err = stmt.Exec(db, row)
if err != nil {
	log.Fatal(err)
}
Example (Insert) ¶
type UserRow struct {
	ID         int `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

schema := NewSchema()

stmt, err := schema.Prepare(UserRow{}, `insert into users({}) values({})`)
if err != nil {
	log.Fatal(err)
}

// ... later on ...

row := UserRow{
	GivenName:  "John",
	FamilyName: "Citizen",
}
_, err = stmt.Exec(db, row)
if err != nil {
	log.Fatal(err)
}
Example (Update) ¶
type UserRow struct {
	ID         int `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

schema := NewSchema()

stmt, err := schema.Prepare(UserRow{}, `update users set {} where {}`)
if err != nil {
	log.Fatal(err)
}

// ... later on ...

row := UserRow{
	ID:         42,
	GivenName:  "John",
	FamilyName: "Citizen",
}
_, err = stmt.Exec(db, row)
if err != nil {
	log.Fatal(err)
}
func (*Stmt) Select ¶
Select executes the prepared query statement with the given arguments and returns the query results in rows. If rows is a pointer to a slice of structs then one item is added to the slice for each row returned by the query. If row is a pointer to a struct then that struct is filled with the result of the first row returned by the query. In both cases Select returns the number of rows returned by the query.
Example (MultipleRows) ¶
type UserRow struct {
	ID         int `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

schema := NewSchema()

stmt, err := schema.Prepare(UserRow{}, `
	select {alias u}
	from users u
	inner join user_search_terms t on t.user_id = u.id
	where t.search_term like ?
	limit ? offset ?`)
if err != nil {
	log.Fatal(err)
}

// ... later on ...

// find users with search terms
var rows []UserRow
n, err := stmt.Select(db, &rows, "smith%", 100, 0)
if err != nil {
	log.Fatal(err)
}
if n > 0 {
	for i, row := range rows {
		log.Printf("row %d: %v", i, row)
	}
} else {
	log.Printf("not found")
}
Example (OneRow) ¶
type UserRow struct {
	ID         int `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

schema := NewSchema()

stmt, err := schema.Prepare(UserRow{}, `select {} from users where {}`)
if err != nil {
	log.Fatal(err)
}

// ... later on ...

// find user with ID=42
var row UserRow
n, err := stmt.Select(db, &row, 42)
if err != nil {
	log.Fatal(err)
}
if n > 0 {
	log.Printf("found: %v", row)
} else {
	log.Printf("not found")
}
Source Files ¶

Directories ¶
| Path | Synopsis |
|---|---|
| cmd | |
| cmd/sqlr-gen (command) | sqlr-gen is a code generation utility for SQL CRUD operations. |
| docs | Directory docs contains detailed documentation. |
| private | Package private and subdirectories have no backward compatibility guarantees. |
| private/codegen | Package codegen provides the code-generation functionality used by the sqlr-gen tool. |
| private/column | Package column extracts database column information from Go struct fields. |
| private/dialect | Package dialect handles differences in various SQL dialects. |
| private/naming | Package naming provides naming conventions used to convert Go struct field names to database columns. |
| private/scanner | Package scanner implements a simple lexical scanner for SQL statements. |
| private/wherein | Package wherein expands SQL statements that have placeholders that can accept slices of arguments. |