Documentation ¶
Index ¶
- func IsRegexOp(t Token) bool
- func LoadTokenMap(keywordTokens map[Token]string)
- func ScanBareIdent(r io.RuneScanner) string
- func ScanDelimited(r io.RuneScanner, start, end rune, escapes map[rune]rune, escapesPassThru bool) ([]byte, error)
- func ScanString(r io.RuneScanner) (string, error)
- type Pos
- type Scanner
- type Token
- type TokenBuffer
- func (s *TokenBuffer) Current() (tok Token, pos Pos, lit string)
- func (s *TokenBuffer) OneRuneOperators(b bool)
- func (s *TokenBuffer) Peek() (tok Token, pos Pos, lit string)
- func (s *TokenBuffer) PeekRune() rune
- func (s *TokenBuffer) Scan() (tok Token, pos Pos, lit string)
- func (s *TokenBuffer) ScanFunc(scan func() (Token, Pos, string)) (tok Token, pos Pos, lit string)
- func (s *TokenBuffer) ScanRegex() (tok Token, pos Pos, lit string)
- func (s *TokenBuffer) Unscan()
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func LoadTokenMap ¶
func LoadTokenMap(keywordTokens map[Token]string)
LoadTokenMap allows extra keywords to be added to the lexer.
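A minimal sketch of registering extra keywords. The import path "example.com/lexer" is a placeholder, and the custom SELECT/WHERE tokens and their numeric values are assumptions chosen to avoid the built-in constants; they are not part of this package.

package main

import (
	lexer "example.com/lexer" // placeholder import path; substitute the real one
)

// Hypothetical extra keyword tokens; the offset 1000 is assumed to avoid
// colliding with the package's built-in Token constants.
const (
	SELECT lexer.Token = iota + 1000
	WHERE
)

func main() {
	// Register the keywords so the scanner can emit SELECT/WHERE instead of IDENT.
	lexer.LoadTokenMap(map[lexer.Token]string{
		SELECT: "SELECT",
		WHERE:  "WHERE",
	})
}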
func ScanBareIdent ¶
func ScanBareIdent(r io.RuneScanner) string
ScanBareIdent reads a bare identifier from a rune reader.
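A small sketch of calling ScanBareIdent against a strings.Reader, which satisfies io.RuneScanner. The import path is a placeholder, and the expected output is an assumption about where the identifier ends.

package main

import (
	"fmt"
	"strings"

	lexer "example.com/lexer" // placeholder import path
)

func main() {
	// strings.Reader implements io.RuneScanner.
	r := strings.NewReader("my_field rest")

	ident := lexer.ScanBareIdent(r)
	fmt.Println(ident) // expected to print the identifier up to the first non-identifier rune
}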
func ScanDelimited ¶
func ScanDelimited(r io.RuneScanner, start, end rune, escapes map[rune]rune, escapesPassThru bool) ([]byte, error)
ScanDelimited reads a delimited set of runes from a rune reader.
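A hedged sketch of reading a double-quoted, backslash-escaped literal with ScanDelimited. The import path is a placeholder; whether the reader must be positioned at the opening delimiter, and the exact semantics of the escapes map and escapesPassThru flag, are assumptions here rather than documented behaviour.

package main

import (
	"fmt"
	"strings"

	lexer "example.com/lexer" // placeholder import path
)

func main() {
	// Assumed: the reader starts at the opening delimiter.
	r := strings.NewReader(`"my \"quoted\" ident"`)

	// Assumed: the map translates the rune following a backslash.
	escapes := map[rune]rune{'"': '"', '\\': '\\'}

	b, err := lexer.ScanDelimited(r, '"', '"', escapes, false)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}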
func ScanString ¶
func ScanString(r io.RuneScanner) (string, error)
ScanString reads a quoted string from a rune reader.
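A minimal sketch of scanning a quoted string literal. The import path is a placeholder, and the assumption is that the reader is positioned at the opening quote and that backslash escapes inside the string are handled by the function.

package main

import (
	"fmt"
	"strings"

	lexer "example.com/lexer" // placeholder import path
)

func main() {
	// Assumed: the reader starts at the opening quote.
	r := strings.NewReader(`'hello \'world\''`)

	s, err := lexer.ScanString(r)
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}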
Types ¶
type Pos ¶
Pos specifies the line and character position of a token. The Char and Line are both zero-based indexes.
type Scanner ¶
type Scanner struct {
	OneRuneOperators bool // only scan one rune operators (+, not ++)
	// contains filtered or unexported fields
}
Scanner represents a lexical scanner for InfluxQL.
func NewScanner ¶
NewScanner returns a new instance of Scanner.
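A short sketch of constructing a Scanner and restricting it to one-rune operators via the exported field. The import path is a placeholder, and the NewScanner signature (taking an io.Reader, returning *Scanner) is assumed by analogy with NewTokenBuffer rather than taken from this documentation.

package main

import (
	"strings"

	lexer "example.com/lexer" // placeholder import path
)

func main() {
	// Assumed signature: NewScanner(r io.Reader) *Scanner.
	s := lexer.NewScanner(strings.NewReader("a ++ b"))

	// With OneRuneOperators set, "++" is expected to scan as two PLUS tokens
	// rather than a single PLUSPLUS token.
	s.OneRuneOperators = true
}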
type Token ¶
type Token int
Token represents a lexer symbol.
const (
	ILLEGAL Token = iota
	EOF
	WS
	LPAREN      // (
	RPAREN      // )
	LBRACKET    // [
	RBRACKET    // ]
	LCURLY      // {
	RCURLY      // }
	COMMA       // ,
	SEMICOLON   // ;
	COLON       // :
	DOT         // .
	SINGLEQUOTE // '
	DOUBLEQUOTE // "
	PERCENT     // %
	DOLLAR      // $
	HASH        // #
	ATSIGN      // @
	ARROW       // ->
	EQARROW     // =>
	IDENT
	INTEGER
	DECIMAL
	STRING
	BADSTRING
	BADESCAPE
	TRUE
	FALSE
	REGEX
	BADREGEX
	DURATION
	PLUS       // +
	PLUSPLUS   // ++
	MINUS      // -
	MINUSMINUS // --
	MUL        // *
	DIV        // /
	AMPERSAND  // &
	XOR        // ^
	PIPE       // |
	LSHIFT     // <<
	RSHIFT     // >>
	POW        // **
	AND        // AND
	OR         // OR
	EQ         // =
	NEQ        // !=
	EQEQ       // ==
	EQREGEX    // =~
	NEQREGEX   // !~
	LT         // <
	LTE        // <=
	GT         // >
	GTE        // >=
	VARARG     // ...
)
Token enums
func (Token) IsOperator ¶
IsOperator returns true for operator tokens.
func (Token) Precedence ¶
Precedence returns the operator precedence of the binary operator token.
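A hedged sketch of using IsOperator and Precedence when deciding how tightly two binary operators bind, e.g. inside a Pratt-style parser. The import path and the helper higherPrecedence are hypothetical, and Precedence is assumed to return an int where larger values bind tighter.

package main

import (
	"fmt"

	lexer "example.com/lexer" // placeholder import path
)

// higherPrecedence is a hypothetical helper. It assumes IsOperator reports
// operator tokens and Precedence returns an int precedence value.
func higherPrecedence(a, b lexer.Token) bool {
	if !a.IsOperator() || !b.IsOperator() {
		return false
	}
	return a.Precedence() > b.Precedence()
}

func main() {
	// Multiplication is expected to bind tighter than addition.
	fmt.Println(higherPrecedence(lexer.MUL, lexer.PLUS))
}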
type TokenBuffer ¶
type TokenBuffer struct {
	IgnoreWhitespace bool
	// contains filtered or unexported fields
}
TokenBuffer represents a wrapper around the scanner that adds a buffer. It provides a fixed-length circular buffer so that scanned tokens can be unread.
func NewTokenBuffer ¶
func NewTokenBuffer(r io.Reader) *TokenBuffer
NewTokenBuffer returns a new buffered scanner for a reader.
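A minimal sketch of the typical scan loop: wrap a reader in a TokenBuffer, skip whitespace, and read tokens until EOF. Only the documented API is used; the import path is a placeholder.

package main

import (
	"fmt"
	"strings"

	lexer "example.com/lexer" // placeholder import path
)

func main() {
	buf := lexer.NewTokenBuffer(strings.NewReader("foo = 10 + 2"))
	buf.IgnoreWhitespace = true // skip WS tokens while scanning

	// Scan until EOF, printing each token with its position and literal.
	// Line and Char are the zero-based position fields described under Pos.
	for {
		tok, pos, lit := buf.Scan()
		if tok == lexer.EOF {
			break
		}
		fmt.Printf("%d:%d %v %q\n", pos.Line, pos.Char, tok, lit)
	}
}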
func (*TokenBuffer) Current ¶
func (s *TokenBuffer) Current() (tok Token, pos Pos, lit string)
Current returns the last read token.
func (*TokenBuffer) OneRuneOperators ¶
func (s *TokenBuffer) OneRuneOperators(b bool)
OneRuneOperators sets the scanner option to ignore multi-rune operators.
func (*TokenBuffer) Peek ¶
func (s *TokenBuffer) Peek() (tok Token, pos Pos, lit string)
Peek reads the next token and then unscans.
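A small sketch of one-token lookahead with Peek: because Peek scans and then unscans, the following Scan returns the same token. The import path is a placeholder; everything else follows the documented TokenBuffer API.

package main

import (
	"fmt"
	"strings"

	lexer "example.com/lexer" // placeholder import path
)

func main() {
	buf := lexer.NewTokenBuffer(strings.NewReader("-x"))
	buf.IgnoreWhitespace = true

	// Look ahead without consuming the token.
	if tok, _, _ := buf.Peek(); tok == lexer.MINUS {
		fmt.Println("next token is a minus")
	}

	// Scan still returns the peeked MINUS token.
	tok, _, lit := buf.Scan()
	fmt.Println(tok == lexer.MINUS, lit)
}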
func (*TokenBuffer) PeekRune ¶
func (s *TokenBuffer) PeekRune() rune
PeekRune returns the next rune from the scanner.
func (*TokenBuffer) Scan ¶
func (s *TokenBuffer) Scan() (tok Token, pos Pos, lit string)
Scan reads the next token from the scanner.
func (*TokenBuffer) ScanRegex ¶
func (s *TokenBuffer) ScanRegex() (tok Token, pos Pos, lit string)
ScanRegex reads a regex token from the scanner.
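A hedged sketch of switching to regex scanning after a regex operator, using the package-level IsRegexOp helper from the index. The import path is a placeholder, and the slash-delimited regex literal syntax is an assumption rather than documented here.

package main

import (
	"fmt"
	"strings"

	lexer "example.com/lexer" // placeholder import path
)

func main() {
	buf := lexer.NewTokenBuffer(strings.NewReader("host =~ /us-.*/"))
	buf.IgnoreWhitespace = true

	buf.Scan()             // IDENT "host"
	op, _, _ := buf.Scan() // EQREGEX "=~"

	// After a regex operator, scan the next token in regex mode so the
	// slash-delimited literal comes back as a REGEX token.
	if lexer.IsRegexOp(op) {
		tok, _, lit := buf.ScanRegex()
		fmt.Println(tok == lexer.REGEX, lit)
	}
}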
func (*TokenBuffer) Unscan ¶
func (s *TokenBuffer) Unscan()
Unscan pushes the previously read token back onto the buffer.