Documentation
Overview
Package lexer provides lexical analysis for Cypher query strings. It tokenizes input strings into a stream of tokens that can be consumed by the parser.
The lexer supports:
- Identifiers and keywords
- Numbers (integers and floats)
- Strings (single and double quoted)
- Operators (+, -, *, /, %, ^, =, !=, <, <=, >, >=)
- Delimiters (parentheses, braces, brackets, commas, etc.)
- Comments (single-line // and multi-line /* */)
Basic Usage:
	l := lexer.New("MATCH (n:Person) RETURN n.name")
	tokens, err := l.Tokenize()
	if err != nil {
		log.Fatal(err)
	}
	for _, tok := range tokens {
		fmt.Printf("%s: %s\n", tok.Type, tok.Value)
	}
Index
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type Lexer

	type Lexer struct {
		// contains filtered or unexported fields
	}
Lexer tokenizes Cypher query strings into a stream of tokens.
func New

	func New(input string) *Lexer

New creates a new Lexer for the given input string.
Parameters:
- input: The Cypher query string to tokenize
Returns a new Lexer instance.
Example:

	l := lexer.New("MATCH (n:Person) RETURN n.name")
	tokens, err := l.Tokenize()
func (*Lexer) Tokenize

	func (l *Lexer) Tokenize() ([]Token, error)

Tokenize processes the entire input and returns all tokens. It returns an error if any lexical errors are encountered.
Returns the slice of tokens and any error encountered.
Example:

	l := lexer.New("MATCH (n:Person) RETURN n.name")
	tokens, err := l.Tokenize()
	if err != nil {
		log.Fatal(err)
	}
type LexerError

	type LexerError struct {
		Line    int    // Line number where the error occurred
		Column  int    // Column number where the error occurred
		Message string // Error message
	}
LexerError represents a lexical error with position information.
type LexerErrors

	type LexerErrors struct {
		Errors []error // Slice of errors
	}
LexerErrors aggregates multiple lexical errors.
func (*LexerErrors) Error

	func (e *LexerErrors) Error() string
Error returns a string containing all error messages.
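The aggregation pattern can be sketched with local stand-ins for the two error types. The field layouts below follow the declarations shown in this documentation, but the exact message formatting (the "line:column: message" shape and the "; " separator) is an assumption for illustration, not the package's actual output:

```go
package main

import (
	"fmt"
	"strings"
)

// LexerError mirrors the documented struct: one lexical error
// with its position in the input.
type LexerError struct {
	Line    int
	Column  int
	Message string
}

// Error formats a single error with its position (assumed format).
func (e *LexerError) Error() string {
	return fmt.Sprintf("%d:%d: %s", e.Line, e.Column, e.Message)
}

// LexerErrors aggregates multiple lexical errors, as documented.
type LexerErrors struct {
	Errors []error
}

// Error joins all contained messages into one string
// (separator is an assumption).
func (e *LexerErrors) Error() string {
	msgs := make([]string, len(e.Errors))
	for i, err := range e.Errors {
		msgs[i] = err.Error()
	}
	return strings.Join(msgs, "; ")
}

func main() {
	errs := &LexerErrors{Errors: []error{
		&LexerError{Line: 1, Column: 7, Message: "unterminated string"},
		&LexerError{Line: 2, Column: 3, Message: "invalid character '@'"},
	}}
	fmt.Println(errs.Error())
	// prints: 1:7: unterminated string; 2:3: invalid character '@'
}
```

Aggregating into a single error value lets Tokenize report every lexical problem in one pass instead of stopping at the first.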
type Token

	type Token struct {
		Type     TokenType // The type of the token
		Value    string    // The string value of the token
		Line     int       // Line number where the token appears
		Column   int       // Column number where the token appears
		Position int       // Byte position in the input
	}
Token represents a lexical token with its type, value, and position.
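A typical consumer walks the token slice and dispatches on the Type field. The sketch below uses local stand-ins for Token and TokenType (reduced to a handful of constants; the real set is defined under TokenType below) and a hypothetical identifiers helper to show the pattern:

```go
package main

import "fmt"

// Local stand-ins for the package's types, reduced to what
// this sketch needs.
type TokenType int

const (
	TokenEOF TokenType = iota
	TokenIdentifier
	TokenLParen
	TokenRParen
)

type Token struct {
	Type     TokenType
	Value    string
	Line     int
	Column   int
	Position int
}

// identifiers collects the value of every identifier token — a
// typical consumer-side use of the Type and Value fields.
func identifiers(tokens []Token) []string {
	var names []string
	for _, tok := range tokens {
		if tok.Type == TokenIdentifier {
			names = append(names, tok.Value)
		}
	}
	return names
}

func main() {
	// A hand-built stream standing in for Tokenize output on "(n)".
	tokens := []Token{
		{Type: TokenLParen, Value: "(", Line: 1, Column: 1, Position: 0},
		{Type: TokenIdentifier, Value: "n", Line: 1, Column: 2, Position: 1},
		{Type: TokenRParen, Value: ")", Line: 1, Column: 3, Position: 2},
		{Type: TokenEOF, Value: "", Line: 1, Column: 4, Position: 3},
	}
	fmt.Println(identifiers(tokens)) // prints: [n]
}
```

The Line, Column, and Position fields carry enough context for a parser to produce positioned error messages without re-scanning the input.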
type TokenType

	type TokenType int

TokenType represents the type of a lexical token.

	const (
		TokenEOF TokenType = iota
		TokenError
		TokenInteger
		TokenFloat
		TokenString
		TokenBoolean
		TokenNull
		TokenIdentifier
		TokenPlus
		TokenStar
		TokenSlash
		TokenPercent
		TokenCaret
		TokenEq
		TokenNeq
		TokenLt
		TokenLe
		TokenGt
		TokenGe
		TokenAnd
		TokenOr
		TokenNot
		TokenXor
		TokenLParen
		TokenRParen
		TokenLBrace
		TokenRBrace
		TokenLBracket
		TokenRBracket
		TokenColon
		TokenComma
		TokenDot
		TokenSemicolon
		TokenPipe
		TokenArrowRight
		TokenArrowLeft
		TokenDash
		TokenPlusEq
		TokenRange
		TokenDollar
	)
Token type constants.
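The constants use Go's iota enumeration pattern. The basic-usage example above prints tok.Type with %s, which suggests TokenType satisfies fmt.Stringer; the sketch below shows one way that could work, using a small subset of the constants. The String implementation and the printed names are assumptions, not the package's actual code:

```go
package main

import "fmt"

type TokenType int

// A small subset of the package's token types, declared with
// iota exactly as in the full const block.
const (
	TokenEOF TokenType = iota
	TokenError
	TokenInteger
	TokenFloat
)

// String maps each constant to a readable name so %s prints
// something useful; the real package may differ.
func (t TokenType) String() string {
	names := [...]string{"EOF", "Error", "Integer", "Float"}
	if int(t) < len(names) {
		return names[t]
	}
	return fmt.Sprintf("TokenType(%d)", int(t))
}

func main() {
	fmt.Printf("%s %s\n", TokenInteger, TokenFloat)
	// prints: Integer Float
}
```

Because the values are assigned by iota, their order in the const block is significant: inserting a new constant mid-block renumbers everything after it, which matters if token types are ever persisted as integers.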