Documentation
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type Lexer
type Lexer struct {
// contains filtered or unexported fields
}
Lexer splits a UTF-8-encoded input into tokens.
func (*Lexer) Ident
Ident returns the identifier if the current token is tIdent. Call Token() first to check the token kind.
func (*Lexer) Next
func (l *Lexer) Next()
Next reads the next token from the input. After that, Token() will return the token that was read.
If the Lexer input is at EOF, the next Token() call will return tEOF. If reading failed (e.g. because the input is not valid UTF-8), the next Token() call will return tError.
func (*Lexer) TokenStartPos
TokenStartPos returns the starting position of the last token read by Next().
func (*Lexer) Uint64Number
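Taken together, the methods above imply a scan loop roughly like the sketch below. This is only an illustration: NewLexer, Token(), and the token kind names (tIdent, tEOF, tError, tNumber) are assumptions based on the method comments, not documented exports, and the real signatures may differ.

// Illustrative sketch only: NewLexer, Token() and the token kinds referenced
// below are assumed from the method comments; they are not documented exports.
func scanIdents(input []byte) error {
	l := NewLexer(input) // assumed constructor over UTF-8-encoded input
	for {
		l.Next()           // read the next token from the input
		switch l.Token() { // Token() reports the token that Next() read
		case tEOF: // input exhausted
			return nil
		case tError: // e.g. the input was not valid UTF-8
			return fmt.Errorf("lex error at %v", l.TokenStartPos())
		case tIdent: // identifier text is available via Ident()
			fmt.Printf("identifier %q at %v\n", l.Ident(), l.TokenStartPos())
		case tNumber: // assumed token kind; value available via Uint64Number()
			fmt.Printf("number %d at %v\n", l.Uint64Number(), l.TokenStartPos())
		}
	}
}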
type Parser
type Parser struct {
// contains filtered or unexported fields
}
Parser parses a STEF IDL input into a Schema.
This is a recursive descent parser with a separate lexer for tokenization.
func NewParser
NewParser creates a new parser with the specified lexer as its input. fileName is used for composing error messages (if any).
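Typical use of the two types together would look roughly like the sketch below. NewLexer, the Parse method returning the Schema, and the argument order of NewParser are assumptions inferred from the descriptions above, not documented signatures.

// Illustrative sketch only: NewLexer, Parse and the Schema result are assumed
// from the descriptions above; the real constructor and entry point may differ.
func parseSchema(input []byte) (*Schema, error) {
	lexer := NewLexer(input)                  // lexer over the STEF IDL source
	parser := NewParser(lexer, "schema.stef") // fileName appears only in error messages
	schema, err := parser.Parse()             // assumed entry point producing the Schema
	if err != nil {
		return nil, err // errors are composed using the fileName given above
	}
	return schema, nil
}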