lexer

package v0.9.0

Published: Sep 17, 2025 License: MIT Imports: 1 Imported by: 0

Documentation

Overview

Package lexer implements the lexical analyzer for the Monke programming language.

The lexer is responsible for breaking down the source code into tokens, which are the smallest units of meaning in the language. It reads the input character by character and produces a stream of tokens that can be processed by the parser.

Key features:

  • Tokenization of all language elements (keywords, identifiers, literals, operators, etc.)
  • Handling of whitespace and comments
  • Error detection for illegal characters
  • Support for various token types defined in the token package
  • Optimized for performance with minimal allocations

The main entry points are the New function, which creates a new Lexer instance, and the NextToken method, which returns the next token from the input.
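For illustration, a minimal sketch of that flow. The module path example.com/monke is hypothetical, and the token.EOF constant and the Type field on token.Token are assumptions based on the companion token package; adjust them to the actual module.

	package main

	import (
		"fmt"

		"example.com/monke/lexer" // hypothetical import path
		"example.com/monke/token" // hypothetical import path
	)

	func main() {
		input := `let add = fn(x, y) { x + y; };`

		// New initializes the lexer over the full input string.
		l := lexer.New(input)

		// NextToken yields one token per call until the input is exhausted.
		for tok := l.NextToken(); tok.Type != token.EOF; tok = l.NextToken() {
			fmt.Printf("%+v\n", tok)
		}
	}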

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Lexer

type Lexer struct {
	// contains filtered or unexported fields
}

Lexer represents the lexer for the Monke programming language.

func New

func New(input string) *Lexer

New creates a new Lexer with the given input string. It initializes the lexer, reads the first character, and sets up the token buffer.
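Because New takes the whole input as a string, a fresh Lexer is typically created per source unit. A REPL-style sketch, again assuming the hypothetical example.com/monke module path and a token.Token that exposes Type and Literal fields (as the NextToken description suggests):

	package main

	import (
		"bufio"
		"fmt"
		"os"

		"example.com/monke/lexer" // hypothetical import path
		"example.com/monke/token" // hypothetical import path
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			// One Lexer per line; New reads the first character up front.
			l := lexer.New(sc.Text())
			for tok := l.NextToken(); tok.Type != token.EOF; tok = l.NextToken() {
				fmt.Printf("%-10s %q\n", tok.Type, tok.Literal)
			}
		}
	}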

func (*Lexer) NextToken

func (l *Lexer) NextToken() token.Token

NextToken reads the next token from the input. It skips whitespace, identifies the token type based on the current character, and returns a token with the appropriate type and literal value.
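A sketch of a consumer loop that dispatches on the returned token's type. The token.ILLEGAL and token.EOF constants are assumptions in line with the illegal-character detection noted in the overview, and the import paths are again hypothetical:

	package main

	import (
		"fmt"

		"example.com/monke/lexer" // hypothetical import path
		"example.com/monke/token" // hypothetical import path
	)

	func main() {
		l := lexer.New(`let x = 5; @`)
		for {
			tok := l.NextToken()
			switch tok.Type {
			case token.EOF:
				return
			case token.ILLEGAL:
				// Unrecognized characters surface as ILLEGAL tokens.
				fmt.Printf("illegal character: %q\n", tok.Literal)
			default:
				fmt.Printf("%-10s %q\n", tok.Type, tok.Literal)
			}
		}
	}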
