astparser

package
v2.0.0-rc.231
Published: Oct 20, 2025 License: MIT Imports: 11 Imported by: 12

Documentation

Overview

Package astparser is used to turn raw GraphQL documents into an AST.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ParseGraphqlDocumentBytes

func ParseGraphqlDocumentBytes(input []byte) (ast.Document, operationreport.Report)

ParseGraphqlDocumentBytes takes a raw GraphQL document as a byte slice and parses it into an AST. This function creates a new parser and a new AST for every call, so you shouldn't use it in a hot path. Instead, create parser and AST objects once and re-use them.

func ParseGraphqlDocumentString

func ParseGraphqlDocumentString(input string) (ast.Document, operationreport.Report)

ParseGraphqlDocumentString takes a raw GraphQL document as a string and parses it into an AST. This function creates a new parser and a new AST for every call, so you shouldn't use it in a hot path. Instead, create parser and AST objects once and re-use them.
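
A minimal sketch of a one-off parse with the convenience functions. The import path and the Document.RootNodes field are assumptions based on the companion ast package of the graphql-go-tools v2 module; this page does not state the module path.

package main

import (
	"fmt"

	// Assumed import path; not stated on this page.
	"github.com/wundergraph/graphql-go-tools/v2/pkg/astparser"
)

func main() {
	// The byte-slice and string variants behave identically.
	doc, report := astparser.ParseGraphqlDocumentString(`query { hero { name } }`)
	if report.HasErrors() {
		fmt.Println(report.Error())
		return
	}
	fmt.Printf("parsed %d root nodes\n", len(doc.RootNodes)) // RootNodes is assumed from the ast package
}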

Types

type ErrDepthLimitExceeded

type ErrDepthLimitExceeded struct {
	// contains filtered or unexported fields
}

ErrDepthLimitExceeded is returned when the parser encounters nesting depth that exceeds the configured limit during tokenization. This error helps prevent stack overflow and DoS attacks from maliciously deep GraphQL documents.

func (ErrDepthLimitExceeded) Error

func (e ErrDepthLimitExceeded) Error() string

type ErrFieldsLimitExceeded

type ErrFieldsLimitExceeded struct {
	// contains filtered or unexported fields
}

ErrFieldsLimitExceeded is returned when the parser encounters a number of fields that exceeds the configured limit during tokenization. This error helps prevent DoS attacks from maliciously large GraphQL documents.

func (ErrFieldsLimitExceeded) Error

func (e ErrFieldsLimitExceeded) Error() string

type ErrUnexpectedIdentKey

type ErrUnexpectedIdentKey struct {
	// contains filtered or unexported fields
}

ErrUnexpectedIdentKey is a custom error object used to properly render an unexpected ident key error.

func (ErrUnexpectedIdentKey) Error

func (e ErrUnexpectedIdentKey) Error() string

type ErrUnexpectedToken

type ErrUnexpectedToken struct {
	// contains filtered or unexported fields
}

ErrUnexpectedToken is a custom error object containing all necessary information to properly render an unexpected token error.

func (ErrUnexpectedToken) Error

func (e ErrUnexpectedToken) Error() string

type Parser

type Parser struct {
	// contains filtered or unexported fields
}

Parser takes a raw input and turns it into an AST. Use NewParser() to create a parser. Don't create new parsers in the hot path; re-use them.

func NewParser

func NewParser() *Parser

NewParser returns a new parser with all values properly initialized

func (*Parser) Parse

func (p *Parser) Parse(document *ast.Document, report *operationreport.Report)

Parse parses all input in Document.Input into the Document.
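
A sketch of the re-use pattern recommended above for hot paths. The ast.NewDocument, Document.Reset, and Input.ResetInputString helpers are assumptions from the companion ast package, not documented on this page.

package main

import (
	"fmt"

	"github.com/wundergraph/graphql-go-tools/v2/pkg/ast"
	"github.com/wundergraph/graphql-go-tools/v2/pkg/astparser"
	"github.com/wundergraph/graphql-go-tools/v2/pkg/operationreport"
)

func main() {
	parser := astparser.NewParser()
	doc := ast.NewDocument() // assumed: allocates a Document with pre-sized internal slices

	for _, query := range []string{`{ a }`, `{ b }`} {
		doc.Reset() // assumed: clears the Document for re-use
		doc.Input.ResetInputString(query)

		report := operationreport.Report{}
		parser.Parse(doc, &report)
		if report.HasErrors() {
			fmt.Println(report.Error())
			continue
		}
		fmt.Printf("parsed %q into %d root nodes\n", query, len(doc.RootNodes))
	}
}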

func (*Parser) ParseType

func (p *Parser) ParseType() (ref int)

func (*Parser) ParseValue

func (p *Parser) ParseValue() (value ast.Value)

func (*Parser) ParseWithLimits

func (p *Parser) ParseWithLimits(limits TokenizerLimits, document *ast.Document, report *operationreport.Report) (TokenizerStats, error)

ParseWithLimits parses all input in Document.Input into the Document, enforcing limits on the number of fields and the nesting depth.
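
A hedged sketch of enforcing limits and distinguishing the two limit errors with errors.As. The limit values are arbitrary, and the ast-package helpers are assumptions as above.

package main

import (
	"errors"
	"fmt"

	"github.com/wundergraph/graphql-go-tools/v2/pkg/ast"
	"github.com/wundergraph/graphql-go-tools/v2/pkg/astparser"
	"github.com/wundergraph/graphql-go-tools/v2/pkg/operationreport"
)

func main() {
	parser := astparser.NewParser()

	doc := ast.NewDocument() // assumed helper from the ast package
	doc.Input.ResetInputString(`{ a { b { c } } }`)

	report := operationreport.Report{}
	limits := astparser.TokenizerLimits{MaxDepth: 15, MaxFields: 200} // arbitrary example values

	stats, err := parser.ParseWithLimits(limits, doc, &report)

	var depthErr astparser.ErrDepthLimitExceeded
	var fieldsErr astparser.ErrFieldsLimitExceeded
	switch {
	case errors.As(err, &depthErr):
		fmt.Println("rejected: nesting depth exceeds MaxDepth:", err)
	case errors.As(err, &fieldsErr):
		fmt.Println("rejected: field count exceeds MaxFields:", err)
	case err != nil || report.HasErrors():
		fmt.Println("parse failed")
	default:
		fmt.Printf("ok: depth=%d fields=%d\n", stats.TotalDepth, stats.TotalFields)
	}
}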

func (*Parser) PrepareImport

func (p *Parser) PrepareImport(document *ast.Document, report *operationreport.Report)

PrepareImport prepares the Parser for importing new Nodes into an AST without directly parsing the content

type Tokenizer

type Tokenizer struct {
	// contains filtered or unexported fields
}

Tokenizer takes a raw input and turns it into a set of tokens.

func NewTokenizer

func NewTokenizer() *Tokenizer

NewTokenizer returns a new tokenizer

func (*Tokenizer) Peek

func (t *Tokenizer) Peek() token.Token

Peek returns the token after currentToken if hasNextToken is true; otherwise it returns a token with keyword.EOF.

func (*Tokenizer) Read

func (t *Tokenizer) Read() token.Token

Read increments the currentToken index and returns that token if hasNextToken is true; otherwise it returns a token with keyword.EOF.

func (*Tokenizer) Tokenize

func (t *Tokenizer) Tokenize(input *ast.Input)
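
A sketch of driving the Tokenizer directly to inspect the token stream with Peek and Read. The ast.Input helper and the keyword package path are assumptions based on the module's companion packages.

package main

import (
	"fmt"

	"github.com/wundergraph/graphql-go-tools/v2/pkg/ast"
	"github.com/wundergraph/graphql-go-tools/v2/pkg/astparser"
	"github.com/wundergraph/graphql-go-tools/v2/pkg/lexer/keyword"
)

func main() {
	input := &ast.Input{}
	input.ResetInputString(`query { hello }`) // assumed helper on ast.Input

	tokenizer := astparser.NewTokenizer()
	tokenizer.Tokenize(input)

	// Peek looks at the next token without consuming it.
	fmt.Printf("first token: %v\n", tokenizer.Peek().Keyword)

	// Read consumes tokens until the EOF keyword is returned.
	for {
		tok := tokenizer.Read()
		if tok.Keyword == keyword.EOF {
			break
		}
		fmt.Printf("token: %v\n", tok.Keyword)
	}
}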

func (*Tokenizer) TokenizeWithLimits

func (t *Tokenizer) TokenizeWithLimits(limits TokenizerLimits, input *ast.Input) (TokenizerStats, error)

type TokenizerLimits

type TokenizerLimits struct {
	MaxDepth  int
	MaxFields int
}

type TokenizerStats

type TokenizerStats struct {
	TotalDepth  int
	TotalFields int
}
