shlex

package
v0.1.5
Published: Oct 7, 2022 License: AGPL-3.0 Imports: 5 Imported by: 0

Documentation

Overview

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Index

Constants

This section is empty.

Variables

View Source
var (
	ErrNoClosing = errors.New("No closing quotation")
	ErrNoEscaped = errors.New("No escaped character")
)

Functions

func Split

func Split(s string, posix bool) ([]string, error)

Split splits a string according to POSIX or non-POSIX rules, as selected by the posix argument.
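To illustrate what POSIX-style splitting means in practice, here is a hand-rolled, self-contained sketch of this kind of tokenization. It is not the package's implementation: it only handles quotes and whitespace, and it assumes (as Python's shlex does) that non-POSIX mode keeps the quote characters in the resulting words while POSIX mode strips them.

```go
package main

import (
	"fmt"
	"strings"
)

// naiveSplit sketches shell-like word splitting. It honors single and
// double quotes; in posix mode the quote characters are stripped from
// the output, in non-posix mode they are kept (this mirrors Python's
// shlex behavior; the package may differ in details).
func naiveSplit(s string, posix bool) []string {
	var (
		words  []string
		buf    strings.Builder
		quote  rune // active quote character, 0 if none
		inWord bool
	)
	for _, r := range s {
		switch {
		case quote != 0: // inside a quoted region
			if r == quote {
				quote = 0
				if !posix {
					buf.WriteRune(r)
				}
			} else {
				buf.WriteRune(r)
			}
		case r == '\'' || r == '"': // opening quote
			quote = r
			inWord = true
			if !posix {
				buf.WriteRune(r)
			}
		case r == ' ' || r == '\t': // word boundary
			if inWord {
				words = append(words, buf.String())
				buf.Reset()
				inWord = false
			}
		default:
			buf.WriteRune(r)
			inWord = true
		}
	}
	if inWord {
		words = append(words, buf.String())
	}
	return words
}

func main() {
	fmt.Println(naiveSplit(`cp "my file" /tmp`, true))  // [cp my file /tmp]
	fmt.Println(naiveSplit(`cp "my file" /tmp`, false)) // [cp "my file" /tmp]
}
```

Note that a quoted region may contain whitespace, so "my file" survives as a single word in both modes; only the treatment of the quote characters themselves differs.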

Types

type DefaultTokenizer

type DefaultTokenizer struct{}

DefaultTokenizer implements a simple tokenizer like that of a Unix shell.

func (*DefaultTokenizer) IsEscape

func (t *DefaultTokenizer) IsEscape(r rune) bool

func (*DefaultTokenizer) IsEscapedQuote

func (t *DefaultTokenizer) IsEscapedQuote(r rune) bool

func (*DefaultTokenizer) IsQuote

func (t *DefaultTokenizer) IsQuote(r rune) bool

func (*DefaultTokenizer) IsWhitespace

func (t *DefaultTokenizer) IsWhitespace(r rune) bool

func (*DefaultTokenizer) IsWord

func (t *DefaultTokenizer) IsWord(r rune) bool

type Lexer

type Lexer struct {
	// contains filtered or unexported fields
}

Lexer represents a lexical analyzer.

func NewGitLexer

func NewGitLexer(r io.Reader) *Lexer

NewGitLexer creates a new Lexer reading from an io.Reader. This Lexer has a DefaultTokenizer that follows the posix and whitespacesplit rules.

func NewGitLexerString

func NewGitLexerString(s string) *Lexer

NewGitLexerString creates a new Lexer reading from a string. This Lexer has a DefaultTokenizer that follows the posix and whitespacesplit rules.

func NewLexer

func NewLexer(r io.Reader, posix, whitespacesplit bool) *Lexer

NewLexer creates a new Lexer reading from an io.Reader. This Lexer has a DefaultTokenizer whose behavior is controlled by the posix and whitespacesplit arguments.

func NewLexerString

func NewLexerString(s string, posix, whitespacesplit bool) *Lexer

NewLexerString creates a new Lexer reading from a string. This Lexer has a DefaultTokenizer whose behavior is controlled by the posix and whitespacesplit arguments.

func (*Lexer) SetTokenizer

func (l *Lexer) SetTokenizer(t Tokenizer)

SetTokenizer sets a Tokenizer.

func (*Lexer) Split

func (l *Lexer) Split() ([]string, error)
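Putting the constructors and this method together, a usage sketch might look like the following. The import path is omitted because this page does not show the module path, and the expected result is an assumption based on the documented posix behavior, not a verified output:

```go
// Assuming the package is imported under the name shlex
// (module path omitted: it is not shown on this page).
l := shlex.NewLexerString(`git commit -m "fix bug"`, true, true)
args, err := l.Split()
if err != nil {
	// e.g. shlex.ErrNoClosing when a quotation is never closed
	log.Fatal(err)
}
// With posix rules one would expect: ["git" "commit" "-m" "fix bug"]
fmt.Println(args)
```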

type Tokenizer

type Tokenizer interface {
	IsWord(rune) bool
	IsWhitespace(rune) bool
	IsQuote(rune) bool
	IsEscape(rune) bool
	IsEscapedQuote(rune) bool
}

Tokenizer is the interface that classifies a rune as part of a word, whitespace, a quotation, an escape, or an escaped quotation.
