pypuppetdbquery.lexer module

exception pypuppetdbquery.lexer.LexException(message, position)[source]

Bases: Exception

Raised for errors encountered during lexing.

Such errors include, for example, an unknown token or unexpected end of input. The position of the lexer when the error was encountered (the index into the input string) is stored in the position attribute.
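The position attribute makes it possible to point at the offending character when reporting an error. A minimal sketch, assuming only what is documented above (the format_lex_error helper is hypothetical, not part of this module):

```python
# Hypothetical helper: render a lexing error with a caret under the
# character at `position`, the index stored on a LexException.
def format_lex_error(query, message, position):
    return "\n".join([
        message,
        query,
        " " * position + "^",  # caret under the offending character
    ])

print(format_lex_error("nodes { certname ? }", "unknown token", 17))
```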

class pypuppetdbquery.lexer.Lexer(**kwargs)[source]

Bases: object

Lexer for the PuppetDBQuery language.

This class uses ply.lex.lex() to implement the lexer (or tokenizer). It is used by pypuppetdbquery.parser.Parser to tokenize queries.

The arguments to the constructor are passed directly to ply.lex.lex().

Note

Many of the docstrings in this class are consumed by ply.lex to build the lexer: they contain token regular expressions rather than descriptions. The generated documentation for this class is therefore of limited use.

input(s)[source]

Reset and supply input to the lexer.

Tokens then need to be obtained using token() or the iterator interface provided by this class.
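The input()/token()/iterator contract can be illustrated with a self-contained toy lexer. This is a sketch built on Python's re module, not the real ply-based implementation; it covers only a handful of the rules documented below (keywords such as "and" are left as STRING, and unmatched characters are silently skipped rather than raising LexException):

```python
import re
from collections import namedtuple

Token = namedtuple("Token", ["type", "value", "pos"])

# A few of the documented rules, ordered so that longer operators
# ('>=', '!~') win over their single-character prefixes, and FLOAT
# wins over NUMBER. (Toy shortcut: 'trueness' would split into
# BOOLEAN + STRING; the real lexer handles this properly.)
_RULES = [
    ("FLOAT", r"-?\d+\.\d+"),
    ("NUMBER", r"-?\d+"),
    ("BOOLEAN", r"true|false"),
    ("NOTMATCH", r"!~"),
    ("NOTEQUALS", r"!="),
    ("GREATERTHANEQ", r">="),
    ("GREATERTHAN", r">"),
    ("LESSTHANEQ", r"<="),
    ("LESSTHAN", r"<"),
    ("EQUALS", r"="),
    ("MATCH", r"~"),
    ("STRING", r"[-\w_:]+"),
]
_MASTER = re.compile("|".join("(?P<%s>%s)" % rule for rule in _RULES))

class MiniLexer:
    """Toy lexer exposing the same input()/token()/iterator shape."""

    def input(self, s):
        # Reset and supply input; tokens are produced lazily.
        self._matches = _MASTER.finditer(s)

    def token(self):
        for m in self._matches:
            return Token(m.lastgroup, m.group(), m.start())
        return None  # input exhausted

    def __iter__(self):
        return self

    def __next__(self):
        tok = self.token()
        if tok is None:
            raise StopIteration
        return tok
```

Consuming the real Lexer works the same way: call input() with the query string, then pull tokens with token() or a for loop.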

next()[source]

Implementation of iterator.next().

Return the next token from the input. If there are no further tokens, raise the StopIteration exception.

t_ASTERISK = '\\*'
t_AT = '@'
t_BOOLEAN(t)[source]

true|false

t_DOT = '\\.'
t_EQUALS = '='
t_EXPORTED = '@@'
t_FLOAT(t)[source]

-?\d+\.\d+

t_GREATERTHAN = '>'
t_GREATERTHANEQ = '>='
t_HASH = '[#]'
t_LBRACE = '{'
t_LBRACK = '\\['
t_LESSTHAN = '<'
t_LESSTHANEQ = '<='
t_LPAREN = '\\('
t_MATCH = '~'
t_NOTEQUALS = '!='
t_NOTMATCH = '!~'
t_NUMBER(t)[source]

-?\d+

t_RBRACE = '}'
t_RBRACK = '\\]'
t_RPAREN = '\\)'
t_STRING_bareword(t)[source]

[-\w_:]+

t_STRING_double_quoted(t)[source]

"(\.|[^\"])*"

t_STRING_single_quoted(t)[source]

'(\.|[^\'])*'

t_error(t)[source]
t_ignore = ' \t\n\r\x0c\x0b'
t_keyword(t)[source]

not|and|or
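A single rule matching not|and|or alongside the generic bareword rule is a standard ply idiom: the keyword rule fires for those words and re-tags the token with its dedicated type (NOT, AND, OR) from the tokens list. A sketch of that re-tagging step, an assumption about how t_keyword behaves based on common ply practice rather than this module's source:

```python
# Hypothetical re-tagging step: map keyword spellings to their
# dedicated token types; anything else keeps the generic type it
# was matched with (e.g. STRING).
KEYWORDS = {"not": "NOT", "and": "AND", "or": "OR"}

def retag(token_type, value):
    return KEYWORDS.get(value, token_type)
```

With this, retag("STRING", "and") yields "AND" while an ordinary bareword such as "certname" keeps its STRING type.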

token()[source]

Obtain one token from the input.

tokens = ('LPAREN', 'RPAREN', 'LBRACK', 'RBRACK', 'LBRACE', 'RBRACE', 'EQUALS', 'NOTEQUALS', 'MATCH', 'NOTMATCH', 'LESSTHANEQ', 'LESSTHAN', 'GREATERTHANEQ', 'GREATERTHAN', 'ASTERISK', 'HASH', 'DOT', 'NOT', 'AND', 'OR', 'BOOLEAN', 'NUMBER', 'STRING', 'FLOAT', 'EXPORTED', 'AT')

List of token names handled by the lexer.