TODO: remove this in the next major version; IOrAlt is enough.
Type of End Of File Token.
The default grammar resolver error message provider used by Chevrotain. This can be used as the basis for custom error providers when using Chevrotain's custom APIs.
The default grammar validation error message provider used by Chevrotain. This can be used as the basis for custom error providers when using Chevrotain's custom APIs.
This is the default logic Chevrotain uses to construct lexing error messages. It can be used as a reference or as a starting point to customize a lexer's error messages.
This is the default logic Chevrotain uses to construct parsing error messages. It can be used as a reference or as a starting point to customize a parser's error messages.
A convenience function used to express an empty alternative in an OR (alternation). It can be used to more clearly describe the intent in the case of an empty alternative.
For example:
without using EMPTY_ALT:
this.OR([
  {ALT: () => {
    this.CONSUME1(OneTok)
    return "1"
  }},
  {ALT: () => {
    this.CONSUME1(TwoTok)
    return "2"
  }},
  // implicitly empty because there are no invoked grammar
  // rules (OR/MANY/CONSUME...) inside this alternative.
  {ALT: () => {
    return "666"
  }},
])
using EMPTY_ALT:
this.OR([
  {ALT: () => {
    this.CONSUME1(OneTok)
    return "1"
  }},
  {ALT: () => {
    this.CONSUME1(TwoTok)
    return "2"
  }},
  // explicitly empty, clearer intent
  {ALT: EMPTY_ALT("666")},
])
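Conceptually, EMPTY_ALT can be thought of as a helper that wraps a value in a no-op grammar action. The following is a minimal sketch of that idea, not Chevrotain's actual source:

```javascript
// Sketch (assumption, not library code): EMPTY_ALT returns a function
// that consumes no tokens and simply yields the provided value.
function EMPTY_ALT(value) {
  return () => value
}

// Usage: the alternative's ALT property is just this function.
const alt = { ALT: EMPTY_ALT("666") }
console.log(alt.ALT()) // -> "666"
```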
A utility for assigning unique occurrence indices to a grammar AST (rules parameter). This can be useful when using Chevrotain to create custom APIs.
Will generate HTML source code (text). This HTML text will render syntax diagrams for the provided grammar.
Creates a new TokenType which can then be used to define a Lexer and a Parser.
A utility to create Chevrotain IToken "instances". Note that Chevrotain tokens are not real TokenType instances, and thus instanceof cannot be used with them.
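Because Chevrotain tokens are plain objects rather than class instances, a token can be sketched as a plain object literal. The field names below (image, startOffset, endOffset, tokenType) follow the IToken interface; PlusTok and makeToken are hypothetical stand-ins for illustration, and real code should prefer createTokenInstance:

```javascript
// Sketch (assumption, not library code) of building an IToken-like
// plain object.
const PlusTok = { name: "PlusTok" } // hypothetical TokenType stand-in

function makeToken(tokenType, image, startOffset) {
  return {
    image,                                     // the matched text
    startOffset,                               // offset of the first character
    endOffset: startOffset + image.length - 1, // offset of the last character
    tokenType                                  // a reference, not a prototype link
  }
}

const tok = makeToken(PlusTok, "+", 10)
console.log(tok.image) // -> "+"
```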
Generates a Parser factory from a set of Rules.
This variant will create a factory function that, once invoked with an IParserConfig, will return a Parser object.
Note that this happens using the Function constructor (a type of "eval"), so it will not work in environments where a content security policy is enabled, such as certain websites, Chrome extensions, etc.
This means this function is best used in development flows to shorten feedback loops, or in production flows targeting Node.js only.
For production flows targeting a browser runtime see generateParserModule.
See detailed docs for Custom APIs.
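The "type of eval" mentioned above refers to JavaScript's Function constructor, which compiles source text at runtime and is therefore blocked by a strict content security policy. A tiny illustration of the mechanism (unrelated to any actual Chevrotain-generated source):

```javascript
// The Function constructor compiles a string into a callable at runtime,
// just like eval; CSP directives such as script-src without 'unsafe-eval'
// forbid this.
const src = "return a + b"
const add = new Function("a", "b", src)
console.log(add(2, 3)) // -> 5
```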
Generates a Parser's source text from a set of Rules.
This variant will generate the string literal for a UMD module (https://github.com/umdjs/umd) that exports a Parser constructor.
Note that the constructor exposed by the generated module must receive the TokenVocabulary as the first argument; the IParserConfig can be passed as the second argument.
See detailed docs for Custom APIs.
A utility to detect if an Error is a Chevrotain Parser's runtime exception.
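One plausible way such a check can work is by comparing the error's name against the names of Chevrotain's documented recognition exceptions. This is a sketch based on those documented names, not the library's actual source, so use the real isRecognitionException utility in practice:

```javascript
// Sketch (assumption): identify a recognition exception by its name
// property, using Chevrotain's documented exception names.
const RECOGNITION_EXCEPTION_NAMES = [
  "MismatchedTokenException",
  "NoViableAltException",
  "EarlyExitException",
  "NotAllInputParsedException"
]

function isRecognitionExceptionSketch(error) {
  return RECOGNITION_EXCEPTION_NAMES.indexOf(error.name) !== -1
}

const err = new Error("boom")
err.name = "MismatchedTokenException"
console.log(isRecognitionExceptionSketch(err))         // -> true
console.log(isRecognitionExceptionSketch(new Error())) // -> false
```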
A utility to resolve a grammar AST (rules parameter). "Resolving" means assigning the appropriate value for all NonTerminal.referencedRule properties in the grammar AST.
Serialize a Grammar to a JSON Object.
This can be useful for scenarios that require exporting the grammar structure, for example drawing syntax diagrams.
Like serializeGrammar but for a single GAST Production instead of a set of Rules.
Returns a human-readable label for a TokenType if one exists; otherwise returns the TokenType's name.
Labels are useful in improving the readability of error messages and syntax diagrams. To define labels provide the label property in the createToken config parameter.
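The fallback behavior described above can be sketched as a simple label-or-name lookup. The LABEL property access below is an assumption about how the label might be stored on a TokenType, so prefer the real tokenLabel utility:

```javascript
// Sketch (assumption about internal storage): fall back to the
// TokenType's name when no label was defined.
function tokenLabelSketch(tokType) {
  return tokType.LABEL !== undefined ? tokType.LABEL : tokType.name
}

// Hypothetical TokenType stand-ins for illustration.
const Semicolon = { name: "Semicolon", LABEL: ";" }
const Identifier = { name: "Identifier" }

console.log(tokenLabelSketch(Semicolon))  // -> ";"
console.log(tokenLabelSketch(Identifier)) // -> "Identifier"
```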
A utility method to check whether a token is of the type of the argument TokenType. This utility is needed because Chevrotain tokens support "categories", which means a TokenType may have multiple categories.
This means a simple comparison using the IToken.tokenType property may not suffice. For example:
import { createToken, tokenMatcher, Lexer } from "chevrotain"

// An "abstract" Token used only for categorization purposes.
const NumberTokType = createToken({ name: "NumberTokType", pattern: Lexer.NA })

const IntegerTokType = createToken({
  name: "IntegerTokType",
  pattern: /\d+/,
  // Integer "Is A" Number
  categories: [NumberTokType]
})

const DecimalTokType = createToken({
  name: "DecimalTokType",
  pattern: /\d+\.\d+/,
  // Decimal "Is A" Number
  categories: [NumberTokType]
})

// Will always be false, as the tokenType property can only be the
// Integer or Decimal TokenType; the Number TokenType is "abstract".
if (myToken.tokenType === NumberTokType) { /* ... */ }

// Will be true when myToken is of type Integer or Decimal,
// because the hierarchy defined by the categories is taken into account.
if (tokenMatcher(myToken, NumberTokType)) { /* ... */ }
true iff the token matches the TokenType.
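The category-aware matching described above can be sketched as a recursive walk over a type's categories. This is a simplification under the assumption that categories are reachable via a CATEGORIES property; the real library precomputes category membership for speed, so use the actual tokenMatcher:

```javascript
// Sketch (simplified assumption): a token matches a TokenType if its own
// type equals it, or if any (transitive) category does.
function tokenMatcherSketch(token, tokType) {
  function typeMatches(candidate) {
    if (candidate === tokType) return true
    const cats = candidate.CATEGORIES || []
    return cats.some(typeMatches)
  }
  return typeMatches(token.tokenType)
}

// Hypothetical TokenType stand-ins mirroring the example above.
const NumberTokType = { name: "NumberTokType" }
const IntegerTokType = { name: "IntegerTokType", CATEGORIES: [NumberTokType] }

const myToken = { image: "42", tokenType: IntegerTokType }
console.log(tokenMatcherSketch(myToken, NumberTokType))  // -> true
console.log(tokenMatcherSketch(myToken, IntegerTokType)) // -> true
```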
A utility to validate a grammar AST (rules parameter). For example: left recursion detection, ambiguity detection, ...
The maximum lookahead used in the grammar. This number is needed to perform ambiguity detection.
The Token Types used by the grammar.
Generated using TypeDoc