ANTLR uses .tokens files to define the tokens for a parser grammar, not the lexer grammar. Because of that it is possible there's no lexer grammar at all, but only a .tokens file (e.g. a manually created one).
In such a case (no lexer grammar) internal generation does not produce a .tokens file, and hence all imported token types are shown as invalid/undefined.
Interpreter data loading needs to take care of this situation by constructing a stripped-down interpreter data structure whose token types are loaded from a pre-generated .tokens file instead of the usual .interp file.
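A sketch of the .tokens parsing step that such a stripped-down loader would need, assuming the documented .tokens format of one `NAME=type` or `'literal'=type` assignment per line (the function name `parseTokensFile` is hypothetical, not part of the plugin's API):

```typescript
// Hypothetical helper: parse the content of an ANTLR .tokens file into
// symbolic-name and literal maps, which a stripped-down interpreter data
// structure could use in place of the token data from an .interp file.
function parseTokensFile(content: string): {
    names: Map<string, number>;
    literals: Map<string, number>;
} {
    const names = new Map<string, number>();
    const literals = new Map<string, number>();
    for (const rawLine of content.split(/\r?\n/)) {
        const line = rawLine.trim();
        if (line.length === 0) {
            continue;
        }
        // Split on the last '=' so entries like '='=3 parse correctly.
        const eq = line.lastIndexOf("=");
        if (eq < 1) {
            continue; // Skip malformed lines.
        }
        const key = line.substring(0, eq);
        const type = Number(line.substring(eq + 1));
        if (key.startsWith("'")) {
            literals.set(key, type); // Literal entry, e.g. '+'=2
        } else {
            names.set(key, type); // Symbolic entry, e.g. ID=1
        }
    }
    return { names, literals };
}
```

For example, `parseTokensFile("ID=1\n'+'=2")` yields `names` with `ID → 1` and `literals` with `'+' → 2`.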
Hey, stumbled upon this. My case precisely: I re-built a lexer in Rust by hand and want to integrate it with a very sophisticated ANTLR parser, ~300 token types.
A simple workaround almost works: in addition to generating a .tokens file, generate a dummy lexer grammar containing only a tokens {} block. But with no rules, antlr4 is not entirely happy: "error(99): test_lex.g4::: grammar test_lex has no rules".
That error can be ignored, though, since there is no real need to run antlr4 on the dummy lexer grammar. Your plugin doesn't mind the error in the lexer grammar, the parser-grammar-related features start to work (e.g. the ATN graph is generated), and the undefined-token errors go away.
Hack-y, of course.
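For reference, the generated dummy grammar looks roughly like this (the token names here are made up for illustration; a real one would list all the types from the .tokens file):

```antlr
// Dummy lexer grammar with no rules, only token type declarations.
// antlr4 reports error(99) "grammar test_lex has no rules" on it,
// but that error can be ignored for this workaround.
lexer grammar test_lex;

tokens { IDENTIFIER, NUMBER, STRING, PLUS, MINUS }
```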
Anyways, absolutely great work on all the antlr4 related stuff. Besides being a heavy user of your vscode plugin, we've recently started adopting antlr-c3 and your antlr4ng runtime for auto-completion of a custom editor for my match pattern language. Thanks a lot for all of this work!