Implementing Lexer

The lexer, or lexical analyzer, defines how a file's contents are broken into tokens. The lexer serves as a foundation for nearly all features of custom language plugins, from basic syntax highlighting to advanced code analysis features.

The API for the lexer is defined by the Lexer interface.
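
To give a feel for that contract, the Lexer API is essentially a cursor over tokens: start() positions the lexer on a buffer, getTokenType() together with getTokenStart()/getTokenEnd() describes the current token, and advance() moves to the next token until getTokenType() returns null. The following is a minimal sketch; the concrete lexer passed in is hypothetical and would come from a custom language implementation:

import com.intellij.lexer.Lexer;

public final class LexerWalkthrough {

  // Prints every token the given lexer produces for the given text.
  // The Lexer instance would come from a concrete implementation for a
  // custom language, e.g. a FlexAdapter-based lexer (see below).
  public static void dumpTokens(Lexer lexer, CharSequence text) {
    lexer.start(text);
    while (lexer.getTokenType() != null) {
      CharSequence tokenText = text.subSequence(lexer.getTokenStart(), lexer.getTokenEnd());
      System.out.println(lexer.getTokenType() + " '" + tokenText + "'");
      lexer.advance();
    }
  }
}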

The IDE invokes the lexer in three main contexts, and the plugin can provide different lexer implementations for each if needed: syntax highlighting, building the syntax tree of a file, and building the index of the words contained in a file.

The lexer used for syntax highlighting can be invoked incrementally to process only the changed part of a file. In contrast, lexers used in the other contexts are always called to process an entire file, or a complete language construct embedded in a file of a different language.

Lexer State

A lexer that can be used incrementally may need to return its state, that is, the context corresponding to each position in a file. For example, a Java lexer could have separate states for top-level context, comment context, and string literal context.

An essential requirement for a syntax highlighting lexer is that its state must be represented by a single integer number returned from Lexer.getState(). That state will be passed to the Lexer.start() method, along with the start offset of the fragment to process, when lexing is resumed from the middle of a file. Lexers used in other contexts can always return 0 from getState().
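
As an illustration of that contract, the following sketch (the lexer passed in is hypothetical) records the state at the token boundary at or just before a given offset during a full pass, then restarts lexing from that boundary with the recorded state; a correct incremental lexer must then produce the same tokens as the full pass did:

import com.intellij.lexer.Lexer;

public final class IncrementalRestartDemo {

  // Lexes from the beginning until the token covering restartOffset,
  // remembering that token's start offset and the lexer state there,
  // then resumes lexing from that boundary with the saved state.
  public static void restartFrom(Lexer lexer, CharSequence text, int restartOffset) {
    lexer.start(text, 0, text.length(), 0);
    int state = 0;
    int boundary = 0;
    while (lexer.getTokenType() != null && lexer.getTokenStart() <= restartOffset) {
      state = lexer.getState();        // state valid at the current token's start
      boundary = lexer.getTokenStart();
      lexer.advance();
    }

    // Resume from the middle of the file using the saved state.
    lexer.start(text, boundary, text.length(), state);
    while (lexer.getTokenType() != null) {
      System.out.println(lexer.getTokenType() + " at " + lexer.getTokenStart());
      lexer.advance();
    }
  }
}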

Lexer Implementation

The easiest way to create a lexer for a custom language plugin is to use JFlex.

Classes FlexLexer and FlexAdapter adapt JFlex lexers to the IntelliJ Platform Lexer API. A patched version of JFlex can be used with the lexer skeleton file idea-flex.skeleton, located in the IntelliJ IDEA Community Edition source, to create lexers compatible with FlexAdapter. The patched version of JFlex provides a new command-line option --charat that changes the JFlex-generated code to work with the IntelliJ Platform skeleton. Enabling the --charat option passes the source data for lexing as a java.lang.CharSequence rather than as an array of characters.

For developing lexers with JFlex, the Grammar-Kit plugin can be helpful. It provides syntax highlighting and other conveniences for editing JFlex files (*.flex).
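
As a sketch, a FlexAdapter-based lexer is usually a thin wrapper around the class JFlex generates from the .flex file; the generated class and adapter names below are hypothetical:

import com.intellij.lexer.FlexAdapter;
import java.io.Reader;

// Hypothetical adapter exposing a JFlex-generated lexer (_MyLanguageLexer,
// produced from MyLanguage.flex with the idea-flex.skeleton skeleton file)
// through the IntelliJ Platform Lexer API.
public class MyLanguageLexerAdapter extends FlexAdapter {
  public MyLanguageLexerAdapter() {
    super(new _MyLanguageLexer((Reader) null));
  }
}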

Token Types

Types of tokens for lexers are defined by instances of IElementType. Many token types common to all languages are defined in the TokenType interface. Custom language plugins should reuse these token types wherever applicable. For all other token types, the plugin needs to create new IElementType instances and associate them with the language in which the token type is used. The same IElementType instance should be returned every time the lexer encounters a particular token type.

Example: Token types for Properties language plugin
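
A minimal sketch of such token type declarations, with a hypothetical MyLanguage and hypothetical token names; each constant is created once and returned by the lexer whenever the corresponding token occurs:

import com.intellij.psi.tree.IElementType;

// Hypothetical token types of a custom language. The language instance ties
// each token type to the language in which it is used.
public interface MyTokenTypes {
  IElementType IDENTIFIER = new IElementType("IDENTIFIER", MyLanguage.INSTANCE);
  IElementType NUMBER = new IElementType("NUMBER", MyLanguage.INSTANCE);
  IElementType IF_KEYWORD = new IElementType("IF_KEYWORD", MyLanguage.INSTANCE);
  IElementType ELSE_KEYWORD = new IElementType("ELSE_KEYWORD", MyLanguage.INSTANCE);
}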

Groups of related token types (e.g., keywords) can be defined using TokenSet. All TokenSets for a language should be grouped in a dedicated $Language$TokenSets class for reuse.

Example: GroovyTokenSets
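
A sketch of such a token set class, grouping the hypothetical token types declared above:

import com.intellij.psi.tree.TokenSet;

// Hypothetical $Language$TokenSets-style class; the sets can be reused by the
// parser, syntax highlighter, brace matcher, and other components.
public interface MyTokenSets {
  TokenSet KEYWORDS = TokenSet.create(MyTokenTypes.IF_KEYWORD, MyTokenTypes.ELSE_KEYWORD);
  TokenSet LITERALS = TokenSet.create(MyTokenTypes.NUMBER);
}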

Embedded Language

An important feature that can be implemented at the lexer level is mixing languages within a file, for example, embedding fragments of Java code in a template language. If a language supports embedding its fragments in another language, it needs to define chameleon token types for the different kinds of fragments that can be embedded, and these token types need to be instances of ILazyParseableElementType. The enclosing language's lexer returns the entire fragment of the embedded language as a single chameleon token of the type defined by the embedded language. To parse the contents of the chameleon token, the IDE calls the parser of the embedded language through ILazyParseableElementType.parseContents().
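
For illustration, a minimal sketch of a chameleon token type, with hypothetical InnerLanguage and constant names; the outer language's lexer would return one token of this type spanning the whole embedded fragment:

import com.intellij.psi.tree.IElementType;
import com.intellij.psi.tree.ILazyParseableElementType;

// Hypothetical chameleon token type for fragments of InnerLanguage embedded in
// files of the outer language. The default parseContents() implementation
// parses the token's text with the ParserDefinition registered for the token
// type's language, here InnerLanguage.INSTANCE.
public interface OuterTemplateTokenTypes {
  IElementType INNER_FRAGMENT =
      new ILazyParseableElementType("INNER_FRAGMENT", InnerLanguage.INSTANCE);
}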

Last modified: 29 September 2022