Lexical Analysis - Lexer - Lexical (Analyzer|Tokenizer|Scanner)

1 - About

Lexers are also known as:

  • Lexical analyzers
  • Lexical tokenizers
  • Lexical scanners

A lexer defines how the contents of a file are broken into tokens.

A lexer reads an input character or byte stream (i.e. characters, binary data, etc.) and divides it into tokens using a set of lexical rules, frequently defined by regular expressions (see the steps below).

This process of transforming an input text into discrete components called tokens is known as:

  • lexical analysis,
  • lexing,
  • tokenizing,
  • or simply scanning.

A lexer is a stateful stream generator (i.e. the position in the source file is saved). Every time it is advanced, it returns the next token in the source. Normally, the final token emitted by the lexer is an EOF token, and it will repeatedly return that same EOF token whenever it is called again.
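As a minimal sketch of this behavior (in Python; the class and token names are invented for this illustration, and the sketch splits on whitespace only, whereas real lexers use richer rules):

  # A hypothetical stateful lexer: it saves its position in the source
  # and, once the input is exhausted, returns the same EOF token forever.
  EOF = "<EOF>"

  class Lexer:
      def __init__(self, source):
          self.source = source
          self.pos = 0  # the saved position in the source

      def next_token(self):
          # Skip whitespace between lexemes.
          while self.pos < len(self.source) and self.source[self.pos].isspace():
              self.pos += 1
          # Input exhausted: repeatedly return the same EOF token.
          if self.pos >= len(self.source):
              return EOF
          # Advance over one whitespace-delimited lexeme.
          start = self.pos
          while self.pos < len(self.source) and not self.source[self.pos].isspace():
              self.pos += 1
          return self.source[start:self.pos]

  lexer = Lexer("sum = 3 + 2 ;")
  print([lexer.next_token() for _ in range(8)])
  # ['sum', '=', '3', '+', '2', ';', '<EOF>', '<EOF>']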

Lexical analysis is the first step of a compiler. In the second step, the tokens can then be processed by a parser.

Lexers are generally quite simple and do nothing with the tokens. Most of the complexity is deferred to later stages such as parsing.

For example, a typical lexical analyzer recognizes parentheses as tokens, but does nothing to ensure that each “(” is matched with a “)”. This syntax analysis is left to the parser.

Lexers can be generated by automated tools called compiler-compilers.

2 - Example

Consider the following expression:

sum = 3 + 2;

It is tokenized into the following sequence of lexemes and token types:

Lexeme   Token type
sum      Identifier
=        Assignment operator
3        Integer literal
+        Addition operator
2        Integer literal
;        End of statement
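For comparison, Python's standard tokenize module performs the same kind of analysis on this expression (its token names, such as NAME and NUMBER, differ from the types in the table above, since it lexes Python source):

  # Lex the example expression with Python's standard-library tokenizer.
  import io
  import tokenize

  source = "sum = 3 + 2;\n"
  for tok in tokenize.generate_tokens(io.StringIO(source).readline):
      print(tokenize.tok_name[tok.type], repr(tok.string))
  # NAME 'sum', OP '=', NUMBER '3', OP '+', NUMBER '2', OP ';',
  # then NEWLINE and ENDMARKER (the EOF-like final token).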

3 - Steps

Tokenizing is generally done in a single pass.

  1. Group the characters into meaningful sequences, called lexemes, that adhere to the grammar. The rules that define tokens are frequently regular expressions.
  2. Categorize each lexeme (a sequence of characters) into a token (a symbol of the vocabulary of the language). If the lexer finds an invalid token, it reports an error (see the sketch below).
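As a hedged illustration of both steps in a single pass (the rule set and the tokenize function are invented for this sketch; a lexer generator would derive similar code from a grammar), the following Python tokenizer defines lexemes with regular expressions, categorizes each match, and reports an error on an invalid token:

  # A hypothetical single-pass tokenizer for the example expression language.
  import re

  # Step 1: lexeme rules, defined by regular expressions.
  TOKEN_RULES = [
      ("Identifier",          r"[A-Za-z_][A-Za-z0-9_]*"),
      ("Integer literal",     r"[0-9]+"),
      ("Assignment operator", r"="),
      ("Addition operator",   r"\+"),
      ("End of statement",    r";"),
  ]
  MASTER = re.compile("|".join(f"(?P<g{i}>{p})" for i, (_, p) in enumerate(TOKEN_RULES)))

  def tokenize(source):
      pos = 0
      while pos < len(source):
          if source[pos].isspace():  # whitespace separates lexemes
              pos += 1
              continue
          m = MASTER.match(source, pos)
          if m is None:
              # No rule matches: report an invalid token.
              raise SyntaxError(f"invalid token at position {pos}: {source[pos]!r}")
          # Step 2: categorize the lexeme by the rule that matched.
          kind = TOKEN_RULES[int(m.lastgroup[1:])][0]
          yield (m.group(), kind)
          pos = m.end()

  print(list(tokenize("sum = 3 + 2;")))
  # [('sum', 'Identifier'), ('=', 'Assignment operator'), ('3', 'Integer literal'),
  #  ('+', 'Addition operator'), ('2', 'Integer literal'), (';', 'End of statement')]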

4 - Tools
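Well-known examples of such compiler-compilers that generate lexers include:

  • lex / flex
  • ANTLR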

5 - Documentation / Reference
