
When writing a parser in a parser combinator library like Haskell's Parsec, you usually have two choices:

  • Write a lexer to split your String input into tokens, then perform parsing on [Token]
  • Directly write parser combinators on String

The first method often seems to make sense given that many parsing inputs can be understood as tokens separated by whitespace.
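For concreteness, here is a minimal sketch of what the two shapes look like in Parsec. The Token type, the assignment parser and the naive lexer below are made up for illustration; they are not part of any library:

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- Choice 2: combinators directly on String.
    -- Whitespace has to be handled explicitly wherever it may occur.
    assignment :: Parser (String, Integer)
    assignment = do
      name <- many1 letter
      spaces >> char '=' >> spaces
      value <- read <$> many1 digit
      return (name, value)

    -- Choice 1: lex first, then parse the resulting [Token].
    data Token = TIdent String | TEquals | TInt Integer
      deriving (Show, Eq)

    lexString :: String -> [Token]
    lexString = map toToken . words   -- naive: split on whitespace only
      where
        toToken "=" = TEquals
        toToken w
          | all (`elem` ['0'..'9']) w = TInt (read w)
          | otherwise                 = TIdent w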

Elsewhere, I have seen people recommend against tokenizing (also called scanning or lexing), with simplicity cited as the main reason.

What are the general trade-offs between lexing and not lexing?

nh2
  • In my experience it depends on the language you are parsing. Certain languages are a real pain to parse without first lexing. – Pubby Mar 05 '13 at 05:21
  • @Pubby, no, there are no such languages. Any language can be parsed perfectly without a lexer. – SK-logic Mar 05 '13 at 10:29

1 Answer


The most important difference is that lexing translates your input domain: instead of a stream of characters, your parser then consumes a stream of tokens.

A nice result of this is that

  • You do not have to think about whitespace anymore. In a direct (non-lexing) parser, you have to sprinkle space parsers everywhere whitespace is allowed, which is easy to forget, and it clutters your code if whitespace must separate all your tokens anyway (see the lexeme sketch after this list).

  • You can think about your input in a piece-by-piece manner, which is easy for humans.
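On the whitespace point above: in a direct parser, the standard mitigation is the lexeme idiom, where every low-level parser consumes the whitespace that follows it. The helpers below are only a sketch; Text.Parsec.Token provides a ready-made version of this pattern via makeTokenParser:

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- Every basic parser eats trailing whitespace, so higher-level
    -- parsers never have to mention it.
    lexeme :: Parser a -> Parser a
    lexeme p = p <* spaces

    symbol :: String -> Parser String
    symbol = lexeme . string

    identifier :: Parser String
    identifier = lexeme (many1 letter)

    -- Accepts both "x = 42" and "x=42" without any explicit
    -- whitespace handling in this definition.
    binding :: Parser (String, String)
    binding = (,) <$> identifier <*> (symbol "=" *> lexeme (many1 digit))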

However, if you do perform lexing, you get the problems that

  • You cannot use common parsers on String anymore - e.g. for parsing a number with a library function parseFloat :: Parsec String s Float (that operates on a String input stream), you have to do something like takeNextToken :: TokenParser String and run the parseFloat parser on the token's text, inspecting the parse result (usually Either ErrorMessage a). This is messy to write and limits composability (a sketch follows after this list).

  • You have to adjust all error messages. If your parser on tokens fails at the 20th token, where in the input string is that? You'll have to manually map error locations back to the input string, which is tedious (in Parsec this means adjusting all SourcePos values).

  • Error reporting is generally worse. Running string "hello" *> space *> float on bad input like "hello4" will tell you precisely that the expected whitespace after hello is missing, while a lexer will just claim to have found an "invalid token".

  • Many things that one would expect to be atomic units, and thus to be separated out by a lexer, are actually rather hard for a lexer to identify. Take string literals, for example: suddenly "hello world" must stay one token instead of being split at the space, and an escaped quote like \" must not end the literal - while this is very natural for a parser (sketched after this list), it means complicated rules and special cases for a lexer.

  • You cannot re-use parsers on tokens as nicely. If you define how to parse a double out of a String, you can export it and the rest of the world can use it; a parser that expects your (specialized) tokens is not reusable in the same way, since consumers would first have to run your tokenizer.

  • You are stuck with it. When you are developing the language to parse, using a lexer might lead you into making early decisions, fixing things that you might want to change afterwards. For example, imagine you defined a language that contains some Float token. At some point, you want to introduce negative literals (-3.4 and - 3.4) - this might not be possible because the lexer interprets whitespace as a token separator. Using a parser-only approach, you stay more flexible, and making changes to your language is easier. This is not really surprising, since a parser is a more complex tool that inherently encodes its rules.
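As a rough sketch of the first two problems above (running String-level parsers against tokens, and keeping source positions by hand), assuming a hand-rolled Token type that stores its position - tokenPrim and SourcePos are real Parsec API, everything else here is illustrative:

    import Text.Parsec

    -- Each token carries its original position; without this, errors
    -- could only say "at token N" instead of "line 3, column 7".
    data Token = Token { tokPos :: SourcePos, tokText :: String }

    type TokenParser a = Parsec [Token] () a

    -- Accept the next token if it satisfies a predicate, telling
    -- Parsec how to print it and where the following token is.
    satisfyTok :: (String -> Bool) -> TokenParser String
    satisfyTok p = tokenPrim showTok nextPos testTok
      where
        showTok t = show (tokText t)
        nextPos pos _ []      = pos
        nextPos _   _ (t : _) = tokPos t
        testTok t
          | p (tokText t) = Just (tokText t)
          | otherwise     = Nothing

    -- Parsing a number means re-running a String-level parser on the
    -- token's text and translating its failure by hand.
    floatTok :: TokenParser Double
    floatTok = do
      txt <- satisfyTok (const True)
      case parse floatStr "" txt of      -- Either ParseError Double
        Right x -> return x
        Left _  -> parserFail ("expected a float, got " ++ show txt)
      where
        floatStr :: Parsec String () Double
        floatStr = do
          s <- many1 (digit <|> oneOf ".-")
          case reads s of
            [(x, "")] -> return x
            _         -> parserFail ("not a float: " ++ s)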
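And on the string-literal point: escaping is straightforward to express as a direct parser, which is the "very natural for a parser" part of that argument. A minimal sketch, with a deliberately tiny escape set:

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- "hello world" stays one value, and \" does not end the literal.
    stringLit :: Parser String
    stringLit = char '"' *> many character <* char '"'
      where
        character = (char '\\' *> escaped) <|> noneOf "\"\\"
        escaped   = choice
          [ '"'  <$ char '"'    -- \"  -> literal quote
          , '\\' <$ char '\\'   -- \\  -> literal backslash
          , '\n' <$ char 'n'    -- \n  -> newline
          ]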

To summarize, I would recommend writing lexer-free parsers for most cases.

In the end, a lexer is just a "dumbed-down"* parser - if you need a parser anyway, combine them into one.


* From computing theory, we know that all regular languages are also context-free languages; lexers are usually regular, parsers context-free or even context-sensitive (monadic parsers like Parsec can express context-sensitivity).

dave4420
nh2
  • What about performance? I guess if you're using Parsec anyhow performance isn't paramount, but it's still a possible consideration. – Tikhon Jelvis Mar 05 '13 at 05:26
  • Another potential issue with a dedicated lexer is that you won't be able to implement extensible parsers (with different sets of tokens) any more. – SK-logic Mar 05 '13 at 10:28
  • Nice answer, but I have to take issue with the "atomic units are hard in a lexer" example. Now I'm certainly not an expert in the theory, but I believe that delimited strings can be parsed pretty easily with a regular grammar. i.e. `/^"([^\\"]|\\")*"/` is a real regular expression (in the formal sense -- I think) that even deals with escaping. The point is well taken, though. – Matt Fenwick Mar 05 '13 at 19:20
  • I have to agree, performance should be mentioned here. I've converted from the all-parser style to parser+lexer (both in Parsec) then to parser+alex-generated-lexer, and every step increased performance noticeably. – ScottWest Mar 14 '13 at 08:40
  • Another nice result of using a scanner with regular expressions is that those expressions can be automatically left factored during the NFA to DFA conversion. While this can be done by hand in Parsec, in practice everyone resorts to backtracking instead, which is less efficient in both time and space. – John F. Miller Apr 22 '13 at 18:01