As much as we might like to think that data types are fixed in place, data types are actually just as mutable as the values they categorize.
Consider the following:
Given: A type exists, by the name of `Int`.
Given: Another type exists, by the name of `List`.
How, then, could we describe a type named `List<Int>`? Is it a derivative of `Int`? Is it a derivative of `List`? Is it its own separate type? Is `List` even a type at all?
Similarly, consider this:
Given: A type named `Int`.
Given: The type `Int, Int` is a tuple of two `Int`s.
So what, then, is `Int, Int, Int`? Is it a tuple of three `Int`s? Is it a tuple of one `Int` and one `Int, Int`? Is it a tuple of one `Int, Int` and one `Int`?
The fact of the matter is that types, just like expressions, can be, and frequently are, combined into new types. When types are written out as text, a parser has to establish rules about operators, precedence, and the order in which type names are listed before it can represent them in memory.
So why not parse them the same way as expressions?
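To make the comparison concrete, here is a rough sketch (in TypeScript, with node names that are purely hypothetical) of what two competing readings of `Int, Int, Int` might look like as trees in memory:

```ts
// A hypothetical AST for type expressions. These node shapes are just
// illustrative; a real compiler's own types will differ.
type TypeNode =
  | { kind: 'named'; name: string }                        // Int, List
  | { kind: 'generic'; base: TypeNode; args: TypeNode[] }  // List<Int>
  | { kind: 'tuple'; members: TypeNode[] };                // Int, Int

// "Int, Int, Int" read as one flat tuple of three members...
const flat: TypeNode = {
  kind: 'tuple',
  members: [
    { kind: 'named', name: 'Int' },
    { kind: 'named', name: 'Int' },
    { kind: 'named', name: 'Int' },
  ],
};

// ...versus "Int, (Int, Int)": a pair whose second member is itself a pair.
const nested: TypeNode = {
  kind: 'tuple',
  members: [
    { kind: 'named', name: 'Int' },
    {
      kind: 'tuple',
      members: [
        { kind: 'named', name: 'Int' },
        { kind: 'named', name: 'Int' },
      ],
    },
  ],
};
```

Which of those trees the parser builds is exactly the kind of decision that operator precedence and associativity settle for ordinary expressions.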
Pratt parsing is a relatively simple way to parse mathematical and other operator-based expressions.
Bob Nystrom’s article on Pratt Parsing (linked above) is probably the best and easiest article on the topic. I suggest reading it!
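For context, here is a minimal sketch of the technique itself, loosely following the structure Nystrom describes. The token shape, the precedence table, and the string output are assumptions made just for this example:

```ts
// A minimal Pratt parser for arithmetic. Instead of building an AST,
// it returns a parenthesized string so the grouping is easy to see.
type Token = { type: 'number' | 'op' | 'eof'; text: string };

const PRECEDENCE: Record<string, number> = { '+': 1, '-': 1, '*': 2, '/': 2 };

class Parser {
  private pos = 0;
  constructor(private tokens: Token[]) {}

  private peek(): Token { return this.tokens[this.pos]; }
  private next(): Token { return this.tokens[this.pos++]; }

  // parseExpression(minPrec) keeps consuming infix operators for as long
  // as they bind more tightly than minPrec.
  parseExpression(minPrec = 0): string {
    // "Prefix" step: the simplest possible one, a single number.
    let left = this.next().text;

    // "Infix" step: loop while the next operator is strong enough.
    while (this.peek().type === 'op' &&
           PRECEDENCE[this.peek().text] > minPrec) {
      const op = this.next().text;
      // Recurse with the operator's own precedence, so higher-precedence
      // operators to the right get grouped first.
      const right = this.parseExpression(PRECEDENCE[op]);
      left = `(${left} ${op} ${right})`;
    }
    return left;
  }
}

// 1 + 2 * 3  ->  (1 + (2 * 3))
const tokens: Token[] = [
  { type: 'number', text: '1' }, { type: 'op', text: '+' },
  { type: 'number', text: '2' }, { type: 'op', text: '*' },
  { type: 'number', text: '3' }, { type: 'eof', text: '' },
];
console.log(new Parser(tokens).parseExpression());
```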
The Pratt Parsing technique can be applied to types in the same way as it can be to expressions.
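For example, here is a hedged sketch of how the same loop might handle type syntax, assuming we treat `,` as a low-precedence infix operator that builds tuples and `<…>` as a higher-precedence operator that attaches generic arguments. The precedence choices and node shapes are mine, not a fixed rule:

```ts
// The same Pratt-style loop, aimed at type expressions instead of arithmetic.
type TypeNode =
  | { kind: 'named'; name: string }
  | { kind: 'generic'; base: TypeNode; args: TypeNode[] }
  | { kind: 'tuple'; members: TypeNode[] };

const COMMA_PREC = 1; // ',' builds tuples, binds loosely
const ANGLE_PREC = 2; // '<...>' attaches generic args, binds tightly

class TypeParser {
  private pos = 0;
  constructor(private tokens: string[]) {}

  private peek(): string | undefined { return this.tokens[this.pos]; }
  private next(): string { return this.tokens[this.pos++]; }

  parseType(minPrec = 0): TypeNode {
    // Prefix: a bare type name like Int or List.
    let left: TypeNode = { kind: 'named', name: this.next() };

    for (;;) {
      const tok = this.peek();
      if (tok === ',' && COMMA_PREC > minPrec) {
        // ',' parsed like a left-associative infix operator, so
        // Int, Int, Int flattens into a single three-member tuple.
        this.next();
        const right = this.parseType(COMMA_PREC);
        left = left.kind === 'tuple'
          ? { kind: 'tuple', members: [...left.members, right] }
          : { kind: 'tuple', members: [left, right] };
      } else if (tok === '<' && ANGLE_PREC > minPrec) {
        // '<' behaves like a postfix "call" operator on the type to its left.
        this.next();
        const args: TypeNode[] = [this.parseType(ANGLE_PREC)];
        while (this.peek() === ',') {
          this.next();
          args.push(this.parseType(ANGLE_PREC));
        }
        this.next(); // consume '>'
        left = { kind: 'generic', base: left, args };
      } else {
        return left;
      }
    }
  }
}

// "List<Int>, Int" -> a tuple of (List<Int>, Int)
const t = new TypeParser(['List', '<', 'Int', '>', ',', 'Int']).parseType();
console.log(JSON.stringify(t, null, 2));
```

Under these precedence choices, `Int, Int, Int` comes out as one flat three-element tuple, and the generic arguments in `List<Int>, Int` group before the comma does; whatever rules you pick, the point is that the parser states them explicitly, just as it would for `+` and `*`.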
More on compilers in future articles…
Follow me on Twitter: https://twitter.com/thosakwe