Establishing the terminology
What do I mean by that gorgeous word, linguistic abstraction, which I have referred to twice in the introduction? Basically, it is any language “feature”: a variable, an interface, a function. And guess what a metalinguistic abstraction means? Recall that the prefix “meta” simply means that the thing is used to deal with something of a similar nature: a metaprogram manipulates other programs, metadata describe other data, a metagrammar specifies other grammars, and so on. From this, we conclude that a metalinguistic abstraction is a linguistic abstraction used to deal with other linguistic abstractions. I am aware of only two kinds of them: code generation (or macros, or metaprogramming, whichever term you prefer 3) and a type system.
Why are macros “meta”? Well, macros can do pretty much anything with code: they can accept it, they can transform it, they can emit it… Remember how Rust allows you to execute arbitrary instructions inside procedural macros, remember the incremental TT muncher pattern, or how you can brutally imitate type polymorphism through the C preprocessor (Patrick Pelissier, n.d.; Tyge Løvset, n.d.; Leonardo Vencovsky, n.d.). Whilst macros can manipulate other code, other code cannot manipulate macros 4 – this is the reason why macros are “meta”.
Why are types “meta”? Much of what you usually accomplish with metaprogramming can be lifted to sufficiently expressive types 5. Returning to our poor C preprocessor: in Rust, you can simply use generics instead of instantiating type-specific code by hand. Or you can go insane and play with type lists instead of (ab)using the compile-time macro collections of Boost/Preprocessor. So types are capable of metaprogramming to some extent (Alexis King, n.d.; Wikipedia, n.d.b; Haskell Wiki, n.d.; Will Crichton, n.d.b, n.d.a; Shea Leffler, n.d.b, n.d.a; Paho Lurie-Gregg and Andre Bogus, n.d.; Szymon Mikulicz, n.d.; Edwin Brady, n.d.) – this is why they are “meta”.
The presence of either of these tools in a language enables us to extend it with custom concepts in an embedded manner, i.e., without the intervention of third-party utilities. However, today I will discuss only the syntax business – macros.
Having the terminology established, let us dive into the pragmatics!
Syntactical consistency
datatype(
    BinaryTree,
    (Leaf, int),
    (Node, BinaryTree *, int, BinaryTree *)
);
int sum(const BinaryTree *tree) {
    match(*tree) {
        of(Leaf, x) return *x;
        of(Node, lhs, x, rhs) return sum(*lhs) + *x + sum(*rhs);
    }
    return -1;
}
What is this? This is what good old C looks like with Datatype99, a library that provides full support for algebraic data types. Pay attention to the pattern matching syntax. Does it feel alright? Does it feel natural, like it has always been there? Absolutely. Now gaze upon this imaginary piece of code:
int sum(const BinaryTree *tree) {
    match(
        *tree,
        {
            of(Leaf, (x), return *x),
            of(Node, (lhs, x, rhs), return sum(*lhs) + *x + sum(*rhs)),
        });
    return -1;
}
I ask you the same question: does it feel alright? Does it feel natural, like it has always been there? Absolutely NOT. While it might look fine in another language, it looks utterly weird in C. But what is the essential difference between these two code snippets, the difference that makes the former look proper and well-formed, whereas the latter looks like a deformed creature? Syntactical consistency.
By syntactical consistency, I understand the degree to which the grammar of a particular meta-abstraction (e.g., the macros match & of) conforms to, or coincides with, the grammar of a host language. Recall that in C-like languages, we often see constructions of the form <keyword> (...) <compound-statement> 6:
for (int i = 0; i < 10; i++) { printf("%d\n", i); }
while (i < 10) { printf("%d\n", i); i++; }
if (5 < 10) { printf("true"); }
- and more…
But we do not see
for ((int i = 0; i < 10; i++), { printf("%d\n", i); });
while (i < 10, { printf("%d\n", i); i++; });
if (5 < 10, { printf("true"); });
- etc.
Got the pattern? The proper syntax of match coincides with the syntax of the host language, C in our case, whereas the latter one does not. Another example:
#define State_INTERFACE                \
    iFn(int, get, void *self);         \
    iFn(void, set, void *self, int x);

interface(State);

typedef struct {
    int x;
} Num;

int Num_State_get(void *self) {
    return ((Num *)self)->x;
}

void Num_State_set(void *self, int x) {
    ((Num *)self)->x = x;
}

impl(State, Num);
This time you see pure ISO C99 augmented with Interface99, a library that provides the software interface pattern. Notice that the function definition syntax remains the same (albeit iFn is somewhat less common), and impl just deduces these definitions (Num_State_get & Num_State_set) from the context. Now consider this:
impl(
    (State) for (Num),
    (int)(get)(void *self)({
        return ((Num *)self)->x;
    }),
    (void)(set)(void *self, int x)({
        ((Num *)self)->x = x;
    }),
);
This macro impl does not follow the syntax of C. This is why it looks so odd.
Both alternatives have the same semantics and the same functionality. The difference lies only in the syntax. Always try to mimic the syntax of your host language, and you should be fine. Do not try to alter common syntactical forms like a function/variable definition. This is what I call syntactical consistency 7.
The bliss of Rust: Syntax-aware macros
While C/C++ macros work with preprocessing tokens (ISO C, n.d.), Rusty macros work with concrete syntax trees, and sometimes with language tokens. This is cool because they let you imitate the syntax of Rust: you can parse function definitions, structures, enumerations, or pretty much anything! Consider tokio::select!:
tokio::select! {
    v1 = (&mut rx1), if a.is_none() => a = Some(v1.unwrap()),
    v2 = (&mut rx2), if b.is_none() => b = Some(v2.unwrap()),
}
See? The <something> => <something> syntax is much like native Rusty pattern matching. Because of this, the syntax looks very familiar, and even if you are not yet acquainted with the macro, you can already roughly understand what is happening.
Another example is the derive macros of serde_json:

#[derive(Serialize, Deserialize, Debug)]
struct Person {
    name: String,
    age: u8,
    phones: Vec<String>,
}

Here, Serialize & Deserialize are indeed macros. They parse the contents of struct Person and derive the corresponding traits for it. You do not need to adjust the definition of the structure because the syntax is shared, and this is awesome. If I were designing a new language of mine and there was a need for macros, I would definitely endeavour to make them work nicely with the ordinary syntax.
The bliss of Lisp: Why S-expressions are so hot
“Are you quite sure that all those bells and whistles, all those wonderful facilities of your so-called powerful programming languages, belong to the solution set rather than the problem set?”
Rust’s syntax is not simple. As quite often happens in software engineering, a programming language’s grammar is a trade-off:
A complicated syntax allows the code to be more concise; however, it drastically reduces the number of people able to produce reliable macros.
A simple syntax can sometimes be a bit wordy or superfluous, but it enables ordinary developers to write reliable macros. With a simple syntax, the chances of messing up with syntax peculiarities are much lower.
Citing David Tolnay:
“The macro author is responsible for the placement of every single angle bracket, lifetime, type parameter, trait bound, and phantom data. There is a large amount of domain knowledge involved and very few people can reliably produce robust macros with this approach.”
As opposed to Rust, we have a solution in a completely different direction – s-expressions. Instead of over-sophisticating the language grammar with each subsequent release, some people decide to keep the grammar trivial, always. This approach has a bunch of far-reaching implications, including simplified IDE support and language analysis in general. Metaprogramming becomes more malleable too, because you only need to handle a single homogeneous structure (a list); you do not need to deal with the intimidating variety of syntactic forms your host language accommodates.
To come back to our muttons, the very nature of s-expressions facilitates syntactical consistency. Consider this: if there are only s-expressions and nothing more, you can imitate any language item with simple macros – everything will look the same. Even with the so-called “powerful” Rusty macros, we cannot do this:
delegate!(self.inner) {
    pub fn is_empty(&self) -> bool;
    pub fn push(&mut self, value: T);
    pub fn pop(&mut self) -> Option<T>;
    pub fn clear(&mut self);
}
The only way is to write like this:
delegate! {
    to self.inner {
        pub fn is_empty(&self) -> bool;
        pub fn push(&mut self, value: T);
        pub fn pop(&mut self) -> Option<T>;
        pub fn clear(&mut self);
    }
}
Adapted from Kobzol/rust-delegate, a library for automatic method delegation in Rust.
Clearly less nifty. The match control flow operator can do that, so why cannot your “powerful” macros? Look:
let x = Some(5);
let y = 10;

// match!(x) { ... } ? Hmm...
match x {
    Some(50) => println!("Got 50"),
    Some(y) => println!("Matched, y = {:?}", y),
    _ => println!("Default case, x = {:?}", x),
}
Adapted from the chapter of TRPL about pattern matching.
Even if it could be fixed, this example still clearly demonstrates the communication gap between the main syntax and user-defined macros in Rust: sometimes, due to its multifaceted grammar, the language simply does not allow us to express things naturally. One possible solution is leveraging an adaptive grammar:
“An adaptive grammar is a formal grammar that explicitly provides mechanisms within the formalism to allow its own production rules to be manipulated.”
Basically, what this means is that you can specify your own syntactic forms (like match or if) right inside a source file, and a built-in parser will do the trick. Idris supports a feature called syntax extensions, which is, to the best of my understanding, pretty much like an adaptive grammar; believe it or not, the if ... then ... else syntax is not built into the Idris compiler, but is rather defined via the ifThenElse function:

ifThenElse : (b : Bool) -> (t : Lazy a) -> (e : Lazy a) -> a
ifThenElse True  t e = t
ifThenElse False t e = e
Which is invoked by the following syntactic rule:

syntax if [test] then [t] else [e] = ifThenElse test t e
Similar syntactical constructions can be defined in the same way. No need to wait a couple of years until language designers decide to ship a new release – do it right here and right now. Yes, you would be right to say that Rust is extensible, but the thing is that its extensibility is still very limited 8, and sometimes unpleasant 9.
Extending good old C
This is all exciting and fun, but how do we apply this knowledge in practice? I have an answer. Rather a long answer, full of peculiar details and techniques.
I suggest you start with the popular article by Simon Tatham about metaprogramming custom control structures in C 10. If you are only interested in a working solution, consider Metalang99 11 with its statement chaining macros. Seeing how pattern matching works in Datatype99 (hirrolot, n.d.) can also give you some insight.
Final words
Some languages are more malleable to user extension than others. Some employ adaptive grammars (Idris), some employ syntax-aware macros (Rust), and some employ Lisp-style s-expressions. Surely, there are a lot of alternatives in the design space, each with its own benefits and downsides.
The intent of this blog post was to advocate the principle of syntactical consistency.
I encourage you to mimic the syntax of the host language when writing macros, to make your code look more eye-pleasing and less like a malevolent beast.
I encourage you to extend, not to alter.
References
“Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.” – Greenspun’s tenth rule, an amusing quote with which I do agree a lot.↩︎
I wish I had more experience with limitless Lisp-style user extension, as in Racket or Common Lisp. Maybe my post would have been more profound then.↩︎
There is also runtime reflection, though I am not sure whether it is a special kind of metaprogramming or not. Maybe JIT macros could outperform Java-style runtime reflection? Oh my god, that is insane…↩︎
Unless macros generate other macros! C/C++ macros cannot do that, while Rusty and Lispy macros can.↩︎
It might not be formally correct… if your metaprogramming system is Turing-complete, how would you lift everything into a logically consistent type system, as in Idris? Surely, this is out of the scope of this blog post.↩︎
A compound statement is a sequence of statements put into curly braces.↩︎
C99-Lambda is yet another terrifying example of abusing the preprocessor. It attempts to alter the native function definition syntax, and therefore it looks so odd.↩︎
On the other hand, limitless extensibility tied to a complicated syntax would make developers mess up reliable macros again. What a disappointment! Returning to s-expressions…↩︎
For example, the syntax of match is gradually evolving over time. Not so long ago, the core team announced “or” patterns. With an adaptive grammar, this feature could be implemented in libraries.↩︎
To the best of my knowledge, Simon Tatham was the first to formulate the term statement prefix. He described precisely how to build custom control flow operators via regular #define macros and make them look natural.↩︎
Metalang99 is an advanced metaprogramming system for C99. It is implemented as a purely functional programming language, with partial applications, recursion, algebraic data types, cons-lists, and all that stuff.↩︎