Lambda Calculus 2 - Church Numbers

I'm reading The Structure and Interpretation of Computer Programs. It's hard - my maths is terrible in comparison to what was expected of Computer Science undergraduates at MIT in the 80s. But I'm learning some things, and one of the things that clicked today near the end of the first section of Chapter 2 was Church numerals.

Counting... without numbers

In the first post I wrote about the lambda calculus we looked at the basic syntax and a simple function that took two numbers and added them together:

λx.λy.x + y

This might all look OK until I tell you that, in the untyped lambda calculus, the only primitive data type is a function.

A function. Not a bit, a byte, a string; not a number - a function.

So we should be a little suspicious of λx.λy.x + y as this + symbol needs to be defined as a function. Fair enough - addition feels like the sort of thing that could easily be a function.

But what would we apply to it? We need a number - like 1 or 2. But we need to make them out of functions.

Wait, what? We need to make numbers out of functions?!

And this is where things start to get weird.

What is a number anyway?

You will now be inducted into a sacred mystery that will allow you to make and understand geeky Lisp jokes on the internet. Be brave.

In a universe with no things - only functions - how would we count? Well, we'd have to do it with functions.

OK, sure - that's not really getting us anywhere - let's take 2 as a concrete example. How do I write a function that represents 2?

Simple - if this were JavaScript, we'd just give it a name:

const two = () => {}

That's cheating! What are these 'names' of which you speak? Are they made of functions too?

The thing is, we don't just want a symbol for 2 - the numeral. What we need is a function that represents, in some way, the very essence of two-ness.

What I'm trying to get across here (without jumping to the solution immediately) is that the representations of numbers in the lambda calculus are not mere symbols; they encapsulate a certain behaviour that we associate with 'number'.

And that behaviour is that of repetition. When we say 'look at those two apples', we're expecting there to be an apple, and then another apple. In Church arithmetic a number is represented by a function that takes two arguments and applies the first to the second n times, where n is the number being represented.

Church Numbers

So much for the theory - let's take a look at some real numbers.1 First up, the number one:

λf.λx. f(x)

We accept a variable called f, we take another one called x, and we call f with x once. We're kinda hoping that f turns out to be a function, but as this is the lambda calculus and everything is a function, we can be ~fairly~ absolutely certain it is.

In JavaScript:

f => x => f(x)

And Scheme:

(lambda (f) (lambda (x) (f x)))

So if that's one, we can probably work out what two is, right?

λfx. f(f(x))

And three?

λfx. f(f(f(x)))

OK, so no peeking now. What's zero?

...

...

λfx. x

It's just ignoring the original function and returning the value it would've been applied to. The function f has been applied to x zero times.
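
You can check this in your favourite language. A quick JavaScript sketch (the x => x + 1 here is just a stand-in for any f):

const zero = f => x => x

zero(x => x + 1)(0) //=> 0 - f was never applied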

Playing around with the computer

I've found two productive ways to play around with the lambda calculus while learning it.

Firstly, and probably more obviously, try playing around with them in your favourite language that has some sort of anonymous function. Say Python - if we were to write three from above we'd have:

three = lambda f: lambda x: f(f(f(x)))

If I want to test this - to see if it does what I think it does - I just need a function to be f:

increment = lambda x: x + 1

and some value it can be repeatedly applied to:

zero = 0

So then I just plug in those values:

three(increment)(zero) #=> 3

We used three variables to hold the values above, but we could just inline them to get to something that looks a little more lambda-y:

(lambda f: lambda x: f(f(f(x))))(lambda x: x + 1)(0) #=> 3

Which translates to:

(λfx. f(f(f(x)))) (λx. x + 1) 0 = 3

We don't have to use 0 and x + 1, however - we could count using any values that behave in the required way.2 For instance, in Scheme:

(define increment (lambda (x) (cons '() x)))

(define zero '())

(((lambda (f) (lambda (x) (f x))) increment) zero) ;=> (())
(((lambda (f) (lambda (x) (f (f x)))) increment) zero) ;=> (() ())
(((lambda (f) (lambda (x) (f (f (f x))))) increment) zero) ;=> (() () ())

Playing around with pen and paper

The second way I like to play with lambdas is with pen and paper. The simplicity of the syntax, and the very few transformations allowed on an expression3, mean that it's possible to do the evaluation yourself. Let's try it with the above (note the second step, where the inner xs get α-converted to a, b and c so we can tell which variable is which):

(λfx. f(f(f(x)))) (λx. x + 1) 0
(λx. (λx. x + 1)((λx. x + 1)((λx. x + 1) x))) 0
(λx. (λa. a + 1)((λb. b + 1)((λc. c + 1) x))) 0
(λa. a + 1)((λb. b + 1)((λc. c + 1) 0))
(λa. a + 1)((λb. b + 1)(0 + 1))
(λa. a + 1)((λb. b + 1) 1)
(λa. a + 1)(1 + 1)
(λa. a + 1) 2
(2 + 1)
3

This is fun to try out, and while it's not much use for an expression as simple as the one above, it becomes pretty vital for me when I want to discover how more complicated expressions work.

In summary, the computer is great for checking that a lambda expression works, but I prefer to get the pen and paper out if I want to get a feel for what's going on - for what makes it work.

But ...

But what about the + and 1 and 0 above? I said that there were only functions in the lambda calculus - aren't we still cheating a little bit?

We are. So in the next post let's define increment, add, multiply and maybe even exponentiation in terms of lambdas. Things are certain to get weirder.


  1. I mean, actually these are the natural numbers including zero, not the real numbers 

  2. I am thoroughly in debt to the amazing book The Little Schemer for the inspiration behind this example. 

  3. α-conversion and β-reduction - see the first post 

Lambda Calculus 1 - Syntax

The word 'lambda' comes up more and more the longer you work as a programmer. There it is as a keyword in Python for an anonymous function. Same again in Scheme. Oh look, there it is in Ruby. Look at the logos for Racket, Clojure, MIT. Lambdas everywhere. The interest/obsession goes back to the mathematical roots of Lisp, specifically Alonzo Church's lambda calculus.

Why?

Church was researching the foundations of mathematics - particularly computation. The notation he came up with is a way of expressing any computation at all - if a computer can do it, it can be written in the syntax of the lambda calculus. But, interestingly for us, it is not concerned with how the computer does it; rather, it just has some simple rules about what a computer can do. It is, if you like, a very simple declarative programming language.

Syntax

The lambda calculus gets its name from its use of the Greek letter lambda - λ - to represent a function that takes a single argument.

After the λ comes the name that that single argument is bound to - say x.

And after that we write a . to say that we're inside the 'body' of the function.

The x is a bound variable - it stands for whatever value the function gets applied to.

And to apply a function to a value - to call it - you just put them next to each other (maybe with some brackets to make it clearer what's the argument and what's the body).

That's it. That's everything in the lambda calculus - it's a syntax for writing about functions of one argument.
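
In fact - and this is my own summary, not anything official - the whole grammar fits in three lines. An expression is either a variable, a function, or one expression applied to another:

e ::= x      a variable
    | λx.e   a function of one argument
    | e e    an application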

So where in JavaScript we have:

x => x + 1

and in Scheme we have

(lambda (x) (+ x 1))

in the lambda calculus syntax we have:

λx.x + 1

Only one argument?

So you might see some limitations here. Only one argument, you may say - what if I need another one? Happily we can just return another function to bind a new argument. Hooray - everything is curried.

So where in JavaScript:

x => y => x + y

and in Scheme:

(lambda (x) (lambda (y) (+ x y)))

in the lambda calculus we have:

λx.λy.x + y

Although usually1 we'd just write:

λxy.x + y

But we would of course remember that, if the function had only one argument applied to it, it would return a function that expected the next argument.
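
Here's that partial application in a quick JavaScript sketch (add is just my name for it):

const add = x => y => x + y

const addFive = add(5) // one argument applied - we get a function back
addFive(3)             //=> 8
add(5)(3)              //=> 8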

α-conversion and β-reduction

These terms do absolutely nothing to dispel the feeling that the lambda calculus is a bit elitist. Look, even more Greek letters - it must be complicated and clever because just writing about it requires me to know how to say α!

Really though, these are just big words for 'renaming' and 'substitution', the basics of which you probably picked up in high school algebra.

'α-conversion' (alpha-conversion) just means that we can change the name of a bound variable in a Lambda expression. So if we've got:

λxy.x + y

We can just change all the xs to as

λay.a + y

And the expression hasn't changed its meaning one iota.2

'β-reduction' (beta-reduction) is a little more complicated - what it means is that, when a function is applied to a value, we can substitute every occurrence of the bound variable with that value. For instance, in JavaScript:

(x => y => x + y)(5)

under β-reduction becomes

y => 5 + y

We unwrap the outer function and replace occurrences of its variable with the supplied value. In lambda land:

(λxy. x + y) 5

Becomes

λy. 5 + y

(I threw some parentheses around that lambda expression to make it clear that the whole function was being applied to the 5, and to separate it from the body x + y - there are no hard and fast rules as far as that goes.)

Next up - numbers made of functions!


  1. To save on the world's dwindling supply of λ

  2. Aaaargh! Another Greek letter! 

async/await in JavaScript in Five Minutes

When I first heard about async/await in JavaScript I was quite excited. Now I know about it, I'm not. Let me explain: instead of doing some Lisp this evening I decided to find out what the async/await fuss was about, and I think I can put it in a single sentence.

async/await is syntactic sugar to make Promises look more sequential

That's it. If you have the most basic understanding of Promises in JavaScript then you should be able to use async/await in about thirty seconds. There is nothing surprising here, which is a relief.

async

Think of this as an annotation to a function - a way of saying that, within this lexically scoped block, we'll be living in async/await world.

async function asyncAwaitLand () {
    // blah blah fishcakes
}

await

In async/await world, .then() is spelt await. And it's another annotation, this time to an expression. What larks. Here it is in Promise-speak:

function normalPromiseLand () {
    Promise.resolve('some value')
        .then(theResultOfAPromise => console.log(theResultOfAPromise))
}

And here's the same thing in async/await nuspeak:

async function asyncAwaitLand () {
    const theResultOfAPromise = await Promise.resolve('some value')
    console.log(theResultOfAPromise)
}

Playing nicely with Promises

async functions return Promises, so if you want to start chaining them all together be my guest:

const arrowAsync = async () => {
    return 'the async annotation works with anonymous arrow functions too'
}

arrowAsync()
    .then(string => console.log(string))

Errors and Rejects

But how do you .catch() those long-awaited Promises when they go horribly wrong? Would it surprise you at all if I told you that you just use the normal sequential JavaScript error handling construct of try/catch?

function rejectedPromise () {
    return Promise.reject(new Error('boom'))
}

async function asyncAwaitLand () {
    try {
        const nope = await rejectedPromise()
        console.log('will never happen', nope)
    } catch (error) {
        console.log('I caught a rejected Promise:', error.message)
    }
}

So how do you reject() in an async function? You can guess, right? You just throw like it's sequential code.

async function throwingAsync () {
    throw new Error('boom')
}

function promiseLand () {
    throwingAsync()
        .then(nope => console.log('will never happen', nope))
        .catch(error => console.log('I caught an async throw:', error.message))
}

Are you still reading this?

async/await is boring - thank goodness. A pretty piece of syntactic sugaring that extends the language not one jot. If you understand Promises then you should already have some ideas for how you're going to use it - more of a way of tidying up some ugly looking bits of code than a complete rethink of your codebase.
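
For instance - with a couple of made-up helpers, purely for illustration - here's the same pipeline in both styles:

function fetchUser (id) {
    return Promise.resolve({ id, name: 'Ada' })
}

function fetchPosts (user) {
    return Promise.resolve([user.name + "'s first post"])
}

// Promise-speak
function promiseChain () {
    return fetchUser(1)
        .then(user => fetchPosts(user))
        .then(posts => console.log(posts))
}

// the same pipeline, reading top to bottom in async/await world
async function awaitChain () {
    const user = await fetchUser(1)
    const posts = await fetchPosts(user)
    console.log(posts)
}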

If you don't understand Promises - stop reading articles about async/await and start reading articles about Promises.

Book Review: Clojure for the Brave and True

One line review: Clojure for the Brave and True by Daniel Higginbotham is a pretty good intro to Clojure, if you can get past the undergraduate humour.

Clojure for the Brave and True could be thought of as part of a loose trilogy of books, including Land of Lisp and Realm of Racket,1 that explore modern Lisps in a light-hearted way with cartoons.

The first problem with this comparison is that Brave & True is nowhere near as good as Land of Lisp - Barski's jokes are funnier, his cartoons are better, the content is both deeper and broader. Land of Lisp has a chapter called "Let's build a Web Server from Scratch" and it's not lying. Whereas Brave & True won't even show you the ropes on something like Compojure.

The best chapters in Brave and True - which are also the most useful ones - are those where you're walked through a piece of code line by line. The 'Peg thing' game is a great example of an interactive command-line game written using a series of pure functions. This chapter gives you a real idea of how to get some Clojure code doing stuff in the world - a practical toolkit to get you writing something.

The other great thing about this book is its opinionated introduction to editors. I struggled mightily setting up something to do my Lisps in, having gone through a variety of Vim and Emacs setups with every damn plugin you can imagine. Brave and True has an entire chapter dedicated to getting a decent Emacs environment (you can download the configuration), complete with Cider and Paredit. It's not going to teach you everything you want to know, but once you're done you will be immediately productive and able to get along with the more serious task of actually writing some Clojure.

But I often found the sense of humour in this book grating. It is as if I were forced to hang around with my fourteen-year-old self.2 The one who'd memorized a lot of Monty Python and The Hitchhiker's Guide to the Galaxy and thought that quoting it back at my friends was the height of sophisticated humour. The examples feel contrived to fit the humour, often to the detriment of the point being made.

The poorest chapters are the ones where an idea is introduced but not fully explored. When protocols and records are introduced, it would be nice to understand how they are used to leverage polymorphism in something more practical than the contrived Richard-Simmons-as-Werewolf examples, which felt even less useful than the usual Object Oriented Guide to Animal Taxonomy we're forced to endure.

Brave and True is a good book, and is worth buying and reading (and if you want to sample the content it's all available on the book's excellent website). It's filled me with confidence to write Clojure (probably before other languages) and to read more books on Clojure. I just wish that it had spent less time crapping around with spurious examples and more time showing me how and why Clojure is the best.

Now I'm going to read my favourite introduction to Lisp again (keep scrolling) and maybe finish Land of Lisp.


  1. Guess they couldn't find an appropriate name for an area starting with 'C'. Castle of Clojure? Continent? Cave? 

  2. Bah, OK. My 20 year old self too. I can still sing all of The Philosophers' Song, I just know I shouldn't. 

Modularity is its own reward

A while back a friend and I were talking about programming. They'd recently taken it up in the job they were doing and were relatively fresh to the discipline. Asking for a code review from a colleague, they'd been told that their Python was pretty good but that it could do with breaking out into smaller functions.

My friend couldn't see the point of this - the code worked, it did the job, what's the problem? I had a crack at explaining why. Didn't do very well.

More recently I was getting some feedback on something small I'd written. It was, of course, awesome - like all the code I write.1 They said they liked it, but that the modularity wasn't entirely necessary because the individual modular parts wouldn't be reused.

So this is just a scratch piece to try to explain why I think modularity is the most important thing.

Oh and the title is an homage to my favourite xkcd comic:

tail recursion is its own reward

Wait, modularity?

When I say 'modular', I mean small and isolated and independent. A class that's a few lines long, the one line method in Ruby, the short function. At a larger level I mean, well, larger small things. A file with one class in it, or one function in it. Maybe I mean microservices. Maybe not.

Look, I mean small things. That's all. But why do I think that they're their own reward? Well, I don't2 - but I think that, most of the time, more good things come from keeping the code small than from letting it grow large. Such as...

Easier to Test

What sort of tests do you love? When there are hundreds of them - which ones make you happy inside? In my (admittedly limited) experience, it tends to be the ones that run really quickly and don't randomly fail. Sure, the ones that exercise the whole system are nice and necessary, but the ones that make me smirk a little are the unit tests that whizz by in the blink of an eye.

And in order to have those fast little tests, you need small little bits of code to test.

TDD makes us write the tests first - and the easiest tests to write are the ones that cover single, simple ideas that you want to implement. TDD wants us to write small tests that consequently should lead us to write small pieces of code.

Performing TDD produces code with tests - this is a given. But I find that people celebrate this more than what I think is the bigger prize: you have been forced into writing your code modularly, bringing with it other and possibly greater advantages.

Easier to Comprehend

This is probably the most important one. If your code is small and independent, then there is a much higher chance of you and everyone else understanding what it does. If a single function / class / method is longer than a screen, I would go so far as to say that it's near impossible to understand what it does.

If you're programming in small, easy to comprehend3 parts, then there is more chance that you'll be understood - if only because you'll have had to give them names. Possibly bad names, but names all the same. Names you'll be able to read and know what they mean and so what the parts do. Or at the worst, names that you read, don't understand, and then read the code, understand that because it's short, and then rename it with something (hopefully) better.

Easier to Reuse

Yes, reuse is good - it's a good benefit of small pieces of code. You write that Fibonacci function, you can use it everywhere that you need a Fibonacci number. It is part of the wonderful magic of small, independent things.
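
(A sketch for concreteness - naive and slow, but small and independent:)

const fib = n => n < 2 ? n : fib(n - 1) + fib(n - 2)

fib(10) //=> 55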

Whether you actually reuse a piece of code is often by the by - that can come later, when you have a better idea about the thing you're trying to build.

This all sounds familiar...

Look, if this all sounds familiar then you've probably heard it before - hell, I only worked out what I was talking about when I got to the end of writing it. It's basically the first two bullet points of the Unix philosophy - do one thing well, and integrate with other programs.

But as we're not writing programs - we're just writing parts of programs - we don't have to worry about the requirement for generalization (that can come later, if at all). That integration is eased internally if we make the modules truly independent of each other.

Like Lego blocks - some are small and general (think a 3x4 flat piece), or small and specific (think a laser gun), but they're all small and easy to integrate with each other.


  1. For 'awesome' read 'adequate'. 

  2. Barely anything is its own reward. Maybe pizza. 

  3. From the Latin com together, and prehendere grasp. It's literally easier to hold stuff together when it's small.