Lambda Logic

layout: post
title: "Lambda Calculus 3 - Logic with Church Booleans"
date: 2017-10-21 20:54:21
tags:
  - Mathematics
  - Logic
  - Functional Programming
  - Lambda Calculus
published: true

I found Church numbers pretty tough, and I'm still not sure I fully understand them. But this post should be a little bit easier. I promised logic for this post, and logic I will give you. But not right now. First, it's...

Data structures with functions

As modern 21st century software developers, we're used to a strong divide between data and process. Even with object orientation we consider an object to be made of things it knows (the data) and things it does (the methods).1

I always picture my programs as big old conveyor belts, where my program-workers each beat the hell out of a piece of data as it goes past, until it comes out the other end as a shiny new BMW - or at least a shiny new piece of JSON.

Meanwhile, here in the lambda calculus... well, we've got numbers for sure. But how can we get data structures like a piece of shiny JSON? Or even just a list?

Take heart - through the lambda, all things are possible!2

Let's try the simplest of all data structures - the pair. Simple, sure - but powerful. Every good Lisper knows that they can build any data structure you can conceive using enough pairs. If Lisp had been written by Archimedes, he'd say "Give me a place to stand and enough cons cells and I shall move the earth".

To get a pair as a data structure up and flying, we need three functions. One to make a pair out of two arguments, one that returns the first item in the pair, and another that returns the second item. Lispers will say "Ah! cons, car and cdr!", but we will say $pair$, $first$ and $second$.

First $pair$:

$$ pair\quad \equiv \quad \lambda p.\lambda q.\lambda f.\ f\ p\ q $$

We take the two things we're pairing - that much makes sense - but then we take one more argument and apply it, first to the first argument, and then the result of that to the second argument.

What's going on?

We want something like $first\ aPair$ to give us back the first item that we gave to the pair. So the final $f$ in $pair$ is going to be offered both $p$ and $q$ and left to decide which one it wants.

$$ first\quad \equiv \quad \lambda pair. pair\ (\lambda a.\lambda b.\ a) $$

$first$ takes a pair, and then gives that pair a function that takes two arguments and returns... well, the first one. Given that, it's a doddle to write $second$:

$$ second\quad \equiv \quad \lambda pair. pair\ (\lambda a.\lambda b.\ b) $$

Same again, but this time we evaluate to the second of the arguments.

Feel free to stick it into a programming language and play with it. Here it is in Scheme:

(define pair
    (lambda (p)
        (lambda (q)
            (lambda (f)
                ((f p) q)))))
(define first
    (lambda (p) (p (lambda (a) (lambda (b) a)))))
(define second
    (lambda (p) (p (lambda (a) (lambda (b) b)))))
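A quick check at the REPL, assuming the definitions above (my-pair is just an illustrative name):

(define my-pair ((pair 1) 2)) ; everything is curried, so we apply twice

(first my-pair)  ;=> 1
(second my-pair) ;=> 2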

Booleans

What is truth? said jesting Pilate, and would not stay for an answer. — Francis Bacon, Of Truth

Well, I hope you do stay for an answer. And there is an answer, at least for the concerns of the lambda calculus, but it's not going to be particularly life altering. In fact, it's probably going to remind you most of the solution to what a number is.

Numbers... AGAIN!

When we defined numbers a few posts ago, I was being fairly adamant that the best way to encode a number using functions would be to count the number of applications of a function. Remember - $zero$ was no applications, $one$ was one, etc.

But this ain't necessarily so - we could implement numbers using the definition of pairs above:

$$ zero \quad \equiv \quad \lambda x.\ x $$

$$ succ \quad \equiv \quad \lambda n.\ pair\ false\ n $$

Here we've defined $zero$ as the identity function, and $succ$, the successor, as the pairing of $false$ (which we'll define properly later - any placeholder value would do) with the previous number. Each number is now 'counting' using the number of times that $zero$ has been paired up with $false$.
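Here's a rough sketch of that encoding in Scheme, reusing pair and second from above, with Scheme's built-in #f standing in for the $false$ we haven't defined yet:

(define zero (lambda (x) x))             ; the identity function
(define succ (lambda (n) ((pair #f) n))) ; pair #f with the previous number

(define one (succ zero))
(define two (succ one))

;; 'second' unwraps one layer of pairing, giving us a
;; predecessor function almost for free
(eq? (second two) one) ;=> #t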

We can now go on to define other functions around this implementation - and we will - but the key thing I'd like to stress is that what makes a number a number isn't really what it is, but rather how it behaves - how it behaves when being used with other functions like $add$ and $multiply$.

Let's use that insight to imagine what $true$ and $false$ might be.

if ... then ... else

Every programming language I've ever worked in has some sort of if expression or statement - a way of choosing one bit of code or another based on whether something was true or false. You know:

if (1 + 1 == 2)
    puts "One and one is two!"
else
    puts "Maths is broken!"
end

We could think of if as being a function in the lambda calculus:

$$ if \quad \equiv \quad \lambda bool.\lambda t.\lambda f.\ \ldots $$

This is fine, but gets us nowhere. But what if there were two different functions - one we used for true booleans, and one we used for false ones? Yes, I know, that would make no sense - you'd have to know which one to use. But humour me.

$$ if\text{-}true \quad \equiv \quad \lambda bool.\lambda t.\lambda f.\ t $$

$$ if\text{-}false \quad \equiv \quad \lambda bool.\lambda t.\lambda f.\ f $$

We're not even using the boolean any more - we're just saying that if the boolean is true, we evaluate to the first argument, and if it's false, we evaluate to the second argument.

And if we're not using the boolean, we can get rid of it altogether.

$$ true \quad \equiv \quad \lambda a.\lambda b.\ a $$

$$ false \quad \equiv \quad \lambda a.\lambda b.\ b $$

And there we have it. We can just say that $true$ is the function that returns the first, and $false$ is the function that returns the second.

Wait, can we? Well, why not? All we really need is a function that will signal one of two things - call them true and false. The 'signal' we've chosen is evaluating to either the first or the second of the arguments it's applied to. Who cares how truth works - this is a mechanism that does what it needs to do.
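Sticking with Scheme, the whole of boolean-land is two definitions (the quoted symbols are just to make the output readable):

(define true  (lambda (a) (lambda (b) a)))
(define false (lambda (a) (lambda (b) b)))

((true 'yes) 'no)  ;=> yes
((false 'yes) 'no) ;=> no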

Truth Tables

And

So now we've got truth going, let's have some fun defining some boolean operations. First, an easy one - $and$. And once again we're going to use the behaviour of $and$ to give us a clue as to the implementation.

What's the behaviour? This might be easier to work out if we construct a truth table. What's a truth table, I hear you cry? Well, in logic we can draw up a table showing the truth or falsity of a proposition (a sentence that is either true or false) based upon the truth or falsity of the propositions from which it is composed.

The truth table is just the exhaustive table of true and false values that can exist in the proposition, along with the resulting truthfulness of the overall proposition. A proposition involving 'and' will be made of two sub-propositions - the two being 'anded' together. Traditionally these are written as '$P$' and '$Q$' - and who are we to disagree with tradition? The symbol '$\land$' is often used for 'and', so we'll do the same here. Finally, true and false will be '$T$' and '$F$'.

$$ \begin{array}{ c c | c } P & Q & P \land Q \\ \hline T & T & T \\ T & F & F \\ F & T & F \\ F & F & F \end{array} $$

What can we learn from this? Well, two things:

  • If $P$ is false, then the proposition is always false.
  • If $P$ is true, then the proposition has the same value as $Q$.

So we could say something like "if $P$ then $Q$ else $false$". Which can be written quite simply as:

$$ and \quad \equiv \quad \lambda p.\lambda q.\ p\ q\ false $$

Or, even more concisely:

$$ and \quad \equiv \quad \lambda p.\lambda q.\ p\ q\ p $$

As, if $p$ is false, we can just evaluate to $p$ itself - $p$ is $false$, after all.

Or

'Or' is represented by '$\lor$':

$$ \begin{array}{ c c | c } P & Q & P \lor Q \\ \hline T & T & T \\ T & F & T \\ F & T & T \\ F & F & F \end{array} $$

The pattern here should be clearer after doing $and$:

  • If $P$ is true, then the proposition is always true.
  • If $P$ is false, then the proposition has the same value as $Q$.

Which can be written in lambdas as:

$$ or \quad \equiv \quad \lambda p.\lambda q.\ p\ p\ q $$

Not

'Not' is nice and short as a truth table. We will use $\lnot$ to represent it:

$$ \begin{array}{ c | c } P & \lnot P \\ \hline T & F \\ F & T \end{array} $$

We just need to flip $P$ around - if it was true (returning the first argument), we make it return false (return the second argument), and vice versa.

Like this:

$$ not \quad \equiv \quad \lambda p.\lambda a.\lambda b.\ p\ b\ a $$
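Here are all three in Scheme - prefixed with church- purely to avoid colliding with Scheme's built-in and, or and not - along with a little helper to translate the result back into a native boolean we can read:

(define church-and (lambda (p) (lambda (q) ((p q) p))))
(define church-or  (lambda (p) (lambda (q) ((p p) q))))
(define church-not (lambda (p) (lambda (a) (lambda (b) ((p b) a)))))

;; display a Church boolean as a native #t/#f
(define to-bool (lambda (p) ((p #t) #f)))

(to-bool ((church-and true) false)) ;=> #f
(to-bool ((church-or true) false))  ;=> #t
(to-bool (church-not false))        ;=> #t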

if ... then

Implication - 'if $P$ then $Q$' - is usually written '$\rightarrow$':

$$ \begin{array}{ c c | c } P & Q & P \rightarrow Q \\ \hline T & T & T \\ T & F & F \\ F & T & T \\ F & F & T \end{array} $$

Reading it off: if $P$ is true, the proposition has the same value as $Q$; if $P$ is false, the proposition is always true - which is exactly the value of $\lnot P$.

$$ implies \quad \equiv \quad \lambda p.\lambda q.\ p\ q\ (not\ p) $$

if and only if

'If and only if' - the biconditional - gets the symbol '$\leftrightarrow$':

$$ \begin{array}{ c c | c } P & Q & P \leftrightarrow Q \\ \hline T & T & T \\ T & F & F \\ F & T & F \\ F & F & T \end{array} $$

Same trick as before: if $P$ is true we want the value of $Q$, and if $P$ is false we want the value of $\lnot Q$:

$$ iff \quad \equiv \quad \lambda p.\lambda q.\ p\ q\ (not\ q) $$

So, to summarise where we've got to:

$$ true \quad \equiv \quad \lambda a.\lambda b.\ a $$

$$ false \quad \equiv \quad \lambda a.\lambda b.\ b $$

$$ and \quad \equiv \quad \lambda p.\lambda q.\ p\ q\ p $$

$$ or \quad \equiv \quad \lambda p.\lambda q.\ p\ p\ q $$

$$ if \quad \equiv \quad \lambda p.\lambda a.\lambda b.\ p\ a\ b $$


  1. I will be the first to admit that this is a terrible definition of OO, mainly driven by my ignorance. 

  2. All things not necessarily possible with the lambda. Terms and conditions apply. 

Lambda Calculus 3 - Arithmetic with Church Numbers

Previously I've posted about the lambda calculus and Church numbers. We saw how we can encode numbers as functions using the Church encoding, but we didn't really show how we could do anything with those numbers.

But before we get into it, let's clear up some stuff about brackets...

Left association and you

Just as it's easier to write $\lambda nfx.$ than $\lambda n.\lambda f.\lambda x.$ - because we make the assumption that each application of the function returns a new function - so there is a way of writing out function application without having to use lots of parentheses.1

Where we would normally write

$$ f(x) $$

with parentheses, we could instead write

$$ f\ x $$

under the convention that application associates to the left. So if we had

$$ ((f(x)\ y)\ z) $$

we can write it as

$$ f\ x\ y\ z $$

and something like

$$ (g(x)\ f(x)) $$

is

$$ g\ x\ (f\ x) $$

We still need the parentheses to make sure that the $f$ and $x$ get bundled together. We'll need this convention as we go on, as things are going to get a little more parenthesis-heavy.

Add-one

OK, let's get back to the arithmetic.

Say we have the number three:

$$ three \ \equiv \ \lambda f.\lambda x.\ f\ (f\ (f\ x)) $$

(the function $f$ applied to $x$ three times)

And we wanted to get to the number four:

$$ four \ \equiv \ \lambda f.\lambda x.\ f\ (f\ (f\ (f\ x))) $$

(the function $f$ applied to $x$ four times)

How do we get from $three$ to $four$? Well, the difference is that we just need to apply $f$ one more time.

$$ four \ \equiv \ \lambda f.\lambda x.\ f\ (three\ f\ x) $$

We can encode the idea of applying $f$ one more time into a lambda function. We could call it $add-one$ or $increment$ but let's go with $succ$ for 'successor'.

$$ succ \ \equiv \ \lambda n. \lambda f. \lambda x.\ f\ (n\ f\ x) $$

The $n$ is the number we're adding one to - we need to bind $f$ and $x$ into the function because they'll need to have $n$ applied to them before we can apply $f$ the one extra time.

Another way to think of this is that the general signature for a number is $\lambda f.\lambda x.$, and that when we apply $succ$ to a number, we need to get back another number - something else with the signature $\lambda f.\lambda x.$

So the signature of $succ$ - and consequently any unary operation on a number - is $\lambda n.\lambda f.\lambda x.$, where $n$ is the number being changed.

In Clojure that looks like:

(fn [n] (fn [f] (fn [x] (f ((n f) x)))))

Yeah, it's a bit verbose in comparison to the lambda calculus version.2 All those parentheses, while great for being explicit about which functions get applied to what, make it a bit tough on the eyes.

What about Haskell?

\n f x -> f (n f x)

Bit more like the original, eh? Haskell has currying and left-association baked into its syntax, so its lambda expressions look almost exactly the same as the lambda calculus ones. You can see why it's so popular.3

Addition

Let's see if we can define addition.

First off, $addition$ is an operation that takes two arguments, two numbers. So we know it needs to look something like:

$$ \lambda m. \lambda n. \lambda f. \lambda x. $$

Where $m$ and $n$ are the numbers being added together. Now all we need to do is work out what comes after the dot.

We could define it in terms of $succ$ - all we need to do is apply $succ$ $m$ many times to $n$:

$$ \lambda m.\lambda n.\lambda f.\lambda x.\ m\ succ\ n\ f\ x $$

And this works,4 but we could probably write something both more intuitive and simpler.

What do we want as the result of $add$? We want a function that applies $f$ to $x$ $n$ many times, and then applies $f$ to the result of that $m$ many times.

$$ add\ (\lambda fx.\ f\ (f\ x))\ (\lambda fx.\ f\ (f\ (f\ x))) = \lambda fx.\ f\ (f\ (f\ (f\ (f\ x)))) $$

We can just write that out with the variables we've been given - first apply $f$ to $x$, $n$ many times.

$$ n\ f\ x $$

and then apply $f$ to that result $m$ many times

$$ m\ f\ (n\ f\ x) $$

giving us

$$ add\ \equiv\ \lambda n.\lambda m.\lambda f.\lambda x.\ m\ f\ (n\ f\ x) $$

The order of $n$ and $m$ doesn't matter, as it only changes the order in which the two batches of $f$s are applied.5
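We've been flitting between languages, so here's a sketch of $add$ in Scheme, with a couple of Church numbers and an increment function to try it on:

(define two   (lambda (f) (lambda (x) (f (f x)))))
(define three (lambda (f) (lambda (x) (f (f (f x))))))
(define inc   (lambda (n) (+ n 1)))

(define add
  (lambda (m)
    (lambda (n)
      (lambda (f)
        (lambda (x) ((m f) ((n f) x)))))))

((((add two) three) inc) 0) ;=> 5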

Multiplication

We've used the word 'times' a lot when talking about the application of $f$ onto $x$ above. But now we'll have to deal with real multiplication.

Before you try to reach for an answer, step back a little and ask yourself what the result ought to be, and what the Church arithmetic way of describing it would be.

Say we had the numbers two and three. If I was back in primary school I'd say that the reason that multiplying them together made six was because six was 'two lots of three' or 'three lots of two'.

So when I want to put this into Church arithmetic, I feel like saying 'six is the application of three lots of the application of two lots of the application of $f$ onto $x$'. Which is a mouthful, for sure, but looks like

$$ six\ \equiv\ \lambda f.\lambda x.\ ((three\ (two\ f))\ x) $$

or, without the parentheses

$$ six\ \equiv\ \lambda f.\lambda x.\ three\ (two\ f)\ x $$

$two\ f$ is a function that applies $f$ two times to whatever its next argument is. $three\ (two\ f)$ will apply $two\ f$ to its next argument three times. So it will apply it $3\ \times\ 2$ times - 6 times.

And so now we can move from the particular case to the general case; multiplication is:

$$ mult\ \equiv\ \lambda m.\lambda n.\lambda f.\lambda x.\ m\ (n\ f)\ x $$

"$m$ lots of ($n$ lots of $f$) applied to $x$", which is still a mouthful but

Exponentiation

So what could exponentiation be? Well, the first thing we know is that this time, order is going to be important - $2^3$ is not the same as $3^2$.

Next, what does exponentiation mean? I mean, really mean? When we did multiplication we were doing 'two lots of (three lots of $f$)'. But now we need to do 'two lots of something' three times. The 'three' part has to apply, not to the number of times we do an $f$, nor the number of times we do '$n$ lots of $f$', but rather to the number of times we do $n$ to itself.

Woah.

So if 'three' is the application of $f$ three times to $x$, we can say that $2^3$ is the application of $two$ three times to $f\ x$.

Even. Bigger. Woah.

Another way to look at it: a Church number is already encoding some of the behaviour of exponentiation. When we use $inc$ and $0$ as $f$ and $x$ we can think of the number $n$ acting as $inc^n$ - $inc$ done to itself $n$ many times.

This is more explicit if we try it with something other than increment - say double, aka 'times two'. Let's do it in Haskell - but please feel free to pick any language you like.

let timesTwo = \x -> 2 * x
let four = \f x -> f(f(f(f x)))

four timesTwo 1 -- 16

Four lots of timesTwo applied to 1 is 16 - that's $2^4$. All we need to do is use the number two instead, and apply the result to an $f$ and an $x$.

let two = \f x -> f(f x)
four two succ 0 -- 16

Sixteen again.

So the function for exponentiation - $m$ to the power of $n$ - is:

$$ exp\ \equiv\ \lambda m.\lambda n.\lambda f.\lambda x.\ n\ m\ f\ x $$

But remember $\eta$-reduction? We can just go directly to:

$$ exp\ \equiv\ \lambda m.\lambda n.\ n\ m $$

This is because you know the function you're left with after you've applied $n$ to $m$ is a number - it will take an $f$ and an $x$ - so you don't need to explicitly bind them in the outer function just in order to pass them unchanged to the inner one.
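In Scheme the η-reduced version is a one-liner - named church-expt here so as not to shadow Scheme's built-in expt:

(define church-expt (lambda (m) (lambda (n) (n m))))

;; two to the power of three
((((church-expt two) three) inc) 0) ;=> 8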

But that's just a nicety. The important thing is... we've finished!

Summary and Next!

This post looked at some simple arithmetic using Church numerals. We saw successor:

$$ succ\quad \equiv\quad \lambda n.\lambda f.\lambda x.\ f\ (n\ f\ x) $$

addition:

$$ add\quad \equiv\quad \lambda m.\lambda n.\lambda f.\lambda x.\ m\ f\ (n\ f\ x) $$

multiplication:

$$ mult\quad \equiv\quad \lambda m.\lambda n.\lambda f.\lambda x.\ m\ (n\ f)\ x $$

and exponentiation:

$$ exp\quad \equiv\quad \lambda m.\lambda n.\ n\ m $$

An interesting relationship between the last three: the $f$ moves along to the right as the operation becomes 'bigger'.

Next post we'll be taking a short break from arithmetic to take a look at logic using the lambda calculus.


  1. And I'm speaking as a mad Lisp fan, lover of parens where ever they are. 

  2. But still terse compared to the mess we'd get in Python. Or Ruby. Yeah, don't try it in Ruby. Oh, and I guess we could use the shorthand anonymous function syntax, but I think that'd look even messier... 

  3. For functional programming that is. 

  4. Get your pencil and paper out if you want to prove it! 

  5. The same will go for multiplication. We know that this has to be the case if we're representing these numbers and operations correctly, as they should display the commutative property. 

Lambda Calculus 2 - Church Numbers

Counting without numbers?

In the first post I wrote about the lambda calculus we looked at the basic syntax and a simple function that took two numbers and added them together:

$$ \lambda x.\lambda y.x + y $$

This might all look OK until I tell you that, in the untyped lambda calculus, the only primitive data type is a function.

A function. Not a bit, a byte, a string; not a number - a function.

So we should be a little suspicious of $\,\lambda x.\lambda y.x + y$ as this $+$ symbol needs to be defined as a function. Fair enough - addition feels like the sort of thing that could easily be a function.

But what would we apply to it? We need a number - like one or two. But we need to make them out of functions.

Wait, what? We need to make numbers out of functions?!

And this is where things start to get weird.

So what's a number anyway?

You will now be inducted into a sacred mystery that will allow you to make and understand Lisp jokes on the Internet. Be brave.

In a universe which only has functions - how would we count? Well, we'd have to do it with functions, obviously.

OK, sure - but that's not really getting us anywhere - let's take $2$ as a concrete example. How do I write a function that represents $2$?

Simple - we could just give it a name - like it was JavaScript:

const two = () => {}

Now you're obviously shouting "But that's cheating! What are these 'names' of which you speak? Are they made of functions too?"

And you'd be right. The thing is, we don't just want a symbol for $2$ - the numeral. What we want is a function that represents, in some way, the very essence of two-ness.

What I'm trying to get across here (without jumping to the solution immediately) is that the representations of numbers in the lambda calculus are not mere symbols; they capture a key behaviour that we associate with the idea of 'number'.

And that behaviour is repetition. When we say 'look at those two apples', we're expecting there to be an apple, and then another apple. In Church arithmetic a number is represented by a function that takes two values, and then applies the first value to the second value $n$ times, where $n$ is the number being represented.

Church numbers

So much for the theory, let's take a look at some real numbers.1 First up, the number one:

$$ \lambda f.\lambda x. f(x) $$

We accept a variable called $f$, we take another one called $x$, and we call $f$ with $x$ once. We're kinda hoping that $f$ turns out to be a function that takes a single argument, but as this is the lambda calculus and everything is a function that takes a single argument, we can be absolutely certain it is.

In JavaScript:

f => x => f(x)

And Scheme:

(lambda (f) (lambda (x) (f x)))

So if that's one, we can probably work out what two is, right?

$$ \lambda fx. f\ (f\ x) $$

And three?

$$ \lambda fx. f\ (f\ (f\ x)) $$

OK, so no peeking now. What's zero?

...

...

...

...

...

Did you work it out?

$$ \lambda fx. x $$

It's just ignoring the original function and returning the value it would've been applied to. The function $f$ has been applied to $x$ zero times.

Function composition and functional powers

There's a neater way of talking about this, with just a little more notation. We can abstract the idea of "first apply $f$, then apply $f$ to that result, then apply $f$..." out to the idea of function composition, which you might have seen in some languages. This is just building a new function out of two functions. The new function does exactly the same thing as calling one function after the other (in right-to-left order). This is usually written using the $\circ$ symbol:

$$ (f \circ f)\ x = f\ (f\ x) $$

And so three could become the (slightly) easier to read:

$$ \lambda f.\lambda x.\ (f \circ f \circ f)\ x $$

But we can go further. The idea of composing a function with itself so-many times can be written as a functional power - a 'function to the $n$'. Just like we can say $2^3 = 8$, so we can say:

$$ \lambda f.\lambda x.\ (f\ (f\ (f\ x)))\quad=\quad \lambda fx. (f\circ f\circ f)\ x\quad=\quad \lambda fx. f^3\ x $$

Implementing composition and powers of functions is left as an exercise for the reader in their favourite language.2

Playing around with the computer

I find there are two productive ways to play around with the lambda calculus and Church numbers when I'm trying to understand what's going on.

Firstly, and probably more obviously, try playing around with them in your favourite language (as long as it has some sort of anonymous function type). Say Python - if we were to write three as a Church numeral we'd have:

three = lambda f: lambda x: f(f(f(x)))

If we want to test this - to see if it does what we think it does - we just need a function to be $f$:

increment = lambda x: x + 1

and some value to play the role of $x$:

zero = 0

So then we just plug in those values:

three(increment)(0) #=> 3

We used three variables to hold the values above, but we could just inline them to get to something that looks a little more lambda-y:

(lambda f: lambda x: f(f(f(x))))(lambda x: x + 1)(0) #=> 3

Which translates to:

$$ (\lambda f.\lambda x.\ f\ (f\ (f\ x)))\ (\lambda x. x + 1)\ 0 = 3 $$

Both zero and increment use Python's built in number system. But we don't have to use numbers to test Church numbers. All we need are values that behave in the required way.3 For instance:

(define inc (lambda (x) (cons '() x)))

(define zero '())

(((lambda (f) (lambda (x) (f x))) inc) zero) ;;=> (())
(((lambda (f) (lambda (x) (f (f x)))) inc) zero) ;;=> (() ())
(((lambda (f) (lambda (x) (f (f (f x))))) inc) zero) ;;=> (() () ())

Lots of things will work - experiment!

Playing around with pen and paper

The second way I like to play with lambdas is with pen and paper. The simplicity of the syntax, and the very few transformations allowed on an expression,4 mean that it's possible to do the evaluation yourself. Let's try it with the above. I'm going to go through it step by step, noting which of the reduction steps is being performed at each line.

$$ \begin{array}{ c | l | c} & Expression & Reduction \\ \hline 0 & (\lambda fx.\ f\ (f\ (f\ x)))\ (\lambda x. x + 1)\ 0 \\ 1 & (\lambda x.\ (\lambda x.\ x + 1)\ ((\lambda x.\ x + 1)\ ((\lambda x.\ x + 1)\ x)))\ 0 & \beta \\ 2 & (\lambda x. (\lambda a.\ a + 1)\ ((\lambda b.\ b + 1)\ ((\lambda c.\ c + 1)\ x)))\ 0 & \alpha \\ 3 & (\lambda a.\ a + 1)\ ((\lambda b.\ b + 1)\ ((\lambda c.\ c + 1)\ 0)) & \beta \\ 4 & (\lambda a.\ a + 1)\ ((\lambda b.\ b + 1)\ 1) & \beta \\ 5 & (\lambda a.\ a + 1)\ 2 & \beta \\ 6 & 3 & \beta \end{array} $$

This is fun to try out, and while it's not much help when the expression is as simple as the one above, it gets pretty vital for me when I want to discover how more complicated expressions work.

In summary, the computer is great for checking that a lambda expression works, but I prefer to get the pen and paper out if I want to get a feel for what's going on - for what makes it work.

But ...

But what about the $+$ and $1$ and $0$ above? I said that there were only functions in the lambda calculus, aren't we still cheating a little bit?

We are. So in the next post let's define increment, add, multiply and maybe even exponentiation in terms of lambdas. Things are certain to get weirder.

Oh, and - you were promised the ability to understand jokes on the internet? Take a look at this - hopefully it makes some sense now you know what $one$ is.


  1. I mean, actually these are the natural numbers including zero, not actually the real numbers. Yay, maths joke! 

  2. I've always wanted to say that. But, seriously - implement them. It's fun! 

  3. I am thoroughly in debt to the amazing book The Little Schemer for this example. 

  4. $\alpha$-conversion and $\beta$-reduction - see the first post 

Lambda Calculus 1 - Syntax

The word 'lambda' comes up more and more the longer you work as a programmer. There it is as a keyword in Python for an anonymous function. Same again in Scheme. Oh look, there it is in Ruby. Look at the logos for Racket, Clojure, MIT. Lambdas everywhere. The interest/obsession goes back to the mathematical roots of Lisp, specifically Alonzo Church's lambda calculus.

Why?

Church was researching the foundations of mathematics - particularly computation. The notation he came up with is a way of expressing any computation at all - if a computer can do it, it can be written in the syntax of the lambda calculus. But, interestingly for us, it is not concerned with how the computer does it; rather it just has some simple rules about what a computer can do. It is, if you like, a very simple declarative programming language.

Syntax

The lambda calculus gets its name from its use of the Greek letter lambda - $\lambda$ - to represent a function that takes a single argument.

After the $\lambda$ comes the name that that single argument is bound to - say $x$.

And after that we write a $.$ to say that we're inside the 'body' of the function.

The $x$ is a bound variable - it stands for some value that the function can be applied to.

And to apply a value to a function, you call it by putting them next to each other (maybe with some brackets to make it clearer what's the value and what's the body).

That's it. That's everything in the lambda calculus - it's a syntax for writing about functions of one argument.

So where in JavaScript we have:

x => x + 1

and in Scheme we have

(lambda (x) (+ x 1))

in the lambda calculus syntax we have:

$$ \lambda x.x + 1 $$

Only one argument?

So you might see some limitations here. Only one argument, you may say - what if I need another one? Happily, we can just return another function to bind a new argument. Hooray - everything is curried.

So where in JavaScript:

x => y => x + y

and in Scheme:

(lambda (x) (lambda (y) (+ x y)))

so in the lambda calculus we have:

$$ \lambda x.\lambda y.x + y $$

Although usually1 we'd just write:

$$ \lambda xy.x + y $$

But we would of course remember that, if the function had only one argument applied to it, it would return a function that expected the next argument.

$\alpha$-conversion, $\beta$-reduction

These terms do absolutely nothing to dispel the feeling that the lambda calculus is a bit elitist. Look, even more Greek letters - it must be complicated and clever because just writing about it requires me to know how to say $\alpha$!

Really though, these are just big words for 'substitution' and 'application', the basics of which you probably picked up in high school algebra.

'α-conversion' (alpha-conversion) just means that we can change the name of a bound variable in a Lambda expression. So if we've got:

$$ \lambda xy.x + y $$

We can just change all the $x$s to $a$s

$$ \lambda ay.a + y $$

And the expression hasn't changed its meaning one iota.2

'β-reduction' (beta-reduction) is a little more complicated - what it means is that, when a value is applied to a function, we can substitute all the variables that name that argument with the value the function is applied to. For instance, in JavaScript:

(x => y => x + y)(5)

under β-reduction becomes

y => 5 + y

We unwrap the outer function and replace occurrences of its variable with the supplied value. In lambda land:

$$ (\lambda xy. x + y)\ 5 $$

Becomes

$$ \lambda y. 5 + y $$

(I threw some parentheses around that other lambda expression to make it clear that the $5$ was getting applied to the whole function and to separate it from the body $x + y$ - there are no hard and fast rules as far as that goes).

Next up - numbers made of functions. Wait, what?


  1. To save on the world's dwindling supply of $\lambda$s 

  2. Greek alphabet pun. BOOM! 

async/await in JavaScript in Five Minutes

When I first heard about async/await in JavaScript I was quite excited. Now I know about it, I'm not. Let me explain; instead of doing some Lisp this evening I decided to find out what the async/await fuss was about, and I think I can put it in a single sentence.

async/await is syntactic sugar to make Promises look more sequential

That's it. If you have the most basic understanding of Promises in JavaScript then you should be able to use async/await in about thirty seconds. There is nothing surprising here, which is a relief.

async

Think of this as an annotation to a function - a way of saying that, within this lexically scoped block, we'll be living in async/await world.

async function asyncAwaitLand () {
 // blah blah fishcakes
}

await

In async/await world, .then() is spelt await. And it's another annotation, this time to an expression. What larks. Here it is in Promise-speak:

function normalPromiseLand () {
    Promise.resolve('some value')
        .then(theResultOfAPromise => console.log(theResultOfAPromise))
}

And here's the same thing in nuspeak async/await

async function asyncAwaitLand () {
 const theResultOfAPromise = await Promise.resolve('some value')
 console.log(theResultOfAPromise)
}

Playing nicely with Promises

async functions return Promises, so if you want to start chaining them all together be my guest:

const arrowAsync = async () => {
    return 'the async annotation works with anonymous arrow functions too'
}

arrowAsync()
    .then(string => console.log(string))

Errors and Rejects

But how do you .catch() those long-awaited Promises when they go horribly wrong? Would it surprise you at all if I told you that you just use the normal sequential JavaScript error handling construct of try/catch?

function rejectedPromise () {
    return Promise.reject(new Error('boom'))
}

async function asyncAwaitLand () {
    try {
        const nope = await rejectedPromise()
        console.log('will never happen', nope)
    } catch (error) {
        console.log('I caught a rejected Promise:', error.message)
    }
}

So how do you reject() in an async function? You can guess, right? You just throw like it's sequential code.

async function throwingAsync () {
    throw new Error('boom')
}

function promiseLand () {
    throwingAsync()
        .then(nope => console.log('will never happen', nope))
        .catch(error => console.log('I caught an async throw:', error.message))
}

Are you still reading this?

async/await is boring - thank goodness. A pretty piece of syntactic sugaring that extends the language not one jot. If you understand Promises then you should already have some ideas for how you're going to use it - more of a way of tidying up some ugly looking bits of code than a complete rethink of your codebase.

If you don't understand Promises - stop reading articles about async/await and start reading articles about Promises.