Part of the series: Combinatory Programming

# Combinatory Programming

## To The Programmer

A combinator is a kind of function. Specifically, it’s a function that applies its arguments—and only its arguments—to each other in a particular shape and order. The number of possible shapes is of course infinite; but in practice a few fairly simple shapes crop up more often than others, and those specific shapes have names. A very few are so common and so famous that their names and shapes are already well-known to programmers; because they’re well-known, there are often functions available in standard libraries that apply their arguments in the corresponding shapes. Function composition is one: many languages recognize that f(g(x)) is a common pattern, and that, for instance, writing xs.map(x => f(g(x))) is sufficiently common, inconvenient, and at times error-prone that they allow the programmer to write xs.map(compose(f, g)) instead.

It therefore seems worthwhile to push at the boundaries of this set of well-known shapes; are there others that crop up often enough that we can give them names, and in so doing abstract away some of the repetitive guts of our code?

## The Field

Combinatory logic—the field from which we draw the name and, loosely, the concept of combinators—is adjacent to computer programming in several different directions. It’s often of interest to computer programmers, though usually because those programmers are interested in logic itself, recreational mathematics, or programming language design; thus much of the available material, if you squint, can be applied to how we write our programs and whether we can write them better—but you’ve got to squint. The material is rarely straightforwardly applicable for the everyday programmer.

It is, for instance, often written in a minimal lambda calculus syntax—$$\lambda{}xy.xyy$$ and the like—which is more readable to the theorist. It often also dwells on minimal systems, like the SKI calculus, whose simple elements are sufficient for the construction of arbitrarily complex expressions. A useful and intriguing theoretical basis but no more useful to the working programmer than learning how to write programs for a Turing machine.

We must also talk about The Bird Book. Smullyan’s To Mock a Mockingbird is doubtless the most popular treatment of combinatory logic. It’s beloved by many people who love math and logic puzzles, programmers and non-programmers alike. In it, we’re introduced to the various combinators under the guise of birds whose names begin with the same letter as their traditional academic names. 𝗞, for instance, is introduced as the Kestrel. Instead of dealing with function application, Smullyan’s birds say each other’s names.

Is this a good idea? From a pedagogical perspective, I’m not sure; the metaphor has never landed with me, and the book itself is dense with other logical and mathematical verbiage, so I don’t know how far it gets you.

Where I do feel on firmer footing is to claim that whatever the merits of calling $$\lambda{}xy.x$$ “Kestrel” in the context of recreational mathematics, that name becomes even less useful in the context of writing computer programs.

In the modern era, Conor Hoekstra has probably done the most to document and popularize the use of combinators within computer programming—or at least to view computer programming and programming languages according to the extent to which they enable the use of combinatory function application forms, and the extent to which that use results in more concise, more elegant programs. He is the maintainer of https://combinatorylogic.com, which has quite a few resources worth reading, listening to, and watching (many of which he himself has produced).

Hoekstra is an array language enthusiast. This is not a coincidence: one of the most dramatic effects obtainable in programs through the use of combinators is that of tacit programming, that is, code that “does not refer to values by name”, and it’s in the discourse around the array languages—starting with J and expanding to the wider family of so-called “Iversonian” languages—that one most reliably encounters discussion and celebration of tacit forms.

And it’s Hoekstra who has most effectively beaten the drum that the array languages’ so-called trains—one of the critical ways that those languages enable tacit code—are best understood as extremely concise syntactic realizations of particular and particularly useful combinators—in the case of J, for instance, the 𝗦 and 𝝫 combinators, among others.

The work that follows can be understood as an attempt to continue in the direction in which he has set out; in part by embracing that particular class of functions as worthy of consideration in the writing of computer programs, and in part by more completely separating the topic of combinators from the array-language context in which Hoekstra most often discusses them.

Hoekstra holds that the array languages are by far the best place to make full use of combinators in programming; as a fellow array language enthusiast, I don’t disagree. But I think that in order to present combinators clearly to the wider community, and to hopefully convince that community of their utility, it’s worth going further than he has in presenting them in as “neutral” a context as possible; the array languages are fascinating and powerful, but also forbidding and at times obscure.

I want to try to isolate these concepts not just to make them as accessible as possible: it also remains an open question whether a function for 𝝫 would be of use to the programmer in a more “generic” context like, say, a Javascript codebase that made sufficient use of higher-order functions—or whether its utility only emerges in the full syntactic and operational context of a language like J.

## Names

I want to propose that the value of tacit programming is analogous to the value of functional programming: when applied in the right places, it allows the programmer to “abstract away” repetitive behaviours consisting of many moving parts into operations that are conceptually simpler, more uniform, and can be applied to a wide variety of situations. When those core operations have been intuitively grasped, the resulting code is easier to understand; and the appearance of fewer of those moving parts means fewer places for bugs.

Stripped of all of its theoretical apparatus, the higher-order function map—well-known to many programmers, functional and otherwise—is “just” the simplest form of structure-preserving iteration. Nevertheless, its value in an everyday programming context is that this form of iteration is easy to conceptualize, form a mental model of, and predict the behaviour of. Critically, incorporating the universal name map into our vocabulary allows us greater economy with names inside of our code. Observe a paradigmatic candidate for map:

```javascript
const adjustedScores = []
for (const score of scores) {
  adjustedScores.push(Math.abs(score))
}
```


becomes

```javascript
const adjustedScores = scores.map(Math.abs)
```


We see that we have saved quite a bit of typing. But we have also eliminated one name, score, entirely; the conceptual inventory of our code has decreased by precisely one. We might even then say that the number of names eliminated from our code, without any of its sense being lost, is a crude measure of the degree to which we have been able to abstract it.

Just as map—once we can recognize the patterns in our code that it can replace—allows us to simplify both the code’s text and its conceptual inventory, so too do the combinators: giving universal names to certain fixed patterns of function application, in exchange for a greater economy of the local, domain-specific names in our code.

## Vocabulary

So we come to the problem of names. I’ve been using bold sans-serif single letters up till now, because that’s what Haskell Curry and W. V. O. Quine did in the literature of combinatory logic, following Schönfinkel. These are all well and good in a theoretical context. But for our purposes, the name 𝗕 is no better than Smullyan’s Bluebird. Neither one gives us the slightest conceptual toehold when it comes to remembering what it does. So we call it function composition instead; the relationship between “composition” and f(g(x)) is not unimpeachably pellucid, but it’s a start.

The way forward is less obvious with the more exotic patterns. In J, we call 𝝫 a “fork”, because that’s what a diagram of its function application looks like; in Haskell, they call it liftA2, for category-theoretic reasons that we won’t go into. Fork isn’t that bad, but it’s not terribly suggestive; and it also means something rather concrete in most computer systems.

Of the many, many combinators in the forest, there are some that are useful to programmers. These are the ones that correspond to the most common patterns of function application that your average programmer tends to encounter, and thus, the ones that it’s worthwhile to give names to. A very few—I count identity and compose—are so common that they have names that are widespread and sufficiently meaningful. As to the rest: we might imagine a world where, someday, they’re part of the standard library of every programming language. To get there, their corresponding patterns need to be trivially recognizable by any programmer, as trivially as x => x. To get there: I can’t escape the conclusion that we need to give them better names. Names that relate concretely, or as concretely as possible, to what they do; names that are concise but not at the expense of their meanings.

To go with those names:

• We can describe the function application patterns in a function syntax that is more recognizable to everyday programmers. I will use Javascript’s arrow syntax (see x => x above), not because I particularly like it, but because it’s widespread.
• We can offer motivations for their use: not as theoretical bases of formal systems, or as recreation, but in something approximating actual programs.¹

What follows is such an attempt. In cases where a combinator already has a common name, or where some literature—the Haskell standard library, array language discourse—has established a sufficiently reasonable candidate, I’ll use that; otherwise I’ll try to come up with something myself.

For some existing motivating examples, see:

If convenient, I will reproduce them shamelessly here.

Further, I’m indebted to the members of the APL Farm for additional motivating examples.

# A Practical Combinator Library

## identity

```javascript
const identity = x => x
```


### A Motivation: No-Op

identity is easily understood as a no-op; it’s in the context of other functional programming techniques that the need for such a function arises. Whereas, in a less functional style, we might have an if statement combined with a mutating operation, when using techniques from functional programming we can easily find ourselves needing a function which means “do nothing”.

```javascript
const maybeAbs = shouldNormalizeNegatives ? Math.abs : identity
return nums.map(maybeAbs)
```


## constant

```javascript
function constant(x) {
  return y => x
}
```


### A Motivation

The case for constant is similar to that for identity; in an imperative context, it is almost always superfluous. In a context making use of higher-order functions, it often has a role to play.

```javascript
const maybeRedact = shouldRedactNames ? constant("**REDACTED**") : identity
return names.map(maybeRedact)
```


## compose

```javascript
function compose(f, g) {
  return x => f(g(x))
}
```


### A Motivation: Data Pipelining

The following example is restricted in scope, but only because for simplicity’s sake we’ve defined compose to take exactly two arguments. An industrial strength compose should probably take any number of functions, where the extension from 2 to $$n$$ is hopefully self-evident.

```javascript
const cleanDatum = compose(removeUndefineds, coalesceNaNs)
const cleanedData = dirtyData.map(cleanDatum)
```
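As a sketch of what that extension might look like—composeAll is a hypothetical name, and application proceeds right to left, matching compose(f, g) above:

```javascript
// A variadic compose: composeAll(f, g, h)(x) === f(g(h(x))).
// composeAll is a hypothetical name, not a standard library function.
function composeAll(...fns) {
  return x => fns.reduceRight((acc, fn) => fn(acc), x)
}

const inc = n => n + 1
const double = n => n * 2
composeAll(Math.abs, double, inc)(-5) // => 8
```

Note that composeAll with no arguments reduces to identity, which is a pleasing consistency.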



### A Motivation: Absolute Difference

Just as we can imagine a variadic compose, where compose(f, g, h) is equivalent to compose(f, compose(g, h)), and so on: we can also imagine that an industrial strength compose might return a variadic function, that is, that the result of compose should pass not just x to the innermost function but ...x, with all subsequent functions expecting a single argument.

```javascript
function composeN(f, g) {
  return (...x) => f(g(...x))
}
const absoluteDifference = composeN(Math.abs, _.subtract)
return absoluteDifference(9, 13) // => 4
```


## apply

```javascript
function apply(f) {
  return xs => f(...xs)
}
```


### A Motivation

When xs contains a single value, apply is redundant; apply(f)([x]) is equivalent to f(x). When f is a function that takes multiple arguments, there’s often syntactic sugar to apply it to some variable xs that contains multiple values: as seen above, in Javascript, it’s the spread operator ... (there’s also the appropriately-named prototype method apply). In Janet, it’s splice, spelled ;, as in (f ;xs). The functional form apply, on the other hand, allows such applications to be expressed tacitly, in a functional style.

```javascript
const basesAndExponents = [[0, 1], [1, 2], [3, 5], [8, 13]]
return basesAndExponents.map(apply(Math.pow)) // => [ 0, 1, 243, 549755813888 ]
```


## flip

```javascript
function flip(f) {
  return (x, y) => f(y, x)
}
```


### A Motivation

We could imagine standalone functions to permute any number of arguments; in practice, three or more arguments that need to be rearranged should probably be bound to local variables and called explicitly. On the other hand, when two arguments need to be reversed, the effect is quite intuitive.

```javascript
const exponentsAndBases = [[0, 1], [1, 2], [3, 5], [8, 13]]
return exponentsAndBases.map(apply(flip(Math.pow))) // => [ 1, 2, 125, 815730721 ]
```


## duplicate

```javascript
function duplicate(f) {
  return x => f(x, x)
}
```


### A Motivation

```javascript
const square = duplicate(_.multiply)
return square(6) // => 36
```


## left

```javascript
const left = (x, y) => x
```


## right

```javascript
const right = (x, y) => y
```
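A minimal sketch of left and right in action, combined with apply from earlier; the definitions are repeated so the snippet stands alone:

```javascript
// Selecting one side of paired data with left and right.
// (apply, left, and right are repeated from above so this runs standalone.)
const apply = f => xs => f(...xs)
const left = (x, y) => x
const right = (x, y) => y

const pairs = [[1, "one"], [2, "two"], [3, "three"]]
pairs.map(apply(left))  // => [ 1, 2, 3 ]
pairs.map(apply(right)) // => [ "one", "two", "three" ]
```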


## recombine²

```javascript
function recombine(f, g, h) {
  return x => f(g(x), h(x))
}
```


### A Motivation

recombine can be seen as a generic form for functions that need to reuse their arguments in more than one position.

We’ll appropriate the canonical example from demonstrations of J: the arithmetic mean of a sequence of values is the sum of those values divided by the length of the sequence.

```javascript
const mean = recombine(_.divide, _.sum, _.size)
return mean([0, 1, 1, 2, 3, 5, 8, 13]) // => 4.125
```


### A Motivation: Min-max / Span of Values

```javascript
const minMax = recombine(Array.of, _.min, _.max)
const spanOfValues = recombine(_.subtract, _.max, _.min)
const values = [1, 1, 2, 3, 5, 8, 13]

return minMax(values) // => [ 1, 13 ]
return spanOfValues(values) // => 12
```


### A Motivation: Plus or Minus

In the same way that we can imagine a variadic extension of the higher-order function produced by compose, we can imagine one for the higher-order function produced by recombine. It can be extended to pass ...x to both g and h, while preserving the existing behaviour.

```javascript
function recombineN(f, g, h) {
  return (...x) => f(g(...x), h(...x))
}
const plusOrMinus = recombineN(Array.of, _.add, _.subtract)
return plusOrMinus(13, 8) // => [ 21, 5 ]
```


### A Motivation: Splitting Arrays

In researching this example, I was astonished to learn that there’s no native way in the Javascript standard library to split an array at a given index.

```javascript
const splitAt = recombineN(Array.of, _.take, _.drop)
return splitAt([0, 1, 1, 2, 3, 5, 8, 13], 5) // => [ [ 0, 1, 1, 2, 3 ], [ 5, 8, 13 ] ]
```


### A Motivation: Is Palindrome, Harshad Numbers, eachValueIsUnique

There’s a special case of recombine that we might see cropping up fairly often: an operation between some value, and a function of that same value.

If we like, we could spell that thus:

```javascript
x => f(x, g(x))
```


But we can also spell it as an application of recombine that has identity as either g or h. This has the same behaviour:

```javascript
// Note: _.reverse mutates its argument (and so would compare the array
// with itself); we use a non-mutating reverse instead.
const isPalindrome = recombine(_.isEqual, x => [...x].reverse(), identity)
return isPalindrome([0, 1, 0]) // => true

/**
 * A Harshad number is any number that is divisible by the sum of its digits.
 */
const isHarshadNumber = recombine(isDivisible, identity, sumOfDigits)

const eachValueIsUnique = recombine(_.isEqual, identity, _.uniq)
return eachValueIsUnique([0, 1, 1, 2, 3, 5, 8, 13]) // => false
return eachValueIsUnique([0, 1, 2, 3, 5, 8, 13]) // => true
```


Since there’s no great and widespread name for this special case, I’ll prefer to spell it in terms of recombine and identity.

### A Motivation: Percentage difference

The left and right functions are fairly specialized but can come in handy in the cases covered by recombineN:

```javascript
const percentageDifference = recombineN(_.divide, _.subtract, right)
return percentageDifference(12, 10) // => 0.2


As with duplicate: to go with left and right we can imagine functions that return the third, fourth, etc. argument that they’re given, but it might get unwieldy for functions of three arguments or more.
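That family of functions can also be collapsed into a single argument-selector; nth here is a hypothetical name (lodash ships something similar as _.nthArg):

```javascript
// nth(n) returns a function that yields its (zero-indexed) nth argument;
// nth(0) and nth(1) recover left and right.
const nth = n => (...args) => args[n]

const third = nth(2)
third("a", "b", "c") // => "c"
```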

## under³

```javascript
function under(f, g) {
  return (x, y) => f(g(x), g(y))
}
```


### A Motivation

```javascript
const isAnagram = under(_.isEqual, _.sortBy)
return isAnagram("live", "evil") // => true
```


### A Motivation: areAnagrams

Because under applies the same g to each argument passed to the resulting higher-order function, it’s also fairly intuitive to generalize that function from 2 arguments to $$n$$.

```javascript
function underN(f, g) {
  return (...xs) => f(...xs.map(g))
}
// _.isEqual compares only its first two arguments, so for more than
// two values we need a variadic equality check.
const allEqual = (x, ...ys) => ys.every(y => _.isEqual(x, y))
const areAnagrams = underN(allEqual, _.sortBy)
return areAnagrams("live", "evil", "veil") // => true
```

1. Javascript syntax is widespread and widely understood; unfortunately, the base Javascript ecosystem, by itself, is not great for functional programming. Much of its functionality lives in operators or object methods, which can’t easily be passed to higher-order functions.

In these examples we’ll often make use of the lodash library (whose functions appear as _.multiply, _.subtract, and so on) to fill in some of these gaps.

2. The name for this combinator, which the literature calls 𝝫 and which J calls fork, is a novel suggestion of Nate Pinsky.

3. The name for this combinator, which the literature calls 𝝭, collides—unfortunately—with some existing nomenclature in J. J’s Under behaves analogously, in that it applies some g to one or more arguments and then applies some f to the results. The difference is that J’s version then attempts to undo g by calling the so-called obverse of g on the result of the call to f.

The collision is regrettable, but nevertheless the most suggestive metaphor for 𝝭 seems to be that it calls f under g.
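For the curious, J’s behaviour can be sketched in Javascript, assuming the caller supplies g’s inverse by hand (Javascript has no notion of an obverse, and underJ is a hypothetical name):

```javascript
// J-style Under: apply g to each argument, f to the results,
// then g's inverse to undo g.
function underJ(f, g, gInverse) {
  return (...xs) => gInverse(f(...xs.map(g)))
}

// Multiplication performed "under" the logarithm:
const multiplyViaLogs = underJ((a, b) => a + b, Math.log, Math.exp)
multiplyViaLogs(4, 8) // ≈ 32, up to floating-point error
```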
