Combinatory Completeness
S and K suffice for everything.
You have learned six named combinators: I, K, S, B, C, W. Each one captures a useful pattern -- identity, constant, distribution, composition, flip, duplication. It would be natural to assume that a rich library of combinators is needed to cover all of computation.
The truth is far more surprising. You do not need six. You do not need three. Two combinators -- S and K -- are enough to express every computable function. Every lambda term, no matter how complex, can be translated into an expression using only S and K. The identity combinator I, which seems as basic as anything could be, is itself derivable from S and K.
Two fixed patterns, applied to each other in different arrangements, generate everything. This is combinatory completeness.
Two Are Enough
A set of combinators is complete if every lambda term can be translated into an equivalent expression using only those combinators.
The combinators S and K form a complete basis:
- K = λx.λy.x (returns first argument, discards second)
- S = λx.λy.λz.x z (y z) (distributes third argument to first and second)
From S and K alone, you can build I, B, C, W, and every other combinator. More than that: any lambda term whatsoever -- booleans, numbers, recursion, data structures -- has an equivalent S/K expression.
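Before translating anything, it helps to see S and K in action. Below is a sketch that models them as curried Python lambdas; the encoding is illustrative, not part of the formal system:

```python
# K returns its first argument and discards the second: K x y = x
K = lambda x: lambda y: x

# S distributes its third argument to the first and second: S x y z = x z (y z)
S = lambda x: lambda y: lambda z: x(z)(y(z))

K("kept")("dropped")                                   # → "kept"
S(lambda a: lambda b: (a, b))(lambda a: a + 1)(10)     # → (10, 11)
```

In the second call, S feeds 10 to both of its function arguments: the pairing function receives 10 directly, and also receives the result of applying the successor to 10.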
The translation procedure is called bracket abstraction. It systematically eliminates lambda bindings, replacing them with combinations of S and K.
Deriving Identity
The identity combinator I = λx.x seems as basic as anything could
be. Surely it must be a primitive?
It is not. I can be defined as S K K.
Recall:
- S = λx.λy.λz.x z (y z) -- distributes z to x and y
- K = λx.λy.x -- returns first argument, discards second
When we write S K K, we are applying S to two copies of K. The result is a combinator that, given any argument, returns that argument unchanged -- exactly what I does.
This is the first hint of completeness: even the simplest combinator is already contained within S and K.
Verify S K K
We claimed I = S K K. Let's verify it by tracing the reduction.
Recall how S works:
S x y z = x z (y z)
S takes three arguments and distributes the third to the first and second.
Now apply S K K to an argument x:
- S K K x -- S receives K, K, and x
- = K x (K x) -- S distributes x to both copies of K
- = x -- K takes two arguments and returns the first, so K x (K x) reduces to x
The result is x -- the input returned unchanged. S K K is the identity function.
What Completeness Means
When we say S and K are a complete basis, we mean something precise:
For every lambda term M, there exists an S/K expression E such that E behaves identically to M on all inputs.
This is not just about a few useful functions. It covers:
- Every Church boolean (TRUE, FALSE, AND, OR, NOT)
- Every Church numeral (ZERO, ONE, TWO, ...)
- Every arithmetic operation (ADD, MUL, EXP, ...)
- Every data structure (PAIR, FST, SND, lists)
- Every recursive function (via the Y combinator)
- Every lambda term that can be written, period
The claim is universal: if you can write it in lambda calculus, you can write it with S and K.
Note what completeness does NOT mean: it does not mean we can decide whether two S/K expressions are equivalent. That problem is undecidable, just as it is for lambda terms.
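As one concrete instance of this universality, the Church booleans fall out almost immediately: TRUE = λx.λy.x is literally K, and one standard S/K rendering of FALSE = λx.λy.y is K I, i.e. K (S K K). A sketch in Python (the curried-lambda encoding is illustrative):

```python
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))
I = S(K)(K)

TRUE  = K     # λx.λy.x: selects the first argument
FALSE = K(I)  # K I x y = I y = y: selects the second

assert TRUE("then")("else") == "then"
assert FALSE("then")("else") == "else"
```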
The Translation Idea
Saying "every lambda term has an S/K equivalent" is a strong claim. How is it proved? By giving a mechanical procedure -- bracket abstraction -- that translates any lambda term into S and K.
The core idea: lambda calculus has three forms of expression:
- Variables: x
- Applications: M N
- Abstractions: λx.M
Variables and applications have straightforward S/K translations.
The challenge is abstractions -- eliminating the λ.
Bracket abstraction handles λx.M by cases:
- If x does not appear in M, use K: λx.M becomes K M (M ignores its argument, which is exactly what K does)
- If M is just x, use I: λx.x becomes I (which is S K K)
- If M is an application P Q, use S: λx.(P Q) becomes S (λx.P) (λx.Q), and then translate the sub-abstractions recursively
The procedure always terminates: each recursive step operates on smaller terms.
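The three cases above can be sketched as a small translator. The term encoding below (strings for variables, 2-tuples for applications, ('lam', var, body) triples for abstractions) is an illustrative choice, not part of the text:

```python
S, K = 'S', 'K'
I = ((S, K), K)  # I = S K K

def free_in(x, t):
    """Does variable x occur in term t (outside a rebinding of x)?"""
    if isinstance(t, str):
        return t == x
    if len(t) == 3 and t[0] == 'lam':
        return t[1] != x and free_in(x, t[2])
    return free_in(x, t[0]) or free_in(x, t[1])

def translate(t):
    """Eliminate every lambda, yielding a term built only from S and K."""
    if isinstance(t, str):
        return t
    if len(t) == 3 and t[0] == 'lam':
        return abstract(t[1], translate(t[2]))  # translate the body first
    return (translate(t[0]), translate(t[1]))

def abstract(x, m):
    """Bracket abstraction [x]m: a lambda-free term behaving like λx.m."""
    if m == x:
        return I                                # [x]x = I
    if not free_in(x, m):
        return (K, m)                           # [x]M = K M  (x unused in M)
    return ((S, abstract(x, m[0])), abstract(x, m[1]))  # [x](P Q) = S ([x]P) ([x]Q)

# Check behavior by mapping S and K back to Python functions:
def denote(t):
    if t == 'S': return lambda a: lambda b: lambda c: a(c)(b(c))
    if t == 'K': return lambda a: lambda b: a
    return denote(t[0])(denote(t[1]))

identity = translate(('lam', 'x', 'x'))
assert identity == I
assert denote(identity)(42) == 42

true = translate(('lam', 'x', ('lam', 'y', 'x')))  # λx.λy.x
assert denote(true)(1)(2) == 1
```

Note that this basic procedure produces correct but often oversized terms; practical translators add optimization rules, which the text does not cover.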
Translate Identity
The simplest non-trivial translation: λx.x -- the identity
function.
Using bracket abstraction, λx.x falls into the base case where
the body is just the bound variable itself. The rule says: replace
λx.x with I.
And since I = S K K, the identity function translates to an expression using only S and K.
This is the simplest example of bracket abstraction at work, but the same procedure handles any lambda term, no matter how complex.
Why Remarkable
Lambda calculus itself is already minimal: one binding mechanism
(λ), one computation rule (beta reduction). Yet even lambda
calculus uses an unbounded supply of variable names.
Combinatory logic goes further. S and K are fixed, closed terms
-- they contain no variables, no binding, no λ. They are two
specific patterns that never change. And yet, by applying them to
each other in different arrangements, every computable function can
be expressed.
Consider what this means:
- Church booleans: TRUE, FALSE, AND, OR, NOT -- all expressible as S/K combinations
- Church numerals: every number, expressed as patterns of S and K
- Arithmetic: ADD, MUL, EXP -- all of arithmetic, from two combinators
- Recursion: the Y combinator itself has an S/K expression
- Data structures: pairs, lists, trees -- all from S and K
Two building blocks. All of computation.
Minimalism
Two systems, two approaches to the same goal:
Lambda calculus starts with variables, abstraction (λ), and
application. It has one computation rule: beta reduction
(substitution). From this single operation, applied repeatedly,
all of computation emerges.
Combinatory logic starts with S, K, and application. It has no variables and no binding. Computation proceeds by rewriting according to the fixed rules of S and K. From two combinators, all of computation emerges.
These systems are equivalent in expressive power. Every lambda term can be translated to an S/K expression (bracket abstraction), and every S/K expression can be translated back to a lambda term (by expanding the definitions of S and K).
Both are Turing-complete: they can compute anything that any computer can compute. The lambda calculus achieves this through one flexible mechanism (substitution with binding). Combinatory logic achieves it through the interaction of two fixed patterns.
Ready to test your understanding?
Bitwit uses spaced repetition to help you truly master concepts like this—not just read about them. Each card generates with different values, so you can't just memorize answers.
Practice Combinatory Completeness →