Total and Partial Functions: The Type’s Promise
A total function is one that terminates and produces a valid value for every input in its declared domain.
The Signature’s Promise
If a function has the signature `A → B`, totality guarantees that for every inhabitant of `A` we will obtain an inhabitant of `B`: without raising an exception, without looping forever (the function never terminating), and without crashing (the function lying, in a sense, about what it promises).
A partial function, on the other hand, only keeps this promise for a subset of its domain: it may diverge, raise an exception, or return null for certain inputs.
The distinction seems academic, but it touches the heart of what it means to trust a type signature.
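As a minimal TypeScript sketch (the function names here are illustrative, not from the article): both functions below share the signature `(n: number) => number`, but only the first keeps that promise for every input.

```typescript
// Total: every number in, a valid number out.
const double = (n: number): number => n * 2;

// Partial: the signature still claims number → number, but the
// implementation breaks the promise for part of its domain.
const reciprocal = (n: number): number => {
  if (n === 0) throw new Error("division by zero"); // crashes on 0
  return 1 / n;
};
```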
Disguised Partial Functions
Partial functions are omnipresent in everyday programming, often disguised as total functions:
- `head` lies about its type: it fails on the empty list
- A numeric parsing function promises a number but may return `NaN` or raise an exception
- `dict.get(key)` suggests we’ll get a value, but what happens if the key is absent?
These functions violate the implicit contract of their signature: the type claims to cover all cases, but the implementation excludes some.
The developer must know which cases are problematic and handle them by hand: knowledge that is not encoded in the type system. That knowledge adds mental overhead and makes the program more fragile.
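To make the pattern concrete, here is a rough TypeScript rendering of the same kinds of lies (the `head` helper is hypothetical; `parseFloat` and `Map.get` are real APIs):

```typescript
// A hand-rolled `head`: the signature claims to return a T,
// yet it throws on the empty array.
const head = <T>(xs: T[]): T => {
  if (xs.length === 0) throw new Error("empty list");
  return xs[0];
};

// `parseFloat` is typed `(string) => number`, but "abc" yields NaN:
// a value the signature says nothing about.
const n: number = parseFloat("abc"); // NaN

// By contrast, TypeScript's `Map.get` is honest: its return type
// `V | undefined` makes the possible absence visible to the caller.
const prices = new Map<string, number>([["apple", 2]]);
const price = prices.get("banana"); // number | undefined
```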
Making a Function Total
Making a partial function total consists of aligning its type with its actual behavior. Two main strategies are available:
- Restrict the domain: rather than `List A → A`, we write `NonEmptyList A → A`, shifting the proof responsibility to the caller.
- Expand the codomain: `List A → Option A` explicitly admits the possibility of absence.
In both cases, the type becomes honest: it promises exactly what it can deliver.
Alexis King’s article Parse, Don’t Validate favors the first approach; traditional functional languages often employ the second via Option or Either. At evryg, we speak of “constraining upstream” (at the level of the function’s argument types - the domain) or “expanding downstream” (at the level of the return type - the codomain).
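A possible TypeScript sketch of both strategies, using `head` as the running example (the `NonEmpty` and `Option` types below are illustrative definitions, not a particular library’s):

```typescript
// Constrain upstream: restrict the domain.
// A non-empty array type: the caller must establish non-emptiness
// (typically at a parsing boundary) before calling.
type NonEmpty<T> = [T, ...T[]];

const headTotal = <T>(xs: NonEmpty<T>): T => xs[0]; // cannot fail

// Expand downstream: widen the codomain.
// Absence is now an explicit, type-checked possibility.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

const headOption = <T>(xs: T[]): Option<T> =>
  xs.length > 0 ? { kind: "some", value: xs[0] } : { kind: "none" };
```

With the first version, the impossible case is ruled out before the call; with the second, the caller is forced to inspect the result. Either way, the signature now tells the whole truth.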
The Logical Dimension: Curry-Howard
From the perspective of Curry-Howard, this distinction between total and partial functions takes on a logical dimension.
- A total function constitutes a constructive proof that “`A` implies `B`”: for any proof of `A`, we can construct a proof of `B`. This proof is given by the very existence of the function, correctly implemented.
- A partial function proves no such thing: it only asserts that sometimes, from certain `A`s, we can obtain a `B`. The rest of the time, this function lies.
Proof assistants like Coq or Agda require totality by default, because a non-terminating function would allow “proving” anything, including contradictions.
The termination checker verifies that each recursion progresses toward a base case, guaranteeing that the proof-program always completes.
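As an illustration in Lean, a proof assistant in the same family as Coq and Agda (the definitions below are a sketch, not taken from the article): a structurally recursive function is accepted as total, and a total function literally serves as a proof of an implication.

```lean
-- Structural recursion: the argument shrinks at every call, so the
-- termination checker accepts the definition as total.
def listLength {α : Type} : List α → Nat
  | []      => 0
  | _ :: xs => 1 + listLength xs

-- Curry-Howard: a total function of type `p ∧ q → q ∧ p`
-- is a constructive proof that "p and q" implies "q and p".
def andSwap {p q : Prop} : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```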
Toward Totality in Practice
In practice, absolute totality is an ideal rarely achieved in mainstream languages. Sources of partiality are numerous:
- Side effects
- Exceptions
- Network timeouts
- Memory exhaustion
These cases escape conventional types.
However, striving toward totality considerably improves code robustness. Each partial function converted to a total function is one fewer trap for the future developer, and one fewer 3 AM alert.
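One concrete (hypothetical) example of such a conversion in TypeScript: wrapping the throwing `JSON.parse` into a total function over an explicit result type.

```typescript
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

// JSON.parse is partial: it throws on malformed input.
// safeParse is total: every string maps to a Result, so the caller
// is forced by the type to deal with the failure case.
const safeParse = (raw: string): Result<unknown, string> => {
  try {
    return { ok: true, value: JSON.parse(raw) };
  } catch (e) {
    return { ok: false, error: e instanceof Error ? e.message : String(e) };
  }
};
```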
Unexpected production exceptions have a human cost: on-call duty, burnout, turnover. The best engineers leave when they spend more time firefighting than building. Total functions mean fewer surprises, calmer on-call rotations, and a team that stays.
Types become reliable contracts rather than optimistic approximations. The compiler, armed with honest types, can finally fulfill its role as verifier: not of mere syntax, but of semantic properties of the program expressed as static types.
Want to dive deeper into these topics?
We help teams adopt these practices through hands-on consulting and training.
Email us at contact@evryg.com