TDD: A Design Discipline, Not a Testing One
One of the most persistent misconceptions about Test-Driven Development is that it is a technique for writing tests.
The name itself is misleading: we hear “test” and think verification, coverage, quality assurance. Yet Kent Beck, its creator, has repeatedly stated that TDD is a design technique, not a testing technique. The tests are merely a byproduct, albeit a valuable one, of a process whose primary purpose is to guide the emergence of software architecture.
Confusing TDD with a testing strategy is like confusing the scaffolding with the building.
The Red-Green-Refactor Cycle
The red-green-refactor cycle embodies this design orientation.
Red Phase: Interface First
The red phase, writing a failing test, forces thinking about the interface before the implementation:
- How will I call this code?
- What data must I provide it?
- What should I get in return?
These questions, asked from the code consumer’s perspective, naturally produce more ergonomic APIs than those designed from inside the implementation. We don’t ask “how will I structure my internal data?” but “how does my collaborator want to interact with this module?”
The test becomes an exacting first client that shapes the public interface.
Green Phase: Resisting Over-Engineering
The green phase (making the test pass as simply as possible) seems naive but serves a precise objective: resisting the temptation of over-engineering.
We implement only what’s necessary to satisfy the specified behavior. No premature generalization, no speculative abstraction. The code remains minimal, anchored in concrete needs.
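Continuing the hypothetical `apply_discount` example, the green phase adds only what the one failing test demands, and nothing more:

```python
# Green phase: the simplest implementation that satisfies the test.
# No currency types, no validation, no configurable rounding --
# none of that is demanded by the specified behavior yet.

def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate)

def test_applies_percentage_discount():
    assert apply_discount(price=100.0, rate=0.25) == 75.0
```

Resisting the urge to generalize here is deliberate: future tests, not speculation, will tell us which abstractions are actually needed.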
Refactor Phase: Pattern Emergence
The refactor phase is where restructuring happens (extracting functions, introducing abstractions, eliminating duplication), but only once patterns become visible across multiple tests.
Design emerges from the accumulation of concrete cases rather than being imposed a priori.
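A hypothetical sketch of this phase, still in the discount example: suppose later tests for a fixed-amount discount exposed the same "never go below zero" rule in two places. Only then, with the duplication visible, is the shared rule extracted:

```python
# Refactor phase: several tests have accumulated, and the same clamping
# logic appeared in more than one place. The pattern is now visible,
# so it is extracted -- not invented up front.

def _clamp_to_zero(amount: float) -> float:
    # Shared rule surfaced by the tests: a price never goes negative.
    return max(amount, 0.0)

def apply_percentage_discount(price: float, rate: float) -> float:
    return _clamp_to_zero(price * (1 - rate))

def apply_fixed_discount(price: float, amount: float) -> float:
    return _clamp_to_zero(price - amount)
```

The helper `_clamp_to_zero` earns its existence from concrete cases in the test suite, which is exactly the emergence the cycle is built to produce.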
Emergence as Philosophy
This emergence constitutes the philosophical core of TDD.
Rather than designing architecture in isolation, drawing UML diagrams, then implementing this predetermined plan, we let the structure reveal itself test by test.
Each new test exerts pressure on the existing design: if the test is hard to write, it’s a signal that the tested code is poorly decoupled, its responsibilities confused, its dependencies too rigid. Test difficulty becomes immediate feedback on design quality.
Experienced TDD practitioners recognize these frictions and use them as a compass to guide their refactorings.
Tests as Fossil Record
The tests resulting from this process have a particular quality: they document expected behavior rather than internal implementation. They form an executable specification of the system, expressed in terms of concrete use cases.
But their primary value remains what they provided during development: serving as leverage for decoupled, cohesive design adapted to real needs.
A test suite written after the fact may verify the same behavior, but it won’t have exerted this formative pressure on the architecture. TDD doesn’t produce better tests; it produces better design, of which the tests are the fossil record.
Want to dive deeper into these topics?
We help teams adopt these practices through hands-on consulting and training.
Email us at contact@evryg.com