The Seam
Every system has seams. The places where one module ends and another begins. Where your code calls a library. Where your application talks to a database. Where your process hands off to someone else’s process. These boundaries look like implementation details, but they’re actually where most of the important engineering decisions get made.
Where Bugs Live
If you could map every bug you’ve ever encountered to a location in the system, you’d find that they cluster at boundaries. Not evenly distributed across the codebase, but concentrated at the seams – the places where one assumption meets another.
The function that parses user input and hands it to the business logic layer. The serialization boundary where an in-memory object becomes bytes on the wire. The point where your synchronous code calls an asynchronous service and has to deal with the response arriving later, or not at all.
These boundaries are where type systems get stretched, where error handling gets complicated, where the contract between “my code” and “their code” has to be negotiated in real time. A function operating purely within its own module can rely on invariants it controls. A function that crosses a seam has to defend against a world it doesn’t.
This isn’t a failure of engineering. It’s a fundamental property of systems. Anywhere two things meet, there’s a translation layer. Translation is where meaning gets lost.
The Assumption Gap
Every module makes assumptions. The database assumes queries will be well-formed. The API assumes authentication has already happened. The rendering layer assumes the data it receives has been validated.
Within a single module, these assumptions are usually consistent. The author who wrote the validation also wrote the code that depends on it. They share a mental model of what’s true at each point in the execution.
But at the seam between modules, assumptions collide. Module A assumes it’s sending UTC timestamps. Module B assumes it’s receiving local time. Both are internally consistent. The bug lives in the gap between them, in the place where neither module’s tests look, because each module’s tests only verify its own assumptions.
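The timestamp collision above can be made concrete with a minimal sketch in Python (the essay names no language, and all function names here are hypothetical):

```python
from datetime import datetime, timezone

# Module A serializes an event time, assuming "naive string means UTC".
def serialize_event_time(dt: datetime) -> str:
    return dt.strftime("%Y-%m-%dT%H:%M:%S")  # the UTC offset is silently dropped

# Module B parses the same string, assuming "naive string means local time".
def parse_event_time(s: str) -> datetime:
    naive = datetime.strptime(s, "%Y-%m-%dT%H:%M:%S")
    return naive.astimezone()  # attaches the machine's local timezone

sent = datetime(2024, 3, 1, 12, 0, tzinfo=timezone.utc)
received = parse_event_time(serialize_event_time(sent))
# The instant has shifted by the local UTC offset. On a machine running in
# UTC the bug is invisible - which is exactly why neither module's unit
# tests, each checking only its own function, will ever catch it.
```

Each function passes its own tests; the defect only appears when you compare `sent` and `received` across the seam.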
I’ve seen this pattern repeat endlessly: two well-tested, well-designed components that work perfectly in isolation and fail spectacularly together. The failure isn’t in the components. It’s in the seam.
Designing Good Seams
If seams are where problems concentrate, then designing good seams is one of the highest-leverage activities in software architecture. And yet, seam design is rarely discussed as its own discipline. We talk about API design, about module boundaries, about interface contracts – all of which are aspects of seam design, but we rarely name the underlying concern.
A good seam has a few properties:
Explicit contracts. Both sides agree on what’s being exchanged. Not just the types, but the semantics. Is this timestamp UTC or local? Is this string already escaped for HTML? Is this pointer owned or borrowed? The more explicit the contract, the fewer bugs can hide in the gap.
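One way to make such a contract explicit is to put the semantics into the type itself, so the gap has nowhere to hide. A minimal sketch, with an illustrative type name not taken from the essay:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class UtcTimestamp:
    """A timestamp whose semantics live in the type: always aware, always UTC."""
    value: datetime

    def __post_init__(self) -> None:
        # A naive datetime returns None from utcoffset(), so it fails this
        # check too - both "no timezone" and "wrong timezone" are rejected.
        if self.value.utcoffset() != timedelta(0):
            raise ValueError("UtcTimestamp requires a timezone-aware UTC datetime")
```

A function that accepts a `UtcTimestamp` instead of a bare `datetime` has turned the "is this UTC or local?" question from a comment into a compile-and-construct-time guarantee.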
Validation at the boundary. Don’t trust incoming data just because it comes from a component you also wrote. Validate at the seam. This feels redundant – why would your own code send bad data? – but systems evolve. What’s true today about the caller might not be true after next month’s refactor. Boundary validation is insurance against your own future changes.
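Boundary validation can be as simple as one function at the seam that refuses to let malformed data past it. A sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    order_id: str
    quantity: int

def accept_order(raw: dict) -> Order:
    """Seam entry point: validate here, so everything downstream can trust the data."""
    order_id = raw.get("order_id")
    if not isinstance(order_id, str) or not order_id:
        raise ValueError("order_id must be a non-empty string")
    quantity = raw.get("quantity")
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(quantity, int) or isinstance(quantity, bool) or quantity <= 0:
        raise ValueError("quantity must be a positive integer")
    return Order(order_id=order_id, quantity=quantity)
```

The redundancy is the point: even if today's only caller always sends good data, this function is the insurance policy against next month's refactor of that caller.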
Clear error propagation. When something goes wrong at a seam, the error needs to travel in a useful direction. A library that swallows exceptions and returns a default value is hiding information at the worst possible moment. A service that returns a generic 500 error when the real problem is a malformed request is doing the same thing. Good seams make failures visible and attributable.
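The contrast between swallowing and propagating can be shown side by side. A sketch, assuming a hypothetical configuration-loading seam:

```python
class ConfigError(Exception):
    """Carries enough context to attribute the failure to the seam."""

def load_port_swallowing(settings: dict) -> int:
    # Anti-pattern: the default silently hides both "missing" and "malformed".
    try:
        return int(settings["port"])
    except (KeyError, ValueError):
        return 8080

def load_port_attributable(settings: dict) -> int:
    # Good seam: the error names the field, the value, and the expectation.
    raw = settings.get("port")
    if raw is None:
        raise ConfigError("missing required setting 'port'")
    try:
        return int(raw)
    except ValueError:
        raise ConfigError(f"setting 'port' must be an integer, got {raw!r}") from None
```

The first version runs happily on a malformed config file and fails hours later, somewhere far from the cause. The second fails at the seam, with a message that points at it.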
Minimal surface area. The smaller the seam, the fewer places assumptions can clash. A function that takes two arguments is a simpler seam than one that takes twelve. A message format with three fields is a simpler seam than one with thirty. Every additional element in the interface is another place where the two sides can disagree.
The Library Boundary
One of the most common seams is the boundary between your code and a third-party library. You didn’t write it, you don’t fully understand its internals, and its authors don’t know your use case.
The temptation is to use the library’s types and conventions throughout your codebase. Call its functions directly from your business logic. Let its data structures propagate through your layers. This feels efficient – why wrap something that already works?
The answer becomes obvious the first time you need to swap the library, or when it updates and changes its API, or when its error handling model doesn’t match yours and you’re catching exceptions in fourteen places instead of one.
A thin wrapper at the library boundary – a seam you design rather than one you inherit – gives you a single place to translate between the library’s world and yours. It’s boring code. Write it anyway.
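The shape of such a wrapper can be sketched as follows. The vendor client here is a self-contained stand-in for a real third-party library, and every name in it is hypothetical:

```python
from dataclasses import dataclass

# --- Stand-in for a third-party library (hypothetical names). ---
class VendorTimeout(Exception):
    pass

class VendorClient:
    def fetch(self, path: str) -> dict:
        if path == "/profiles/42":
            return {"usr_nm": "ada", "usr_id": 42}
        raise VendorTimeout(path)

# --- Our side of the seam: our types, our errors. ---
@dataclass(frozen=True)
class Profile:
    user_id: int
    name: str

class ProfileUnavailable(Exception):
    pass

class ProfileGateway:
    """The one place in the codebase that knows VendorClient's types and errors."""
    def __init__(self, client: VendorClient):
        self._client = client

    def get_profile(self, user_id: int) -> Profile:
        try:
            raw = self._client.fetch(f"/profiles/{user_id}")
        except VendorTimeout as exc:
            # Translate the vendor's failure into our error model, once.
            raise ProfileUnavailable(user_id) from exc
        return Profile(user_id=raw["usr_id"], name=raw["usr_nm"])
```

Business logic depends on `Profile` and `ProfileUnavailable`, never on the vendor's names. When the library changes or gets swapped, only the gateway does.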
The Process Boundary
Seams aren’t just in code. They exist between processes, between teams, between organizations.
The handoff between “development” and “deployment” is a seam. The handoff between “your service” and “their service” is a seam. What happens when their service is slow? When it’s down? When it starts returning data in a slightly different format because they shipped a new version without telling you?
These process-level seams have all the same properties as code-level seams. They cluster bugs. They hide assumption gaps. They benefit from explicit contracts. And they’re often designed with even less rigor, because they span organizational boundaries where no single person has full visibility.
The Testing Implication
If seams are where bugs live, then seams are where tests should concentrate.
Unit tests verify that components work correctly in isolation. Integration tests verify that components work correctly together – which means they’re testing the seams. The common complaint that integration tests are harder to write and more fragile is exactly right, and it’s exactly the point: the seams are harder to test because multiple sets of assumptions are in play simultaneously.
The most effective testing strategy I’ve seen doesn’t try to achieve uniform coverage. It tests the internals enough to catch regressions, then concentrates serious effort at the boundaries. Every public API gets input validation tests. Every external service interaction gets failure-mode tests. Every data format boundary gets round-trip tests.
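A round-trip test at a data-format boundary is the simplest of these. A sketch, with an illustrative message type:

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    attempts: int

def encode(event: Event) -> str:
    return json.dumps({"name": event.name, "attempts": event.attempts})

def decode(wire: str) -> Event:
    raw = json.loads(wire)
    return Event(name=raw["name"], attempts=raw["attempts"])

def test_round_trip() -> None:
    # The property under test is the seam itself: encode and decode must
    # agree on the format, not merely each pass their own unit tests.
    original = Event(name="retry", attempts=3)
    assert decode(encode(original)) == original
```

The test exercises both sides of the seam in one assertion, so a change to either side that breaks the shared format fails immediately.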
It’s remarkable how often test suites grow large with internal unit tests while the seams – the places where things actually break – remain barely covered.
Architecture Is Seam Design
Here’s the thesis: good architecture is, at its core, the design of good seams.
The modules matter, of course. You need clean implementations, clear responsibilities, well-chosen data structures. But two mediocre modules with a well-designed boundary between them will cause fewer problems than two brilliant modules with a sloppy one.
The seam is where your assumptions end and someone else’s begin. It’s where your control ends and uncertainty starts. Designing that boundary well – making the contract explicit, the surface area small, the errors visible, the validation thorough – is the most defensive and the most productive thing you can do.
Build the modules. But design the seams.