Generic tools have gotten remarkably good. An off-the-shelf language model can read a document, summarize it, answer questions about it, extract structured data from it — things that would have required months of custom development just a few years ago.

And yet professionals in specialized fields keep hitting the same wall. The generic tool does 80% of the job and falls apart on the rest. Not because the underlying model is bad, but because the last 20% requires knowing things the model doesn’t know.

This is the price of specificity — and it’s also where the pricing power lives.

What Generic Gets Right (and Wrong)

A generic summarization tool can tell you what a document says. It can identify the key topics, extract the main points, answer straightforward questions about the content.

What it can’t do is tell you what matters given a particular professional context. It doesn’t know that in a certain type of transaction, one clause in an appendix matters more than the entire executive summary. It doesn’t know that a particular number is a red flag only when it appears alongside a certain other number. It doesn’t know what the industry considers normal versus unusual.

This isn’t a failure of the underlying model. It’s a knowledge problem. The generic tool was trained to be useful across all contexts, which means it’s excellent at the universal and blind to the specific.

The professional who knows the specific things can compensate manually — reading the document with expert eyes, knowing what to look for, flagging the things that matter. But that’s exactly what they were hoping the tool would do. The generic version pushes the hardest part back to them.

Specificity as Knowledge Encoding

Building a vertical tool is, in one sense, the act of encoding domain knowledge into a system that otherwise doesn’t have it.

This knowledge comes in several forms. There’s structural knowledge — understanding how documents in a given field are organized, what sections exist and what they mean, where the important information tends to live. There’s evaluative knowledge — what values, ratios, and terms are normal versus concerning in a given context. There’s relational knowledge — how pieces of information interact, when one finding changes the significance of another.

None of this is generic. It’s accumulated through years of working in a specific field, seeing patterns across many transactions, building calibration for what normal looks like. Encoding it into a tool doesn’t make the tool generic — it makes it specific in exactly the way the professional needs.

That specificity is hard to replicate without the domain expertise. It’s also hard to replicate without the deliberate effort to encode it. A general-purpose tool won’t acquire it automatically.
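One way to picture this encoding is as declarative rules applied to a structured extraction of a document. The sketch below is purely illustrative, and every specific in it is hypothetical: the field names (`termination_fee_pct`, `exclusivity_months`), the thresholds, and the `Finding` type are invented for the example. A real vertical tool would source its norms from practitioners in the field. The second rule shows the relational case described above, where a value is only a red flag alongside another value.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    field: str
    message: str

# Evaluative knowledge: what counts as concerning for a single value.
def check_termination_fee(doc: dict) -> list[Finding]:
    fee = doc.get("termination_fee_pct")
    if fee is not None and fee > 3.0:  # hypothetical industry norm
        return [Finding("termination_fee_pct",
                        f"Termination fee {fee}% exceeds the typical range")]
    return []

# Relational knowledge: a value that is only a red flag alongside another.
def check_fee_with_exclusivity(doc: dict) -> list[Finding]:
    fee = doc.get("termination_fee_pct")
    exclusive = doc.get("exclusivity_months")
    if fee and exclusive and fee > 1.5 and exclusive > 12:
        return [Finding("termination_fee_pct",
                        "A moderate fee becomes concerning when paired "
                        "with a long exclusivity period")]
    return []

# The rule list is the encoded domain knowledge; it grows as experts
# contribute more of what they know.
RULES: list[Callable[[dict], list[Finding]]] = [
    check_termination_fee,
    check_fee_with_exclusivity,
]

def review(doc: dict) -> list[Finding]:
    """Apply every encoded rule to a structured extraction of a document."""
    return [f for rule in RULES for f in rule(doc)]

# A fee of 2.0% passes the single-value check but trips the relational one.
findings = review({"termination_fee_pct": 2.0, "exclusivity_months": 24})
```

The point of the shape, not the details: the generic model handles extraction, while the rule list holds what the field considers normal versus unusual, which is exactly the part a generic tool cannot supply.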

Why Vertical Pricing Is Higher

The pricing differential between generic and vertical tools isn’t arbitrary. It reflects a real difference in the value delivered.

A generic tool saves time on tasks the professional was already comfortable delegating — broad summarization, initial extraction, obvious pattern detection. The time saved is real, but the ceiling is moderate because the expert work still happens manually.

A vertical tool can take work out of the expert’s hands entirely. The domain knowledge is in the system, not in the human’s head being applied after the fact. The output isn’t “here’s the raw material for your expert judgment” — it’s “here’s the expert judgment, ready for your review.”

The jump in value is not incremental. It’s categorical. Which is why the pricing doesn’t just step up slightly — it steps up to match the value of the work actually displaced.

The Position This Creates

A vertical tool that genuinely encodes domain knowledge creates a position that’s hard to attack from the generic side.

The generic player could try to add vertical features, but they’re building without the domain expertise that produced those features in the first place. They’re likely to add surface-level specificity — the terminology, the templates — without the deeper evaluative knowledge that makes the tool actually useful.

The vertical player, on the other hand, has the knowledge and can keep deepening it. Each edge case encountered in production is a chance to improve the encoding. Each piece of feedback from domain experts is a refinement that widens the gap.

Generic gets broader over time. Vertical gets deeper. They’re optimizing for different things, which is why they don’t inevitably converge — and why there’s usually room for a vertical player even when a well-funded generic one exists.

The last 20% is hard. It’s also where the margin is.