The Screening-Writing Gap
When you research an industry niche and find an “AI tool,” the first question is: what does it actually do?
In regulated document industries, there’s a consistent pattern. The tools that exist tend to fall into one category: screening and data retrieval. They aggregate databases, run queries, identify flags, and surface relevant records. What they don’t do is write the document. The actual report — the deliverable that a client pays for, the thing a professional spends hours producing — still comes out of a human’s head.
This is the screening-writing gap.
What Screening Tools Do
A screening tool for a technical document type might:
- Query federal and state regulatory databases for a given property or location
- Flag records that meet certain criteria (contamination history, permit violations, hazmat incidents)
- Generate a summary of what the databases show
- Structure findings according to the relevant standard’s categories
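The workflow in the list above can be sketched in a few lines. This is a toy illustration, not any real tool's API: the record fields, database names, and severity scale are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Record:
    source: str    # hypothetical database name, e.g. a federal or state registry
    site: str      # property or location the record is attached to
    category: str  # e.g. "contamination", "permit", "hazmat"
    severity: int  # invented scale: 0 = informational, higher = more serious

def screen(records, site, min_severity=1):
    """Flag records for a site that meet the criteria, grouped by category.

    This is the retrieval phase only: it surfaces and structures records,
    it does not interpret them or draw conclusions.
    """
    flagged = {}
    for r in records:
        if r.site == site and r.severity >= min_severity:
            flagged.setdefault(r.category, []).append(r)
    return flagged

records = [
    Record("FED_DB", "123 Main St", "contamination", 2),
    Record("STATE_DB", "123 Main St", "permit", 0),
    Record("STATE_DB", "456 Oak Ave", "hazmat", 3),
]
result = screen(records, "123 Main St")
```

Note what the output is: a structured set of flagged records, grouped by the standard's categories. Everything after that point is left to the professional.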
This is genuinely useful. It compresses hours of database lookups into minutes. It reduces the chance of missing a record. It gives the professional a foundation to work from.
But it does not produce the document. The professional still has to write the analysis, draw conclusions, make recommendations, apply professional judgment, and produce the narrative that constitutes the actual deliverable. The screening tool handles the research phase; the writing phase is untouched.
Why The Writing Layer Is Still Empty
Writing is harder to automate than retrieval for several reasons.
Retrieval has a ground truth: either the record exists in the database or it doesn’t. You can verify the output. Writing involves judgment: how serious is this finding? What does it mean for this specific site, given these specific conditions? Is this a recognized environmental condition or a historical anomaly? These questions require professional knowledge that’s hard to encode.
Writing also creates liability. If a report says a property is clean and it isn’t, someone is responsible. Professionals are understandably cautious about automating the layer where their professional license is on the line. Screening tools stay on the safe side of that line: they surface information; they don’t interpret it.
And writing is where the domain specificity lives. Every industry has its own vocabulary, its own standard sections, its own boilerplate language, its own required conclusions. A generic LLM can produce grammatical paragraphs. It can’t produce the specific language an experienced practitioner would use in section 7.4 of a Phase I ESA for a property with a historic dry cleaning operation nearby — not without significant domain-specific training and structure.
The Opportunity
The screening-writing gap recurs across the regulated document categories examined here. In each case, the data layer has at least some tooling. The writing layer is either completely empty or served only by generic tools that professionals don’t actually trust.
This gap is real and stubborn precisely because it’s the hard part. Retrieval tools got built first because retrieval is tractable. Writing tools require solving the trust problem: the professional has to be comfortable putting their name on the output.
The path to solving the trust problem isn’t full automation. It’s augmentation — a tool that handles the structure, the boilerplate, the standard sections, and lets the professional focus on the judgment calls that actually require their expertise. A first draft that’s 70% right is dramatically more valuable than a blank page.
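The augmentation split described above can be made concrete with a toy sketch: the tool fills in structure and boilerplate, and leaves explicit placeholders wherever professional judgment is required. The section names and wording here are invented for illustration, not drawn from any actual standard.

```python
# Hypothetical report skeleton: boilerplate sections the tool can fill,
# plus sections that are deliberately left as judgment placeholders.
BOILERPLATE = {
    "scope": "This assessment was performed in conformance with the relevant standard.",
    "findings": "The following records were identified during the database review: {records}.",
    "conclusions": "[PROFESSIONAL JUDGMENT REQUIRED: interpret the findings above]",
}

def draft_report(records):
    """Produce a first draft: boilerplate filled in, judgment calls flagged."""
    sections = {}
    for name, text in BOILERPLATE.items():
        if "{records}" in text:
            sections[name] = text.format(records="; ".join(records))
        else:
            sections[name] = text
    return sections

draft = draft_report([
    "LUST listing (adjacent parcel)",
    "historic dry cleaner (0.2 mi)",
])
```

The point of the sketch is the division of labor: the draft is mostly complete, but the conclusions section refuses to pretend it can interpret the findings. That refusal is what makes the output something a professional could plausibly trust as a starting point.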
That’s the gap worth filling. Not retrieval — that’s largely done. The writing layer.