Using AI as a quiet assistant in QA, not the editor‑in‑chief (2024)

Background

Teams were curious about LLMs but wary of hidden changes or confident‑sounding guesses. The request was simple: get value where it’s safe—spot patterns, surface likely issues—while keeping translators and reviewers in charge.

What I built

A helper that flagged likely issues based on patterns it noticed (inconsistent units, term drift, probable placeholder mishandling) and presented them as suggestions, never edits. Deterministic scripts still enforced structure; people made the final call.
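A minimal sketch of the suggestion-only idea, assuming a hypothetical `check_placeholders` rule and `Suggestion` record (names invented here): the check reads source and target, and when placeholders differ it emits a suggestion object rather than touching the text.

```python
import re
from dataclasses import dataclass

@dataclass
class Suggestion:
    segment_id: str
    rule: str
    message: str

# Placeholders like {name} or %s / %d. The check never edits the
# target; it only returns Suggestion records for a reviewer to judge.
PLACEHOLDER = re.compile(r"\{[^}]*\}|%[sd]")

def check_placeholders(segment_id: str, source: str, target: str) -> list[Suggestion]:
    src = sorted(PLACEHOLDER.findall(source))
    tgt = sorted(PLACEHOLDER.findall(target))
    if src != tgt:
        return [Suggestion(segment_id, "placeholder-mismatch",
                           f"source has {src}, target has {tgt}")]
    return []
```

Keeping the output as plain data is what lets deterministic scripts and human review sit downstream unchanged.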

A short trail in the report that showed where each suggestion came from, so reviewers could accept or dismiss it quickly.
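One way the trail could look, as a sketch (the field names and `to_report` helper are assumptions, not the production format): each entry carries the rule that fired and the evidence it matched, so the "why" travels with the suggestion.

```python
import json

# Hypothetical provenance trail: serialize each suggestion together
# with the rule that fired and the text it matched, so the report
# shows not just what was flagged but why.
def to_report(suggestions: list[dict]) -> str:
    return json.dumps(
        [{"segment": s["segment"], "rule": s["rule"], "evidence": s["evidence"]}
         for s in suggestions],
        indent=2)
```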

How we ran it together

Reviewers worked as before, with the assistant nudging risky lines to the top. PMs saw fewer cycles on the mechanical issues that used to slip through.
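The "nudging risky lines to the top" step can be sketched like this, assuming a hypothetical severity table and per-segment list of fired rules (both invented for illustration): the queue order changes, nothing else about the reviewer's workflow does.

```python
# Assumed severity weights per rule; higher means riskier.
SEVERITY = {"placeholder-mismatch": 3, "unit-inconsistency": 2, "term-drift": 1}

def rank_segments(suggestions_by_segment: dict[str, list[str]]) -> list[str]:
    """Return segment ids ordered riskiest-first.

    suggestions_by_segment maps a segment id to the rule names that
    fired on it; segments with no suggestions simply sort last.
    """
    def risk(item):
        _, rules = item
        return -sum(SEVERITY.get(r, 0) for r in rules)
    return [seg for seg, _ in sorted(suggestions_by_segment.items(), key=risk)]
```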

Outcome

Review moved faster on the right content, and teams felt comfortable that AI was there to help, not to rewrite their work or decide for them.

If I were starting this today

I’d expand the hints slowly and keep the explain‑first approach. Trust grows when people can see why a suggestion exists.