6-week learning pilot: corrections were mined from prior extraction runs, and the Co-ordinator proposed dictionary revisions for admin approval.
Case study·CRE advisory
Reusable field dictionary refinement
From one-off prompt fixes to a reviewed dictionary that improves every run
9% correction rate on repeated fields, down from 18% before the pilot
“The system learned our review standards, not just our documents.”
UK CRE advisory firm (transformation/data team)·6 document families in scope across valuation, leasing and advisory mandates·6-week pilot·United Kingdom
01 Pilot envelope
Pilot length: 6 weeks
First signal: 10 days
First ROI: 42 days
Team alongside: 4 seats · 2 colleagues
02 What it owns
Reports to: Head of Transformation, with the data owner and Salesforce admin as approving reviewers on every dictionary change.
Owns
- Canonical field dictionary — a single approved list of fields, definitions and value formats spanning the in-scope document families
- Correction backlog — reviewer mark-ups clustered by field and document family with the supporting examples attached
- Proposed revisions — drafts for admin review, each with the corrections that justified it and the runs it would have changed
- Salesforce mapping — every dictionary field tied to its target Salesforce field, with the admin approving any change to that mapping
- Per-run dictionary version log — every extracted value tagged with the dictionary version that produced it
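The artefacts above imply a small data model: dictionary entries carrying a value format and a Salesforce target, and extracted values tagged with the dictionary version that produced them. A minimal sketch, assuming nothing about the firm's actual schema (all names here are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DictionaryField:
    # One entry in the canonical field dictionary (illustrative names).
    name: str
    definition: str
    value_format: str       # e.g. "date:YYYY-MM-DD", "currency:GBP"
    salesforce_field: str   # target Salesforce field; the admin approves changes

@dataclass
class ExtractedValue:
    field_name: str
    value: str
    dictionary_version: str  # per-run version tag for traceability

def extract_with_version(raw: dict, version: str) -> list:
    """Tag every extracted value with the dictionary version that produced it."""
    return [ExtractedValue(k, v, version) for k, v in raw.items()]

run = extract_with_version({"lease_start": "2024-03-01"}, version="v1.2")
```

With a tag like this on every value, "any extracted value can be traced to the dictionary version that produced it" reduces to a field lookup rather than an investigation.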
Does not do
- Approving its own changes — every revision goes to the data owner and Salesforce admin
- Editing reviewer documents — surfaces corrections and aggregates them; never alters the original mark-up
- Salesforce write-back during the pilot — proposes the mapping; the admin signs off and runs the load
Done looks like
Reviewers see fewer repeat corrections each cycle, the data owner reads a short approval queue instead of investigating one-off prompt edits, and any extracted value can be traced to the dictionary version that produced it.
03 The team
AI teammates: 2
IrisMaintains the canonical field dictionary across document families, drafts proposed revisions and surfaces them for admin approval before any run uses them.


TheoMines reviewer corrections from prior extraction runs, clusters repeat fixes by field and document family and writes the change rationale Iris attaches to each proposal.
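Theo's clustering step can be sketched roughly as a group-by over reviewer corrections; this is a simplification, and the field and family names below are made up for illustration:

```python
from collections import defaultdict

def cluster_corrections(corrections):
    """Group reviewer corrections by (field, document_family),
    keeping the supporting examples attached to each cluster."""
    clusters = defaultdict(list)
    for c in corrections:
        clusters[(c["field"], c["family"])].append(c["example"])
    # Surface the most-repeated fixes first.
    return sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)

corrections = [
    {"field": "lease_start", "family": "leasing", "example": "01/03/24 -> 2024-03-01"},
    {"field": "lease_start", "family": "leasing", "example": "3 Mar 24 -> 2024-03-03"},
    {"field": "yield", "family": "valuation", "example": "5.25 -> 5.25%"},
]
top = cluster_corrections(corrections)
```

Sorting by cluster size is one plausible way to prioritise the correction backlog: the fields reviewers fix most often become the first candidates for a dictionary revision.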


Human team: 4
- Head of Transformation · Transformation
- Data owner · Transformation
- 3 Reviewers · Advisory
- Salesforce administrator · Operations
04 Connected stack
05 What it returned
- 9% correction rate on repeated fields, down from 18% before the pilot
- 34 canonical fields approved, each tied to a value format and a Salesforce target
- 6 document families mapped: lease, valuation and advisory templates in scope
- Day 0 · Co-ordinator session: Head of Transformation, data owner and Salesforce admin agree the in-scope document families and the approval rules for dictionary changes.
- Day 10 · First signal: Theo finishes mining the prior extraction runs; the first cluster of repeat corrections is grouped by field and document family.
- Day 18 · Read-only proposals: Iris drafts the first batch of dictionary revisions; the data owner reviews them in SharePoint without any run picking them up yet.
- Day 28 · First approved revision: the data owner and Salesforce admin approve eleven canonical fields; the next extraction run logs which dictionary version it used.
- Day 42 · ROI review: the repeat-correction rate is measured on the agreed field set; the sponsor signs off and approves a second document family entering scope.
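The Day 42 metric, the repeat-correction rate, can be computed as the share of repeated-field extractions that reviewers corrected. A minimal sketch, assuming a simple run log (the data shapes are hypothetical, not the pilot's actual instrumentation):

```python
from collections import Counter

def repeat_correction_rate(runs):
    """Share of repeated-field extractions that reviewers corrected.
    Each run is a list of (field, was_corrected) pairs; only fields
    appearing in more than one run count as 'repeated'."""
    seen = Counter(f for run in runs for f, _ in run)
    repeated = {f for f, n in seen.items() if n > 1}
    total = sum(1 for run in runs for f, _ in run if f in repeated)
    fixed = sum(1 for run in runs for f, c in run if f in repeated and c)
    return fixed / total if total else 0.0

runs = [
    [("lease_start", True), ("tenant_name", False)],
    [("lease_start", False)],
]
rate = repeat_correction_rate(runs)
```

Restricting the metric to repeated fields keeps it comparable across cycles: one-off fields that appear in a single run cannot drag the rate up or down.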
06 Related templates
Get started
Want this for your team?
Each design-partner pilot starts the same way: one workflow, the minimum useful context, and a first ROI signal measured in days.
