For most enterprise CDP teams, the first draft sets the tone for everything that follows. When it arrives in a clear and coherent form, the review tends to progress with a good measure of control. But when it arrives in fragments, which it often does, timelines tighten and pressure increases. For teams using EA, an AI agentic workflow platform for CDP and sustainability disclosures, that first version tends to arrive far more structured from the outset.

Until there’s a credible V1 in front of everyone, review can’t properly get going:

  • Legal needs to see the actual wording before forming a view on defensibility.
  • Risk looks for how a statement sits within the wider narrative, not as an isolated line item.
  • Finance will often read with an eye on how particular phrasing might be interpreted externally.
  • Sustainability is left trying to ensure that what appears in one section doesn’t contradict something said elsewhere.

When that baseline doesn’t exist, each function ends up responding to partial information. What should be a structured review instead becomes a series of reactions to fragments of input.

In many organisations, the assembly phase takes longer than anyone would like. This isn’t because the organisation lacks clarity about its stance, but because consolidating previous responses, updated data points and revised narrative into a single structured draft remains more manual than it needs to be.

“In many organisations, the assembly phase takes longer than anyone would like.”

Within voluntary disclosure teams, this drafting burden often sits alongside other disclosure responsibilities, from regulatory reporting to ratings questionnaires. Across these workstreams, the underlying workflow pain is consistent, even if the reporting lines and external visibility differ.

The concern, understandably, is that accelerating this phase could compromise quality. In governance-heavy environments, it’s fair to say speed can feel like exposure. But that’s an assumption that deserves closer examination.


The Real Source of First-Draft Friction

In large CDP submissions, delays in the first draft rarely stem from uncertainty about substance. Teams know broadly what they intend to communicate. Instead, the friction lies in assembling that knowledge into a structured, defensible response — which is where teams using EA tend to see the most immediate shift.

In practice, the raw material for a CDP response rarely sits in one place:

  • Last year’s answers tend to live as portal exports and policies are filed elsewhere, whilst the underlying metrics sit in reporting systems owned by different teams.
  • Crucial explanatory narrative might be buried in an earlier questionnaire, a board pack, or an internal update.

Combined, this means that by the time drafting begins, much of the effort has already gone into locating and reconciling those pieces. Writing can only start once the retrieval work is done.

“The challenge lies in assembling that knowledge into a structured, defensible response.”

In most organisations, that consolidation ends up happening in Excel, regardless of what systems sit upstream. Even where questionnaires originate in structured portals, they are typically exported so that multiple contributors can work offline. That file becomes the working draft: it is emailed, annotated, approved and eventually copied back.

Rather than attempting to displace that reality, EA accelerates it. A questionnaire can be uploaded, structured into a first draft, exported again for offline collaboration, and re-imported once updated.

Alternatively, answers can be assigned internally within the platform, with approvals captured directly. Both paths are available, but the objective remains the same: to reduce manual reconstruction, not to enforce a new interface.

It’s common to discover multiple versions of last year’s “final” response, saved under slightly different filenames (many readers will be all too familiar with the pain of discovering ‘Excel _V4_final_final’), with no obvious indication of which of those versions was ultimately submitted. Sections drafted in parallel need to be reconciled later, and minor wording differences introduced early in drafting can appear again during review.

These might seem like minor inconveniences individually, but together they add up until the rework feels disproportionate.

And it doesn’t just happen once. The same pattern plays out across hundreds of questions. Even where the underlying position hasn’t materially changed, the act of rebuilding from scattered drafts and parallel edits takes considerable (and valuable) time.

Similar answers get rewritten from scratch because no one is fully confident they’re looking at the right starting point. Add multiple contributors into the mix before there’s a stable draft to work from, and version control becomes a labyrinthine task in its own right.


Compression Through Structured Reuse

Compressing the first draft begins by reducing reconstruction.

When prior responses and supporting materials are organised into a structured internal reference, as they are within EA, drafting shifts from searching to assembling.

Approved responses and supporting evidence become part of an internal knowledge base rather than remaining isolated within a single submission cycle. Subsequent questionnaires can draw from that validated material directly, reducing the need to reconstruct similar explanations each year.

In short: over successive disclosure cycles, approved responses accumulate into an institutional knowledge base, further reducing reconstruction each year.

“Compressing the first draft begins by reducing reconstruction.”

Instead of locating fragments across folders and exports, teams can quickly assess whether existing content still reflects current practice and where genuine updates are required.

Controlled drafting support can then generate a coherent baseline aligned to the current questionnaire. That baseline can be worked on in whichever environment suits the organisation.

Some teams choose to manage assignments and approvals inside the platform, using role-based sign-off and a visible audit trail. Others export the draft into Excel, circulate it through established email loops, and re-upload once approved. The flexibility is inherent and deliberate. Large organisations rarely change processes simply because a new tool suggests they should.

Reaching that baseline earlier shifts the tone of the whole process. Instead of reviewing something that is still moving underneath them, stakeholders across the board are able to work from a stable draft.

Conversations move from “is this even the right starting point?” to “is this the right way to say it?”:

  • Legal can read language in context rather than in isolation.
  • Risk can see how a statement sits within the wider submission.
  • Sustainability is no longer firefighting inconsistencies late in the cycle, but shaping alignment while there’s still time to do it properly.

Linking draft responses back to underlying evidence further supports defensibility. When wording is traceable to validated metrics or approved policies, reviewers naturally focus more on judgement than origin tracking. In this model, compression shortens the path from blank questionnaire to coherent draft. But it doesn’t alter who controls the final disclosure.


Risk Is Reduced When Friction Is Reduced

Whilst it’s reasonable to question whether accelerating drafting could weaken oversight, in practice prolonged drafting often introduces its own risks.

When a coherent draft emerges late in the review cycle, it forces reviewers to work against tight timelines. When multiple versions circulate, confirming which edits have been incorporated becomes a task in and of itself.

All of this means that, as deadlines approach, attention shifts from material judgement to mechanical correction. And that’s a place where no one wants to be.

“Prolonged drafting often introduces its own risks.”

But if some of that mechanical effort is removed earlier, the pressure changes. Reviewers aren’t racing against the clock in a futile battle to reconcile versions, meaning that they can spend time on what actually matters. Legal can sit with the wording and consider how it might be read, while risk can step back and look at exposure in the round.

In short, senior stakeholders are able to engage with the substance of the response rather than chasing edits across competing drafts.

CDP sits in an interesting space. It isn’t statutory in the way regulatory reporting is, but it’s rarely treated lightly. Responses can be publicly visible, scores are compared, and wording can travel further than intended. That means any effort to move faster has to withstand scrutiny. Speed is only useful if the governance still holds.

What doesn’t change is accountability. The same people sign off and the same approval processes apply. Drafting support may help assemble the material more efficiently, but judgement remains where it has always sat. In practice, having a stable draft earlier tends to make those controls stronger, because reviewers are working from something coherent rather than trying to steady it at the last minute.


Enterprise Evidence

In EA enterprise pilots, the impact has been straightforward. By organising prior responses and supporting material properly, teams have been able to produce a complete first draft in noticeably less time than in previous cycles.

Just as importantly, governance hasn’t been diluted: review still follows the same internal process, but it begins from a stable draft rather than from fragments.

Submission-stage automation depends on system access and technical scope. In some environments that integration is possible and further time can be saved; in others, it isn’t.

Even where submission remains manual, shortening the drafting and reconstruction phase has had a tangible effect on overall preparation time.

What’s been consistent across larger organisations is this: reducing the effort required to assemble the draft does not remove oversight. Instead, it simply allows reviewers to focus their time on judgement rather than mechanics.

A Feasible Adjustment, Not a Radical Shift

For teams responding to CDP each year, shortening the first draft doesn’t mean loosening governance: it just means being clear about where judgement actually belongs. Whilst the decisions, the accountability and the final sign-off remain internal, what changes is the amount of manual reconstruction required to get to a reviewable draft in the first place.

“For teams responding to CDP each year, shortening the first draft doesn’t mean loosening governance: it just means being clear about where judgement actually belongs.”

When that distinction is understood, the tension between speed and scrutiny tends to soften. The disclosure doesn’t become lighter. Indeed, if anything, it becomes more controlled because reviewers are engaging with substance rather than stabilising structure.

If you’re considering whether your own cycle could move faster without increasing risk, the most useful step is simply to see how that separation plays out in a live enterprise workflow.