AEO Agency Onboarding Checklist: The 14-Day Client Onboarding Flow That Sets Up Citation Wins

You signed a new AEO retainer last Thursday. Kickoff is Monday, and you have fourteen working days before the client’s CMO expects something that looks like a baseline document. The renewal at month three is not decided at month three. It is decided in the next two weeks.

Buyer shortlists lock on Day One. Per the 6sense 2025 B2B Buyer Experience Report, B2B buyers ultimately purchase from a Day-One shortlist vendor 85-95% of the time, and over 80% initiate outreach themselves. Your client’s prospects are building those shortlists right now. This onboarding period is the only window to influence both the mental model the client builds about your agency and the model their buyers build about them.

The timeline math is in your favour. Discovered Labs puts first AI citations at 2-6 weeks for properly structured content, and HubSpot’s AEO case study reports a 1,850% lift in qualified leads from AI sources. AEO is a 30-day measurable bet — which is why agencies that drag onboarding into month two lose the renewal before it begins.

What follows: the day-by-day flow, the tools you provision, the five mistakes that quietly kill retainers between Day 1 and Day 90, and the exact contents of the Day 14 report. Every deliverable is a real file you hand the client.

Start your 14-day free trial

Ship the Day 14 baseline without stacking three tools

Growth plan free for 14 days. Five AI engines. Full agency dashboard.

Start free trial

Three numbers that should sit at the top of every onboarding plan

Internalise these before drafting the kickoff agenda. They turn onboarding from a charm offensive into a forensic exercise.

  • 95% of B2B deals close with a Day-One shortlist vendor (6sense 2025)
  • 14-30 days to first measurable AEO lift on properly structured content (Discovered Labs)
  • ~38% annual SEO retainer churn, with the cliff between months 3-6 (First Page Sage, Focus Digital)

Read them as one sentence. The buyer’s shortlist locks fast, the intervention shows lift inside the first month, and almost four in ten retainers churn each year with the cliff sitting exactly where Day-One promises run out of road. Every day you delay the baseline is a day the client builds a story about your agency from silence.

Conductor adds the demand-side number: 98% of CMOs now prioritise AEO and 94% are increasing AEO spend this year. Your client’s CMO is being asked by their board what AEO is producing; the document they reach for is whatever you handed them on Day 14. If that document does not split results by engine, the renewal will not go well.

WATCH FOR — the kickoff-deck trap

The Day-14 baseline is not a status update and it is not your kickoff deck. It is the document the CMO forwards internally to justify keeping the retainer at month three. Write it for that audience — the executive who was not on the kickoff call.

The 14-day flow, grouped into four phases

Each phase carries a research-backed reason to exist and a deliverable file. Saved in a shared repo, they become the audit trail you reference at every quarterly review.

Phase one · Days 1-3

Scope capture and the five-engine baseline

The first three days are not the kickoff call — they are the kickoff call plus the data collection that lets every later phase be falsifiable. Skimp here and the rest of the flow inherits the gap.

  • Day 1 — Discovery call. 60-90 minutes plus an async questionnaire. Capture brand variants (legal name, DBA, common misspellings, parent company), 10-30 target queries, 3-5 named competitors, 2-3 buyer personas, voice rules, and compliance constraints (HIPAA, FINRA, GDPR). Deliverable: client_scope.md in the project repo. Per Conductor, 98% of CMOs prioritise AEO — your scope doc gets re-shared internally, so if it reads thin, you look thin.
  • Day 2 — Query selection finalised. Pick the 10 highest-intent commercial queries where (a) at least one named competitor is currently cited in at least two of the five engines, and (b) the client has an existing landing page ranking in Google’s top 30 for the same query. Those two filters give you queries where citation lift is structurally possible within 30 days.
  • Day 3 — Five-engine baseline run. Push the 10-30 queries through ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews. Record cited / not-cited per engine per query. Do not average. Per Profound’s 27M-citation analysis, owned content is just 4.3% of citations on category prompts — engines disagree on which earned sources matter, which is exactly the gap you need to surface. Deliverable: baseline_engines.csv with query, engine, cited (Y/N), competitor cited, and source URL.
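The per-engine tallies for baseline_engines.csv can be computed in a few lines. A minimal sketch, assuming the CSV uses header names matching the deliverable spec above ("engine" and "cited" are hypothetical exact header strings; adjust to your export):

```python
import csv
from collections import defaultdict

def per_engine_rates(path):
    """Tally cited / total per engine from baseline_engines.csv.

    Assumes header columns named "engine" and "cited" (Y/N) --
    hypothetical names; match them to the actual file.
    """
    cited, total = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total[row["engine"]] += 1
            if row["cited"].strip().upper() == "Y":
                cited[row["engine"]] += 1
    # One result row per engine -- never a single averaged number.
    return {engine: (cited[engine], total[engine]) for engine in total}
```

Feeding the Day 3 CSV through this yields the section-1 rows of the Day 14 report (for example, ChatGPT 2/30) without hand-counting.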

Phase two · Days 4-7

Authority, earned-mentions, and the Reddit footprint

The five-engine matrix tells you what the engines see. The authority and community work tells you why. By Day 7 you should know whether the client’s problem is structural (low domain authority, missing earned mentions) or distributional (no Reddit or industry-forum presence on the queries that matter).

  • Days 4-5 — Authority baseline. Pull referring domains from Ahrefs or Semrush. Pull branded mentions on the open web. Per ZipTie, domain authority outweighs schema by roughly 3.5:1 in the citation models. Per RivalHound and Ahrefs data via Search Engine Journal, brand mentions correlate 0.664 with AI visibility versus 0.218 for backlinks. Deliverable: authority_baseline.md with referring domains, branded mentions, and the gap to each tracked competitor.
  • Day 6 — Competitor citation gap. For every query in the matrix, log which competitor is cited where the client is not. This becomes the “top 3 missed queries” section of the Day 14 report — the section the CMO will scan first.
  • Day 7 — Reddit and community footprint. Search Reddit, Quora, Stack Exchange, and GitHub Discussions for the brand and the top three competitors. Per Discovered Labs, Reddit drives 46.7% of Perplexity’s top-10 citations. Per Semrush’s 248K-post study, 80% of cited Reddit posts have fewer than 20 upvotes — structure beats virality. Deliverable: reddit_footprint.md mapping where the brand appears, where competitors appear and the brand does not, and five candidate threads to engage authentically.

Phase three · Days 8-11

Content audit and the GA4 attribution layer

This is the phase that determines whether the citations your team wins in weeks 3-8 actually show up in the client’s GA4. Skip the attribution work and your wins land in Direct or (not set), the client credits them to brand marketing, and the renewal conversation goes sideways.

  • Days 8-9 — Content structure audit. Audit the top 10 organic landing pages for chunk length, heading hierarchy, FAQ schema, inline citations, and attribute-rich Product or Review schema. Per Frase, FAQ schema is 3.2× more likely to surface in AI Overviews. Per Am I Cited, 100-150 word chunks earn the highest citation rate. Per Frase’s coverage of inline-citation research, content that names sources inline shows up to 40% higher citation frequency. Deliverable: content_audit.csv with one row per page and a 0-3 score per dimension.
  • Days 10-11 — GA4 attribution layer. Build a custom AI Referrals channel group capturing chatgpt.com, perplexity.ai, gemini.google.com, claude.ai, copilot.microsoft.com, you.com, phind.com, plus Atlas, Comet, and Brave AI-browser referrers. Add a Shadow AI exploration segment for Source = (direct) + deep-content landing pages + above-average time on page. Per Coalition Technologies, only 0.5% of ChatGPT-driven traffic shows as organic by default. Yotpo’s tracking guide is the cleanest playbook for the regex strings. Deliverable: ga4_ai_channel_setup.md with regex strings, screenshots, and the saved Exploration query.
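The channel-group condition reduces to one referrer pattern. A minimal sketch built only from the hostnames listed above; the Atlas, Comet, and Brave AI-browser referrers are omitted here because their hostnames vary by build, so verify the production pattern against Yotpo's guide before saving it in GA4:

```python
import re

# Hostnames from the Day 10-11 list. This alternation is a sketch,
# not the exact regex from Yotpo's guide -- verify before shipping.
AI_REFERRER_RE = re.compile(
    r"(^|\.)(chatgpt\.com|perplexity\.ai|gemini\.google\.com|"
    r"claude\.ai|copilot\.microsoft\.com|you\.com|phind\.com)$"
)

def is_ai_referral(hostname: str) -> bool:
    """True when a session referrer hostname belongs to a tracked AI engine."""
    return bool(AI_REFERRER_RE.search(hostname.lower()))
```

In GA4 itself this becomes a channel-group condition matching Source against the same alternation; the script is only for sanity-checking the pattern against real referrer hostnames before you save it.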

Phase four · Days 12-14

Quick-win restructure and the Day 14 baseline report

The closing phase exists for one reason: you need a tangible artifact and a measurable target before the kickoff month ends. One restructured page beats ten superficial edits, and the report is the only document the CMO will remember six weeks from now.

  • Days 12-13 — One quick-win restructure. Pick the lowest-scoring page from the Day 8-9 audit that already gets some Google traffic. Restructure to 100-150 word chunks, attribute-rich schema (Product or Review where applicable, FAQPage where the content fits), inline citations, and answer-first sentences under each H2. Per Ahrefs, 44.2% of citations come from the first 30% of content. Deliverable: quickwin_page_v2.md with before/after screenshots, schema diff, and chunk-length analysis.
  • Day 14 — Client baseline report. PDF plus a walkthrough call. Five sections: per-engine baseline citation rate, top three missed queries, 30-day measurable target per engine, competitive context, and the GA4 attribution diff. Per Seer Interactive, AIO-cited brands earn roughly 120% more organic clicks per impression on the same query — that is the headline you frame the renewal around. Deliverable: day14_baseline_report.pdf.

Treat each deliverable as a versioned artifact in a shared client folder. When the renewal call comes at month three, the conversation is grounded in eight artifacts the CMO has already forwarded internally — not in your slide deck.

The tooling map: what you actually provision in onboarding

Three tiers. Tier 1 is mandatory. Tier 2 is benchmarking depth you provision when the retainer can pay for it. Tier 3 is the stack the agency already owns and needs to repurpose.

Tier 1 — Citation tracking (mandatory)

You need one tool that tracks all five engines daily and produces a white-label PDF for the Day 14 report. GenPicked Growth at $197/mo covers ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews in one dashboard, runs the ACS formula across them, and ships a white-label PDF on the Growth tier. Otterly’s Lite plan at $29/mo for 15 prompts is the cheap fall-back if you are running a single-client pilot before committing to a Growth seat. Pick one and ship the Day 14 baseline rather than evaluating three in parallel.

Tier 2 — Benchmarking depth (when retainer > $2K/mo)

| Tool | Entry tier | Engines / scope |
| --- | --- | --- |
| Profound | $99/mo Starter; $399/mo Growth; $2K-5K+/mo Enterprise | ChatGPT only at Starter; sales-led; 8-category citation taxonomy |
| Peec AI | $95 / $245 / $495 per month | 3 models base; Claude / Gemini / DeepSeek / Grok as add-ons |
| Scrunch AI | ~$100/mo entry; ~$300/mo Growth | ChatGPT only at entry; 8 models at Growth; AXP reformatting |
| AthenaHQ | $295 / $545 / $2K+ per month | 8 LLMs; autonomous-agent stack; credit-based |

Pricing for Profound, Peec, Scrunch, and AthenaHQ is sourced from the Trakkr review of Profound, the Peec AI pricing page, and the AthenaHQ vs Scrunch comparison. Sales-gated tiers move — verify before you quote a client. The provisioning rule of thumb: GenPicked Growth plus the Ahrefs or Semrush seat the agency already owns gets you to a defensible 14-day baseline. Profound, Peec, and AthenaHQ get added in month two when the retainer is paying for benchmarking depth that is not required to ship the Day 14 report.

Tier 3 — The stack you already own

Ahrefs or Semrush gives you the referring-domains and brand-mentions report you need on Days 4-5. The fix is not buying more tools; it is wiring the report you already pull into the AEO baseline rather than running it in parallel. Most agencies discover during onboarding that they have been paying for the brand-mentions report and never opening it. Days 4-5 is when that ends.

Start your 14-day free trial

Run the five-engine baseline from a single dashboard

Growth plan free for 14 days. Five AI engines. Full agency dashboard.

Start free trial

Five onboarding mistakes that quietly kill the retainer

These are the patterns we see surface in month-three renewal conversations. Each one is preventable inside the 14-day window if you treat the onboarding as a forensic exercise rather than a charm offensive.

  • Single-engine measurement. Averaging across engines hides the engine-specific delta where the client has the biggest opportunity. ChatGPT favours Wikipedia (~48% of citations), Perplexity favours Reddit (46.7% of top-10), Claude is precision-driven and authority-weighted. Per Discovered Labs, a single “AI visibility score” blurs the picture and kills the renewal at month three when the client asks why Claude is not citing them and you cannot answer.
  • Selling llms.txt as a Day-1 win. Per SE Ranking’s analysis of nearly 300,000 domains (original study), there is zero correlation between llms.txt presence and AI citation frequency, and the machine-learning models actually got more accurate when llms.txt was removed as a feature. Adding the file is fine future-proofing — framing it as the early win sets the client up to discover the lever does not move.
  • Generic schema. Per Growth Marshal’s 730-citation analysis, generic schema (Article, Organization, BreadcrumbList) earned a 41.6% citation rate, no schema earned 59.8%, and attribute-rich Product or Review schema with populated pricing, ratings, and specifications earned 61.7%. Half-implemented schema underperforms no schema. Populate fully or skip the schema work in onboarding.
  • Conflating Google rank with AI citation. Per Ahrefs’ analysis of 4M AIO URLs across 863,000 keywords, just 38% of AIO-cited pages also rank in Google’s top 10 for the same query — down from 76% seven months earlier. AI Overviews increasingly draw from fan-out queries and out-of-SERP sources. If your baseline only checks “is the client in Google top 10?”, you will declare AEO won when the client is invisible to AIO.
  • Skipping attribution recovery. Without the Day 10-11 GA4 channel-group fix, the citations your team wins in weeks 3-8 land in Direct or (not set). Per Coalition Technologies, AI-browser referrers strip headers and only roughly 0.5% of ChatGPT traffic is classified as organic. Yotpo’s tracking guide is the cleanest single playbook for the regex setup. Skip this and the client sees a Direct-traffic spike at month two, credits it to brand marketing, and concludes AEO is unproven.

WATCH FOR — how the five mistakes compound

Mistakes 1, 2, and 5 cost you the renewal narrative. Mistakes 3 and 4 cost you the actual citation lift. You need to avoid all five. The good news: the 14-day flow above prevents each one by construction.

What the Day 14 report actually contains

Five sections, in order, none of them aspirational. The whole document fits in five to seven pages. If you find yourself drafting a tenth page, you are writing the month-1 plan, not the baseline.

  1. Per-engine baseline citation rate across the 10-30 target queries (for example: ChatGPT 2/30, Perplexity 5/30, Gemini 1/30, Claude 0/30, AI Overviews 3/30). One row per engine, never a single averaged number.
  2. Top three missed queries — highest commercial intent where competitors are cited and the client is not. Name the competitor each time.
  3. 30-day target — explicit, measurable, per-engine. “By Day 44, lift Claude from 0/30 to 3/30, ChatGPT from 2/30 to 5/30, attribute-rich schema live on top three pages, one earned Reddit thread.” If the client cannot copy this sentence into a calendar reminder, you have not written a target.
  4. Competitive context — per Seer Interactive, AIO-cited brands earn roughly 120% more organic clicks per impression on the same query. Frame this as “what we are playing for.”
  5. GA4 attribution diff — what was being missed before the channel-group fix, what is now captured, and what percentage of last month’s “direct” traffic the new segmentation pulls into AI Referrals.
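Section 5's percentage is plain arithmetic. A sketch with hypothetical numbers, assuming you compare last month's Direct sessions under the old grouping against the sessions the new segmentation reclassifies:

```python
def attribution_diff(direct_before: int, reclassified_ai: int) -> float:
    """Share of former 'Direct' sessions the AI Referrals group now captures.

    direct_before: last month's Direct sessions under the old grouping.
    reclassified_ai: of those, sessions the new regex moves to AI Referrals.
    """
    if direct_before == 0:
        return 0.0
    return round(100 * reclassified_ai / direct_before, 1)

# Hypothetical example: 4,200 Direct sessions, 310 of which now
# match an AI referrer -- attribution_diff(4200, 310) -> 7.4
```

That one number ("7.4% of last month's Direct traffic was actually AI referrals") is the line the CMO quotes internally, so compute it rather than eyeballing the channel report.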

Do not include a “what we will do next” laundry list. That belongs in the month-1 plan, not the Day-14 baseline. The baseline is the snapshot the CMO will compare against on Day 44, on Day 74, and at every quarterly review for the life of the retainer. Keep it clean.

If you ran the flow above honestly, you now have eight artifacts in a shared folder, one quick-win page restructured, an attribution layer that finally credits AEO traffic correctly, and a per-engine target for the next 30 days. The renewal at month three becomes a comparison conversation against artifacts the CMO already has — which is exactly the conversation you want to be having.

Joseph K. Banda

Co-Founder, GenPicked

Building the AEO platform for marketing agencies. Helping agency owners get their clients cited by ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews — and prove it with data.

Credentials:

Co-Founder, GenPicked, AEO / GEO / AI Visibility platform for agencies, ACS (AEO Citation Score) framework architect

Frequently Asked Questions

How long until the client sees the first measurable AEO lift?

Practitioner timelines from Discovered Labs put first AI citations at 2-6 weeks for properly structured content. Set client expectation at 30 days for the first re-baseline; aim for the first 2-3 net-new citations between days 21-28. Anything earlier is luck; anything after day 45 means the Day 8-9 content audit missed something. HubSpot's published AEO case study reports a 1,850% lift in qualified leads from AI sources over a longer horizon — so the first 30 days are the leading indicator, not the headline.

Which 10 queries should I onboard with on Days 1-2?

Pick the 10 highest-intent commercial queries where (a) at least one named competitor is currently cited in at least 2 of the 5 engines and (b) the client has at least one existing landing page that ranks in Google's top 30 for the same query. Those two filters give you queries where citation lift is structurally possible inside 30 days. Add 10-20 more secondary queries once the baseline is shipped — do not start with 50.

Should I baseline all five engines on Day 1 or stagger them?

All five on Day 3 once scope is locked. Engines behave differently — per Discovered Labs, ChatGPT favours Wikipedia, Perplexity favours Reddit, Claude is precision-driven, and the engine where the client lags hardest is the engine where you have the biggest delta to close. Skip an engine on Day 1 and you discover the gap at month 3 when it is too late to set expectations. Multi-engine baseline is the single onboarding decision that protects the renewal.

Is FAQ schema worth the implementation effort during onboarding?

Yes, but only at full implementation. Per Frase's research, FAQ schema makes pages 3.2× more likely to appear in Google AI Overviews. But per Growth Marshal's 730-citation analysis, generic schema underperforms no schema (41.6% vs 59.8% citation rate). Implement attribute-rich Product, Review, or FAQPage schema fully on the Days 12-13 quick-win page, or skip the schema work entirely. Half-implemented schema is the worst option.

How do I baseline a client with no domain authority?

Do not fight the 3.5:1 schema-vs-DA ratio (ZipTie). Spend the 14 days on (a) 100-150 word chunked content on existing pages, (b) earned-mention prospecting in Reddit and 2-3 industry publications, (c) attribute-rich schema only on commercial pages, and (d) the GA4 attribution-layer fix. Skip technical-SEO-only deliverables for low-DA clients during onboarding — they will not move the needle inside 30 days, and you need a measurable signal by Day 44.

What if the client has compliance constraints (HIPAA, FINRA, GDPR)?

Capture them on Day 1 in client_scope.md and bake them into the content audit on Days 8-9. The 100-150 word chunk pattern is compatible with all three frameworks; the inline-citation pattern is actively helpful for FINRA suitability rules. The mistake to avoid is letting compliance review block week-3 content shipping — get pre-approval of the chunk pattern itself in week 1, not page-by-page in week 3 when it is too late to ship.

Should I include a Reddit engagement plan in the onboarding?

Yes — the Day 7 footprint map — but actual engagement is week-3+ work, not week-1 work. The Day 7 deliverable is reddit_footprint.md (where the brand appears, where competitors appear, where the gap is). Authentic engagement against that map starts after Day 14. Per Semrush's 248K-post study, upvotes do not drive citations — topical clarity does — so plan for value-add comments in active threads, not viral posts.

How do I price the 14-day onboarding inside the retainer?

Two patterns work. (a) Front-load the retainer: charge 1.5× the standard monthly rate for month 1 to cover onboarding labor (audit, GA4 setup, baseline report). (b) Charge a separate flat onboarding fee of $1,500-3,500 plus the standard monthly retainer. Pattern (a) closes faster and keeps the contract structure simple; pattern (b) protects the retainer rate from being benchmarked against the onboarding cost in future renewal conversations.

What does the Day-14 client report actually need to contain?

Five sections: (1) per-engine baseline citation rate per query, (2) competitor benchmark, (3) top 3 missed queries with highest commercial intent, (4) 30-day target with measurable per-engine citation goal, (5) GA4 attribution diff (what was being missed before, what is now captured). Per Conductor's AEO/GEO report, 98% of CMOs prioritise AEO — the report gets re-shared internally, so write it for the CMO, not for the marketing manager on the kickoff call.

Do I need GenPicked, Profound, and Peec all running during onboarding?

No. Tier 1 is one citation-tracking tool covering five engines — GenPicked Growth at $197/mo or Otterly Lite at $29/mo as the cheap fall-back. Tier 2 (Profound, Peec, Scrunch, AthenaHQ) is benchmarking depth you provision when the retainer can pay for it — typically >$2K/mo client retainers. Stacking tools during onboarding burns budget on overlapping coverage. Pick one Tier 1 tool, ship the Day 14 baseline, then add benchmarking depth in month 2.

Get Your Brand's AEO Score

See how your brand is performing in AI search with our free AEO audit.

Start Your Free Audit
#aeo #agency-ops #onboarding #client-onboarding #retainer #14-day-flow #five-engine-baseline #ga4-attribution