It is Monday morning, QBR week. You open your client's AI Citation Report. There is an overall score, four engine subscores, a query matrix, a gap list, a strength list, a competitor cluster, a sentiment word cloud. You stare at it. You screenshot the cover number, paste it into the deck, and write "your AI visibility went up 3 points this month." Your client reads it and asks, again, what they are paying for.
This is the agency's reading-skills problem, not a tooling problem. Conductor's State of AEO/GEO Report shows 56% of CMOs made significant AEO investments in 2025 and 94% plan to increase that spend in 2026. The dashboards exist. The data exists. What the agency owner needs is a repeatable 30-minute reading method that produces three concrete actions, every month, for every client. Not "your score went up." Instead: "this week we publish X, this week we pitch Y publication, this week we restructure Z page."
This post is that method. The same five-section read your account manager can run on any client, in any vertical, on the GenPicked dashboard at /reports or any AEO tool you happen to be using. Read the report in five passes, in order, and you walk out with the three-bullet action list before your coffee is cold.
The dashboard trap (why most agencies misread the report)
Three reading mistakes show up in nearly every agency call I sit in on this quarter. The first is averaging engines. The single number on the cover hides the entire strategy. Per Loamly's benchmarks, ChatGPT and Gemini agree on cited brands only 19% of the time. An averaged "AI visibility" number throws away 81% of the strategy signal. The second is reading single-day noise. A 41 to 38 day-over-day move is statistical wobble; ACS is built on a 30-day rolling, decay-adjusted window for exactly this reason. The third is treating the gap list as the entire output. Strengths drive the revenue lines. Per Seer Interactive's analysis of 3,119 informational queries across 42 organizations and 25.1M impressions, cited brands earn 35% more organic clicks and 91% more paid clicks. Strengths are not optional.
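The single-day-noise point is easy to make concrete. Below is a minimal sketch of a decay-adjusted 30-day rolling window; the half-life and weighting scheme are illustrative assumptions, not the published ACS internals:

```python
# Illustrative decay-adjusted 30-day rolling window. The half-life is an
# assumption, not the ACS spec; the point is that a single-day 41-to-38
# wobble barely moves the rolling number.
def rolling_score(daily_scores, half_life_days=10):
    """daily_scores: one score per day, newest last; older days decay."""
    window = daily_scores[-30:]
    n = len(window)
    weights = [0.5 ** ((n - 1 - i) / half_life_days) for i in range(n)]
    return sum(w * s for w, s in zip(weights, window)) / sum(weights)
```

With any reasonable decay, one bad day shifts the rolling number by a fraction of a point, which is exactly why a 41-to-38 daily move is wobble, not signal.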
Three facts should sit on every account manager's desk before they open the report: the buyer already uses AI, the Day One shortlist wins 95% of the time, and AI-sourced traffic, when it lands, is qualitatively better. The report exists because this money is moving. Your job is to read the report well enough to direct that movement.
The five-section read
Every AI citation report I have ever opened has the same five sections, regardless of vendor. GenPicked surfaces them at /reports, and the same five-section pattern shows up in Profound, Otterly, Peec, AthenaHQ, and Scrunch. Read them in order. Each section drives exactly one type of action.
Section A (overall ACS) is the only number that goes on the cover slide of the QBR deck. Show the band, show last month's band, color-code the move. Don't open with the integer; the band is the story. Per the GenPicked ACS formula, each per-engine subscore in Section B is mentionRate × 60 + positionScoreAvg × 25 + mentionDensity × 15, capped at 100, and the overall ACS weights the four subscores 0.35/0.25/0.25/0.15 across ChatGPT/Perplexity/Gemini/Claude. Failed engines are dropped and the weights re-normalized across the survivors, so a Gemini outage doesn't crater your score. Surface that transparency to the client.
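The scoring described above fits in a few lines. A sketch, with the caveat that field and engine names are illustrative, not GenPicked's actual API:

```python
# Sketch of the ACS scoring described above. Field and engine names are
# illustrative assumptions, not GenPicked's actual API.
ENGINE_WEIGHTS = {"chatgpt": 0.35, "perplexity": 0.25, "gemini": 0.25, "claude": 0.15}

def engine_subscore(mention_rate, position_score_avg, mention_density):
    # mentionRate x 60 + positionScoreAvg x 25 + mentionDensity x 15, capped at 100
    return min(100.0, mention_rate * 60 + position_score_avg * 25 + mention_density * 15)

def overall_acs(subscores):
    # Failed engines report None: drop them and re-normalize the weights
    # across the survivors, so one outage does not crater the score.
    live = {e: s for e, s in subscores.items() if s is not None}
    total = sum(ENGINE_WEIGHTS[e] for e in live)
    return sum(ENGINE_WEIGHTS[e] * s for e, s in live.items()) / total
```

Re-normalization is the detail worth repeating to clients: a dropped engine redistributes its weight rather than counting as a zero.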
Never average engines. Per Profound's public data, ChatGPT mentions brands in roughly 73.6% of its answers; Claude mentions brands in 97.3%. The GenPicked Research Team (2026) Fitness Wearables study found Oura ranking #1 on GPT-5 (1.91) and Claude 4 (1.74) but only #3 on DeepSeek V3 (1.12). A single brand swings two rank positions across engines in the same week. An average hides the entire strategy.
The alert-to-action mapping (the translation framework)
The GenPicked MonitorDiffEngine emits nine typed alerts between snapshots. Each one carries a severity and a before/after context. Most agencies see "your visibility changed" and panic. The taxonomy gives you nine specific responses for nine specific signals. The table is the entire translation framework.
| Alert | Severity | Client action this week |
|---|---|---|
new_mention | positive | Document for the monthly report. If the query is high-intent, add to the case-study collection. No strategy change. |
lost_mention | critical | Same-week re-publish or re-pitch. Check who took the slot and what earned mention they got. Per RivalHound, brand mentions correlate 0.664 with AI visibility; hunt mentions, not links. |
position_dropped | warning | Cross-check source_changed on the same query. What new domain got cited that pushed your client down? Pitch that domain or its peers. |
position_improved | positive | Document. Add to monthly report wins. No tactical change required. |
sentiment_dropped | warning | Open the Perception Evolution word cloud. This is a PR action, not a content action. Review surge, product issue, news cycle. |
sentiment_improved | positive | Capture the descriptor change for the case study. Tie it to whatever PR or content shipped that month. |
new_competitor | warning | 30-minute competitive-intel session next week. What changed in their content or PR? Is it a one-query flash or a category move? |
competitor_lost | neutral | Note it. Usually a flash, not a trend. Don't celebrate publicly with the client. |
source_changed | warning | AI engines re-weighted the source list. Check the new domain. If you can pitch it, that's the action; if it's Wikipedia or Reddit, the action is participation. |
Match the engine to its dominant source surface. Per Discovered Labs' citation analysis, Wikipedia is 47.9% of ChatGPT's top-10 source citations and Reddit is 46.7% of Perplexity's. Per their Perplexity research, Reddit is cited 40% more often than corporate blogs. A red Perplexity column means "go participate in a Reddit thread," not "rewrite the homepage." A red ChatGPT column means "push for a Wikipedia mention if the brand passes notability." The taxonomy plus the engine-source map gives you a thirty-second decision per cell.
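The table above is mechanical enough to encode. A hypothetical triage sketch — the alert code names mirror the taxonomy, but the structure is an assumption, not the MonitorDiffEngine API:

```python
# Hypothetical triage of the nine alert types into the week's actions.
# Taxonomy from the table above; code names and structure are assumptions.
ALERT_PLAYBOOK = {
    "new_mention":        ("positive", "Document for the monthly report"),
    "lost_mention":       ("critical", "Same-week re-publish or re-pitch"),
    "position_dropped":   ("warning",  "Cross-check source_changed; pitch the new domain"),
    "position_improved":  ("positive", "Document as a win"),
    "sentiment_dropped":  ("warning",  "PR action: open the Perception Evolution word cloud"),
    "sentiment_improved": ("positive", "Capture the descriptor change for the case study"),
    "new_competitor":     ("warning",  "Book a 30-minute competitive-intel session"),
    "competitor_lost":    ("neutral",  "Note it; usually a flash"),
    "source_changed":     ("warning",  "Check the new domain; pitch or participate"),
}

SEVERITY_ORDER = {"critical": 0, "warning": 1, "neutral": 2, "positive": 3}

def triage(alert_types):
    """Return (alert, severity, action) tuples, most urgent first."""
    tagged = [(a,) + ALERT_PLAYBOOK[a] for a in alert_types]
    return sorted(tagged, key=lambda t: SEVERITY_ORDER[t[1]])
```

Sorting by severity means lost_mention always tops the Monday list, which matches the same-week response the table demands.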
The 30-minute Monday-morning reading routine
Block 30 minutes on Monday of QBR week. Open the dashboard and run the five-section read from the previous section, in order, A through E. Do not skip steps. Do not pause to think about the client meeting yet; you are reading the report, not preparing the meeting.
End-of-routine deliverable: a three-bullet action list. One engine focus. One content piece. One PR or Reddit push. Plus one or two strengths flagged for protection. That is what the client sees in the QBR — not the dashboard.
The client communication template (copy this verbatim)
The email below is the format that, in my experience, stops agency owners from losing retainers. Send it the morning of the QBR: three bullets, evidence in the footer.
Subject: [Client] — this month's three priorities
Eyebrow: Read of your [month] AI Citation Report
1. Engine focus — Perplexity. Your Perplexity subscore is 18 vs ChatGPT 41. Action: we participate in 5 Reddit threads in [vertical] this month. Perplexity pulls 46.7% of its top-10 citations from Reddit.
2. Content piece — "[high-intent query]." Uncited on 4 of 5 engines despite buyer intent. Action: we publish a 100-150 word Q&A chunk with FAQ schema by [date].
3. Earned mention — [publication]. [Competitor] gained a citation here last week, pushing you from position 2 to 5 on [query]. Action: pitch by [date].
Two strengths we are protecting: [query 1], [query 2]. Citation Monitor alerts me the day either is at risk.
Evidence: Section A (band: emerging, 38 to 41), Section B (per-engine split), Section C (3 red commercial cells), Section E (2 strengths). Full report attached.
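The "Q&A chunk with FAQ schema" action in bullet 2 has a standard shape. A hedged sketch that emits the FAQPage JSON-LD; the question and answer strings are placeholders for real content:

```python
import json

# Sketch of the FAQPage JSON-LD behind the "Q&A chunk with FAQ schema"
# action. Question and answer strings are placeholders, not real content.
def faq_jsonld(question, answer):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [{
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }],
    }, indent=2)
```

The markup only helps when the answer text is the real 100-150 word chunk; generic boilerplate in the `text` field is exactly the misread covered in the next section.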
Common misreads to avoid
- Counting Google rank as a proxy for AI citation. Per Ahrefs' analysis of 863,000 keywords and 4M URLs, only 38% of AI Overview citations come from Google top-10 pages, down from 76% in July 2025. Per Profound, 28.3% of ChatGPT's most-cited pages have zero Google organic visibility. Rank held is not a defense.
- Reporting raw GA4 traffic. Per Coalition Technologies, only ~0.5% of ChatGPT-sourced traffic is correctly classified as organic in default GA4; per Cardinal Path, 60-70% of AI visits get misclassified entirely. Set up the custom channel group; per Yotpo, the fix recovers 3-5× the AI attribution that default GA4 reports show.
- Assigning generic schema as the action. Per Growth Marshal, generic copy-pasted JSON-LD underperforms no schema at all. Pair this with the SE Ranking 300,000-domain analysis showing llms.txt has no clear effect on citations. Neither belongs on a client action list.
- Reading every query equally. Total red cells in Section C is meaningless. Commercial intent is the only filter. "Best dental practice in Denver" red on ChatGPT and Perplexity is a five-figure pipeline miss. "What is a dental crown made of" red is fine.
- Reacting to position-1 organic CTR. Per Ahrefs' December 2025 update, position-1 organic CTR on AIO-triggered informational queries falls from 0.073 to 0.016 when a Google AI Overview appears. "Rank held, traffic dropped, citation moved" is the new shape of the QBR. Lead with ACS, not rank.
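The GA4 misread above can also be patched on the reporting side. An illustrative referrer classifier for routing AI-engine visits to their own channel; the domain list is an assumption, so extend it per client:

```python
import re

# Route AI-engine referrals to their own channel instead of GA4's default
# "Referral"/"Unassigned" buckets. Domain list is illustrative, not complete.
AI_REFERRERS = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|gemini\.google\.com"
    r"|claude\.ai|copilot\.microsoft\.com)",
    re.IGNORECASE,
)

def classify_channel(referrer):
    return "AI" if referrer and AI_REFERRERS.search(referrer) else "Other"
```

The same domain list doubles as the condition set for a GA4 custom channel group, which is the fix the bullet above calls for.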
Before next Monday: pick one client, run the 30-minute routine end-to-end, and ship the three-bullet email. The first time you do it the action list will feel sparse. By the third client, you will have a routine. By the fifth, you will be reading the report faster than your competitors can format their dashboard screenshots.
This week's action checklist
The white-label tier mapping makes this scale. Starter ($97/mo) ships a GenPicked-branded basic PDF; Growth ($197/mo) ships a white-label PDF with your agency logo and the full agency dashboard; Scale ($397/mo) ships custom report templates and resale rights so you can productize the read as its own deliverable to your clients' clients. The 14-day trial drops you into the Growth tier — full white-label PDF, full agency dashboard, the same /reports page you'd give a paying client — so you can run the 30-minute read once and see whether the three-bullet email writes itself.
Start your 14-day free trial
Growth plan free for 14 days. Five AI engines. Full agency dashboard.
Start free trial