Am I in ChatGPT? The 60-Second Check Every Agency Should Run on Every Client This Week

Your client's best prospect opened ChatGPT this week and asked about your client's category. The AI responded with three brand recommendations. None of them was your client. The prospect closed the tab, emailed the named options, and your client never knew.

This is not hypothetical. 77% of brands are completely absent from AI platform responses according to a February 2026 analysis of 2,089 brands by Loamly. If your agency manages ten clients, roughly seven of them are invisible right now. The other three convert AI-sourced traffic at three times the rate of Google search, per the same study.

The timing matters. Per the 6sense 2025 B2B Buyer Experience Report, 94% of B2B buyers now use large language models during their purchasing journey. And 95% of the time, the winning vendor is already on the buyer's Day One shortlist. Translation: if your client isn't surfaced by ChatGPT, Perplexity, Gemini, Claude, or Google AI Overviews when the buyer is assembling that shortlist, the deal is over before the first email is sent.

Here is the 60-second check every agency should be running on every client this week. It costs nothing. And here is what you do when the number comes back bad.

Free — no commitment

Get your AEO Score in 60 seconds

See where your client ranks in ChatGPT, Perplexity, Gemini and Google AI Overviews — instantly.

Check my score

Why this matters right now (not next quarter)

Three numbers from the 6sense 2025 Buyer Experience Report that should sit on your desk the next time a client asks what you're doing for their pipeline:

  • 94% of B2B buyers use LLMs during purchase
  • 95% buy from a vendor already on their Day One shortlist
  • 83% of the buyer journey happens before any sales contact

Put those three together. The buyer researches with AI, the vendor that ends up winning was on the Day One shortlist 95% of the time, and 83% of the decision happens before a salesperson gets involved. Your client's Monday-morning problem is not their sales team or their funnel conversion rate. It's whether they make it onto the AI-assembled shortlist at all.

The CTR math makes it worse. Per Ahrefs' December 2025 analysis, when Google AI Overviews appear, position-1 organic click-through drops 58%. And AI Overviews now trigger on 48% of tracked queries, up from 31% a year earlier. Being first in Google when the user's attention is consumed by a generated answer is not the same win it was 18 months ago. The answer is the new ranking.

The flipside is more interesting than the panic. Seer Interactive's September 2025 analysis of 3,119 informational queries found that being cited in an AI Overview is worth 35% more organic clicks and 91% more paid clicks than not being cited. Being invisible is expensive. Being present is disproportionately valuable. And per Conductor's 2026 benchmark, AI search visitors who do click through spend 68% more time on the website than organic search visitors. AI-sourced traffic is smaller and better at the same time.

What does it actually mean to be “in” ChatGPT?

Not all AI engines treat brands the same way. This is the most commonly missed fact in agency calls.

Per Profound's public data: ChatGPT mentions brands in roughly 73.6% of its answers. Claude mentions brands in 97.3%. Gemini and Perplexity fall between. The engine your client's buyer uses matters as much as the query itself.

Citation concentration is brutal. Per Ahrefs' analysis, the top 5 domains account for 38% of all AI Overview citations; the top 20 account for 66%. YouTube is cited in roughly 23.3% of AI Mode answers. Wikipedia in 18.4%. Google's own properties in 16.4%. For a mid-market B2B brand without Wikipedia or YouTube authority, this is a cold market.

The other surprise: 38% of pages cited in AI Overviews rank in Google's top 10 — which sounds reasonable until you see that 31% rank in positions 11-100, and 31% rank beyond position 100 entirely (Ahrefs, February 2026). In other words, traditional Google rank and AI visibility are drifting apart. Per Profound, 28.3% of ChatGPT's most-cited pages have zero Google organic visibility. You can be invisible on Google and cited on ChatGPT. Your agency's client reports need to stop treating the two as the same number.

The 60-second manual check (for up to 20 queries)

The published practitioner method from Ahrefs is unchanged, and it works fine for a single client with up to 20 queries. Past that, you need tooling.

1. Incognito. Open ChatGPT in an incognito window. Personalized results contaminate the check.

2. Natural queries. Ask 10-20 conversational questions a prospect would ask. Not keyword phrases.

3. Repeat 3×. ChatGPT varies run-to-run. Track consistency across 3 runs, not single answers.

4. Cross-engine. Repeat across Perplexity, Gemini, Claude, and Google AI Overviews. They disagree.

5. Track 4-6 weeks. Single-date snapshots mislead. Track visibility consistency over at least a month.

Calculate a mention consistency rate: (tests with mention ÷ total tests) × 100. At ten natural queries across five engines with three runs each, that is 150 data points per client per cycle. Across ten clients, 1,500 data points. This is where the manual approach collapses and tool coverage starts.
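The bookkeeping itself is trivial; a spreadsheet works, and so does a few lines of script. Below is a minimal sketch, assuming you log every test as a row of engine, query, run number, and whether the brand was mentioned. The field names and sample rows are illustrative, not a prescribed format.

```python
from collections import defaultdict

# Illustrative log: one row per (engine, query, run). "mentioned" is True when the
# client's brand appeared anywhere in that answer.
results = [
    {"engine": "chatgpt",    "query": "best expense tools for agencies", "run": 1, "mentioned": True},
    {"engine": "chatgpt",    "query": "best expense tools for agencies", "run": 2, "mentioned": False},
    {"engine": "perplexity", "query": "best expense tools for agencies", "run": 1, "mentioned": True},
    # ... one row for every query x engine x run
]

def mention_consistency(rows):
    """Mention consistency rate per engine: (tests with mention / total tests) * 100."""
    totals, hits = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["engine"]] += 1
        hits[row["engine"]] += int(row["mentioned"])
    return {engine: round(100 * hits[engine] / totals[engine], 1) for engine in totals}

print(mention_consistency(results))
# {'chatgpt': 50.0, 'perplexity': 100.0} -- always report per engine, never one averaged score
```

Keeping the output per engine matters: as the Profound numbers below show, the spread between engines is the finding, and an averaged score erases it.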

Why your client is invisible: three real reasons, one that dominates

A lot of the 2025 AEO advice circulating online is vendor marketing, not research. Here is what the published evidence actually shows drives citations, ranked by effect size.

Reason 1: Domain authority (the dominant factor)

Per ZipTie's analysis, domain authority outweighs schema markup by roughly 3.5:1 in AI citation probability. In the study cited there, a site with 420 referring domains and perfect FAQ schema got 12% of AI citations in its space; a comparable site with 3,200 referring domains and no schema got 68%.

The real-world version of this for your clients: brand mentions across trusted sources matter more than structured data on your own page. Per RivalHound's correlation analysis, brand mentions correlate 0.664 with AI visibility. Traditional backlinks correlate 0.218. That is roughly a 3:1 advantage for mentions over backlinks, which reorders most SEO agency playbooks.

Reason 2: Reddit (dominant for Perplexity, smaller elsewhere)

This one is counterintuitive and easy to get wrong. Reddit dominates Perplexity — 46.7% of Perplexity's top 10 citations come from Reddit, per Discovered Labs' analysis. Across AI engines generally, 73% of product recommendations referenced Reddit in 2025 (CMSWire).

The trap: this does not mean pivot your entire AEO strategy to Reddit. Search Engine Land's analysis shows Reddit weight varies dramatically by engine and query. For most B2B clients, Reddit is one signal, not the whole picture. But any AEO playbook that ignores Reddit entirely is ignoring 46.7% of the Perplexity answer.

The weirder detail in the Reddit data: per Semrush's study of 248,000 Reddit posts cited by AI, more than 80% of the cited Reddit content has fewer than 20 upvotes or comments. This matters because most agencies assume “viral” Reddit posts are the ones getting cited. They are not. AI engines are pulling from thread replies, moderate-engagement comment chains, and niche subreddit discussions. The implication is that agencies can realistically compete here: you do not need a megaviral thread, you need quality comments in the relevant threads where your client's category is already being discussed.
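Finding those threads can be scripted against the Reddit API via PRAW. This is a rough sketch under stated assumptions: the subreddit, search phrase, and "moderate engagement" score band are placeholders to adapt per client, and you need to register a script app for the credentials.

```python
import praw  # pip install praw; create a script app at reddit.com/prefs/apps for credentials

# Placeholder credentials and search terms; swap in your client's category.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="aeo-thread-finder by u/your_username",
)
reddit.read_only = True

# Surface moderate-engagement threads (the kind the Semrush data says actually gets cited),
# not just the viral ones.
for submission in reddit.subreddit("b2bmarketing").search("expense management software", limit=25):
    if 2 <= submission.score < 200:  # arbitrary band standing in for "moderate engagement"
        print(f"{submission.score:>4} upvotes | {submission.num_comments:>3} comments | "
              f"https://reddit.com{submission.permalink}")
```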

Reason 3: Content structure (real but smaller than marketed)

FAQ schema is real. Per Frase's research, pages with FAQPage markup are 3.2× more likely to appear in Google AI Overviews. Per AI Boost, pages with FAQ schema plus inline citations are weighted approximately 40% higher in ChatGPT source selection.

Content chunking is also real. Per Am I Cited, sections in the 100-150 word range receive roughly 4.7 citations per page vs 4.3 for sub-35-word sections. The 50-150 word sweet spot that circulates in AEO marketing is directionally right, but the measured effect is smaller than schema and much smaller than domain authority.
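If you do spend time on chunking, checking section lengths does not need to be manual. A small sketch follows, assuming the page draft is a markdown export with ## or ### headings; the filename and the 100-150 word band (taken from the Am I Cited figure above) are the example's assumptions.

```python
import re

def audit_section_lengths(markdown_text: str, lo: int = 100, hi: int = 150):
    """Split a draft on H2/H3 headings and flag sections outside the target word range."""
    parts = re.split(r"^(#{2,3} .+)$", markdown_text, flags=re.MULTILINE)
    # re.split keeps the captured headings: [preamble, heading, body, heading, body, ...]
    report = []
    for heading, body in zip(parts[1::2], parts[2::2]):
        words = len(body.split())
        status = "ok" if lo <= words <= hi else ("too short" if words < lo else "too long")
        report.append((heading.lstrip("# ").strip(), words, status))
    return report

draft = open("pillar-page.md").read()  # hypothetical markdown export of one pillar page
for title, words, status in audit_section_lengths(draft):
    print(f"{words:>4} words  {status:<10} {title}")
```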

Key insight

If you only have bandwidth for one lever, work on earned brand mentions in trusted publications — not schema, not llms.txt, not FAQ rewrites. The 3:1 effect size swamps everything else.

What doesn't work (the vendor marketing trap)

A lot of AEO advice is selling you something.

llms.txt does not work. The single largest study on this — SE Ranking's analysis of approximately 300,000 domains — found zero correlation between llms.txt presence and AI citations. The file is found on only 10.13% of measured domains. In some model tests, removing it actually improved accuracy. Neither Google nor OpenAI recommends relying on it. If a vendor pitches llms.txt as their optimization lever, ask what their second lever is.

Generic schema underperforms no schema. Per Growth Marshal's study, pages with generic schema were cited at 41.6%, vs 59.8% for pages with no schema at all, and 61.7% for pages with attribute-rich Product or Review schema (pricing, aggregateRating, specs). The takeaway: schema works when it is specific, detailed, and product-attribute-rich. Copy-pasting generic JSON-LD onto every page is worse than doing nothing.
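What "attribute-rich" looks like in practice is a Product block that actually carries price, availability, and rating data rather than a bare name and description. A minimal sketch that emits the JSON-LD snippet to paste into a page head; the product values are made up for illustration.

```python
import json

# Hypothetical product values; the point is the attribute-rich fields
# (offers, aggregateRating), not these specific numbers.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Expense Manager",
    "description": "Expense management software for mid-market agencies.",
    "brand": {"@type": "Brand", "name": "Acme"},
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "212",
    },
}

print('<script type="application/ld+json">')
print(json.dumps(product_schema, indent=2))
print("</script>")
```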

Single-engine dashboards hide the real picture. A dashboard that gives you one “AI visibility score” averaged across all engines hides the most useful finding — that the same brand can be #1 on Claude and invisible on ChatGPT. Per Profound's data, the spread between Claude (97.3% brand mention rate) and ChatGPT (73.6%) means an averaged number tells you nothing. Always split results by engine.

The AEO tool market (for agencies evaluating vendors)

Agency owners are being pitched by a dozen AEO vendors right now. The funding in the category is real — per Conductor's State of AEO/GEO Report, 56% of CMOs and digital leaders made significant AEO investments in 2025, and 94% plan to increase spend in 2026. But the vendor landscape varies enormously in price, scope, and methodology. A short factual guide:

Profound raised a $96M Series C at a $1B valuation in February 2026, with 700+ enterprise customers including 10%+ of the Fortune 500. Enterprise-tier tool; not priced for SMB agencies.

Peec AI raised a $21M Series A in November 2025, with 1,300+ brands and agencies onboarded and 300+ new customers per month. Pricing: $89/mo Starter, $199/mo Pro, $499/mo Enterprise (plus per-LLM add-on fees).

Scrunch AI raised $19M total ($4M seed, $15M Series A) and reports 500+ paying customers growing 50% month-over-month. Pricing is $300/mo Agency and $500/mo Agency Core.

Otterly is bootstrapped and profitable at roughly $770K revenue per GetLatka data, and was named a Gartner Cool Vendor in 2025. Pricing is $29/mo covering six AI engines, the cheapest serious option for a single-client starter audit.

AthenaHQ is Y Combinator-backed, $2.7M total raised, with 70+ early adopters. Pricing tiers are $79, $149, and $299/mo scaled by brand count, competitor count, and query allowance.

None of these sits in the agency-first price band at full scope, which is where GenPicked positions itself: between Peec AI and Scrunch on price ($97-397/mo Agency plan plus per-brand tiers), built for agency workflows rather than enterprise procurement cycles. But the honest recommendation: if you have one client and want to validate that AEO is real before committing budget, Otterly's $29/mo gets you started and the data will be good enough to make the case to the client.

The attribution problem (and why every agency's reporting is broken)

If your client converts because a prospect saw them in a ChatGPT answer, your analytics probably log it as “direct” traffic. ChatGPT, Perplexity, and several other AI engines do not reliably pass referrer headers. Per Coalition Technologies' analysis, only about 0.5% of ChatGPT-sourced traffic is correctly classified as “organic” in GA4. The rest disappears into “direct” or gets re-attributed to whatever last-touch channel the prospect used next.

This matters because agency monthly reports are built on the attribution layer GA4 provides. If ChatGPT is driving an increasing share of your client's pipeline and showing up in your reports as “direct,” that contribution is invisible in the very reports you send. The fix is partial: build custom filters that identify AI-sourced traffic by landing page path, user-agent patterns, and UTM hygiene. Per Yotpo's tracking guide, the setup is fiddly but can recover 3-5x the AI attribution that default GA4 reports show.
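For the slice of AI traffic that does pass a referrer or a tagged UTM, the classification logic is simple enough to sketch. The domain patterns below are illustrative and incomplete, and several engines pass nothing at all, which is exactly why the remainder lands in "direct"; treat this as the shape of the filter, not a finished list.

```python
import re

# Illustrative referrer patterns for AI assistants that do pass a referrer.
AI_REFERRER_PATTERNS = {
    "chatgpt":    r"(chat\.openai\.com|chatgpt\.com)",
    "perplexity": r"perplexity\.ai",
    "gemini":     r"gemini\.google\.com",
    "copilot":    r"copilot\.microsoft\.com",
    "claude":     r"claude\.ai",
}

def classify_ai_source(referrer: str, utm_source: str = "") -> str | None:
    """Return the AI engine a session likely came from, or None if it looks non-AI."""
    haystack = f"{referrer} {utm_source}".lower()
    for engine, pattern in AI_REFERRER_PATTERNS.items():
        if re.search(pattern, haystack):
            return engine
    return None

print(classify_ai_source("https://www.perplexity.ai/"))       # -> perplexity
print(classify_ai_source("", utm_source="chatgpt.com"))       # -> chatgpt
print(classify_ai_source("https://www.google.com/"))          # -> None
```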

The take for agencies: even if your AEO work is producing results, your client will not see them unless you fix attribution first. Start there, before you start pitching AEO retainers.

What to do this week (four steps, all free, all finished by Tuesday)

The right first actions are small and concrete. Nothing here requires a paid tool.

  • Run a 10-query manual audit on your top three clients across five engines.
    Use the 5-step method above. Expect to find 1-2 of the 3 clients invisible in most queries. That is your baseline.
  • List the 10 trusted sources each client's buyers read.
    Industry publications, research firms, Reddit subreddits, YouTube channels, influencer sites. This is your earned-mention target list.
  • Pick the lowest-scoring client and fix one content structure issue.
    Restructure one pillar page to 100-150 word sections with Q&A headings and attribute-rich schema. Don't do all their pages. Just one page, well.
  • Schedule a re-check for 14 days from now.
    The same 10 queries, same 5 engines. Compare baseline to new scores. This is the email that gets retainers renewed.
Do this

Budget for this is zero. Time for three clients is roughly three hours total. The client with the largest visibility gap will show the first measurable improvement; use that as the case study in your next sales pitch.

At ten clients, this stops being doable manually. GenPicked's Growth plan ($197/month) runs the audits automatically, tracks citations daily across all five engines, and produces white-labeled agency reports. It exists because what you can do manually on three clients you cannot do on thirty.

Start with one client, free

Check your AEO Score now

60 seconds. Five AI engines. No commitment. See exactly where your client stands.

Check my score

Joseph K. Banda

Co-Founder, GenPicked

Building the AEO platform for marketing agencies. Helping agency owners get their clients cited by ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews — and prove it with data.

Credentials:

Co-Founder, GenPicked; AEO / GEO / AI Visibility platform for agencies; ACS (AEO Citation Score) framework architect

Frequently Asked Questions

What is AEO?

Answer Engine Optimization (AEO) is the practice of structuring content, earning brand mentions, and building domain authority so AI engines like ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews cite your brand when answering user queries. It is sometimes called GEO (Generative Engine Optimization), AI Visibility, or LLM SEO. The practice emerged as a distinct discipline in 2024-2025 as AI-generated answers replaced traditional search results for a growing share of queries.

How do I check if my brand is in ChatGPT?

Two methods. Manual (Ahrefs' published practitioner method): open ChatGPT in an incognito window, ask 10-20 natural questions your prospects would ask, run each 3 times, track which answers cite your brand. Repeat for Perplexity, Gemini, Claude, and Google AI Overviews. Feasible for up to 20 queries across one client. Automated: use a dedicated AEO Score tool like GenPicked's free checker that runs the same check across all five engines in 60 seconds and produces a 0-100 benchmark score plus prioritized gaps.
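If you want to script the repeat runs instead of pasting queries by hand, the sketch below uses the OpenAI API. Two caveats: the API is a different surface from the consumer ChatGPT product (no account personalization, and you choose the model), so treat it as a directional signal rather than the literal consumer answer; and the brand name, queries, and model shown are placeholders.

```python
from openai import OpenAI  # pip install openai; set OPENAI_API_KEY in your environment

client = OpenAI()

BRAND = "Acme Expense Manager"  # placeholder client brand
QUERIES = [
    "What are the best expense management tools for marketing agencies?",
    "Which expense software should a 20-person agency use?",
]
RUNS = 3  # answers vary run-to-run, so repeat each query

for query in QUERIES:
    mentions = 0
    for _ in range(RUNS):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": query}],
        )
        answer = response.choices[0].message.content or ""
        mentions += BRAND.lower() in answer.lower()
    print(f"{mentions}/{RUNS} runs mention {BRAND!r}: {query}")
```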

What percentage of brands are invisible to ChatGPT?

Per Loamly's February 2026 analysis of 2,089 brands, 77% are completely absent from AI platform responses. Brand-mention rates vary dramatically by engine — Profound's data shows ChatGPT mentions brands in 73.6% of its answers, while Claude mentions brands in 97.3%. The single most common mistake is averaging visibility across engines rather than tracking each one separately.

Does ranking #1 in Google mean I'll be cited in ChatGPT?

No. Per Ahrefs' February 2026 analysis, only 38% of pages cited in Google AI Overviews rank in the Google top 10 for that same query — and 31% rank beyond position 100 entirely. Per Profound's research, 28.3% of ChatGPT's most-cited pages have zero organic Google visibility. AI engines and Google have drifted apart. Treat them as separate ranking systems.

Does FAQ schema actually improve AI citations?

Yes, but less than vendor marketing claims. Per Frase's research, pages with FAQPage markup are 3.2× more likely to appear in Google AI Overviews. But per ZipTie, domain authority outweighs schema by roughly 3.5:1 in overall citation probability. Generic, copy-paste JSON-LD schema actually performs worse than no schema at all per Growth Marshal's study. Invest in attribute-rich Product, Review, and FAQ schema — not generic Article schema.

Does llms.txt help my brand get cited by AI?

No, according to the largest published study on the question. SE Ranking analyzed approximately 300,000 domains and found zero correlation between llms.txt presence and AI citations. The file appears on only 10.13% of measured domains. Neither Google nor OpenAI recommends relying on it. Adding llms.txt is low-effort and low-risk, but it is not a citation lever.

What actually drives AI citations if not schema and llms.txt?

Three things, in descending order of measured effect size: (1) Domain authority — specifically brand mentions across trusted third-party sources, which correlate 0.664 with AI visibility vs 0.218 for backlinks per RivalHound's analysis. (2) Platform-specific sourcing — Reddit accounts for 46.7% of Perplexity's top citations per Discovered Labs, so Reddit mention strategy matters disproportionately for that engine. (3) Content structure — FAQ schema (3.2× lift per Frase) and 100-150 word content chunks (Am I Cited). All three compound. None of them is a shortcut.

How long does it take to improve AI visibility?

Expect 14 days to see the first citation changes after structural fixes. 30-60 days for meaningful improvement. HubSpot's published AEO case study shows a 1,850% increase in qualified leads using AEO methods over a longer horizon. For a single agency client working consistently, a measurable delta in 14-28 days is realistic and is the cadence most retainers should be reporting on.

Is this the same as GEO or AI Visibility?

Essentially yes. The industry has not settled on one name. AEO (Answer Engine Optimization), GEO (Generative Engine Optimization), AI Visibility, and LLM SEO are used interchangeably in agency and practitioner writing. Per Search Engine Land's analysis, 59% of SEO influencers reference GEO; others prefer AEO. The underlying work and the measurement are the same regardless of which term wins. We default to AEO because it reads as a natural evolution of SEO, the term agencies already sell.

Which AI engines should I be tracking?

Five that matter for most B2B work: ChatGPT (the largest share at 87.4% of all AI referral traffic per Conductor 2026), Perplexity (cites sources heavily and is Reddit-dominated), Gemini and Google AI Overviews (now triggering on 48% of queries per Ahrefs), and Claude (highest brand-mention rate at 97.3% per Profound, though with a smaller traffic footprint). Tracking fewer engines gives you an incomplete picture; the brand that wins on Claude can lose on ChatGPT.

Get Your Brand's AEO Score

See how your brand is performing in AI search with our free AEO audit.

Start Your Free Audit
#aeo #geo #ai-visibility #chatgpt-seo #llm-seo #answer-engine-optimization #agency-playbook