AEO Reads What AI Is Saying About Your Clients. Here Is How to Shape It.


In this article, you will learn the specific mechanism by which AI assistants generate brand narratives, why the Lorphic 2026 finding makes those narratives a steerable surface for any agency with AEO discipline, and the five-step audit an AEO-equipped agency can run on any client this week to set the default the engines return.


The finding agencies can build a service around

Lorphic's 2026 research on AI brand information produced one number and one quote that an agency owner serving any B2B client should know by memory.

The number: 86 percent of the source data AI models pull from is brand-managed. The vast majority of what ChatGPT, Claude, Perplexity, and Gemini "know" about a brand traces back to content the brand itself controls, directly or through earned media that originated from brand outreach.

The quote: "What goes uncorrected today becomes tomorrow's default narrative."

Read those two together and the strategic picture is bullish for agencies. The narrative AI tells about your client is largely a brand-controlled output. AEO is the discipline that reads the current narrative, identifies where it diverges from what the brand actually offers, and feeds back into the owned-content surface the engines retrieve from. The window is wide open in 2026, and the engines reward agencies that move first.

This is not theoretical. Multiple practitioner experiments in 2025 and 2026 documented specific cases where ChatGPT reported pricing that was never published, attributed product features to companies that do not offer them, named executives who no longer hold their roles, and listed acquisitions that never happened. The pattern is not random. The cases cluster around brands whose owned-content presence is thin, whose press releases are stale, and whose Schema.org markup is incomplete. Each of those is a fixable surface, which means each is an AEO retainer item with a measurable before-and-after.

For an agency that has an AEO program in place, this is the highest-leverage service line in the book. GenPicked exists to make the audit defensible: the same prompts every time, the same engines, the same scoring, every quarter, so the change you produce is the change the engine reports back.


Why misinformation happens at the source

The mainstream framing of AI errors is "the model hallucinated." The Lorphic finding reframes it: the model retrieved from the best available source, and the best available source was wrong or out of date.

Three specific mechanisms produce most AI brand misinformation in 2026.

Mechanism 1: Stale brand-controlled content. Press releases from 2022 still rank for brand queries in 2026 because they are well-indexed and the brand never updated them. AI retrieves the 2022 version, summarizes it, and reports the 2022 leadership team as current. The brand could fix this with a single press release update; most do not.

Mechanism 2: Missing or contradictory Schema.org markup. Two pages on the same brand site list different pricing tiers. The AI retrieves whichever page ranked first for the relevant query and reports the price from that page. The user gets an answer that contradicts the brand's actual current pricing.
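To make the mechanism concrete, here is a minimal detection sketch: pull the JSON-LD blocks from two brand-controlled pages and flag contradictory declared prices. It uses only requests and the standard library; the URLs are hypothetical placeholders, and a production audit would use a proper HTML parser rather than a regex.

```python
import json
import re

import requests  # pip install requests

JSONLD_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_prices(url: str) -> set:
    """Collect every "price" value declared in JSON-LD blocks on a page."""
    html = requests.get(url, timeout=30).text
    prices = set()
    for block in JSONLD_RE.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed markup is itself an audit finding
        stack = [data]  # walk the parsed structure at any depth
        while stack:
            node = stack.pop()
            if isinstance(node, dict):
                if "price" in node:
                    prices.add(str(node["price"]))
                stack.extend(node.values())
            elif isinstance(node, list):
                stack.extend(node)
    return prices

# Hypothetical pages; in practice, audit the pages that rank for pricing queries.
plans = extract_prices("https://example.com/plans")
pricing = extract_prices("https://example.com/pricing")
if plans and pricing and plans != pricing:
    print(f"Schema inconsistency: {sorted(plans)} vs {sorted(pricing)}")
```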

Mechanism 3: Earned-media drift. A 2024 industry article described the brand's product in a way that no longer reflects the current capability. The brand never corrected the article. AI retrieves the 2024 description because it is in a high-authority publication, and the AI reports it as current. The University of Toronto's 2025 finding that 82 to 89 percent of AI citations come from earned media means this drift mechanism affects the majority of the AI's brand-relevant retrieval surface.

In each mechanism, the source is correctable. The brand owns the upstream content or has a relationship with the publication that produced it. The cost of correction is dollars and hours, not new infrastructure. The blocker is that nobody on the brand or agency side is running the audit that surfaces the specific errors.


The five-step misinformation audit

Here are five moves an agency can run on a client brand this week to surface and address the most consequential AI brand inaccuracies.

Step 1: Run identity queries on all major engines.

For each client brand, issue the following queries to ChatGPT, Claude, Perplexity, and Gemini in fresh sessions (logged-out, no personalization):

  • "What is [brand name]?"
  • "Who founded [brand name] and when?"
  • "What products does [brand name] offer?"
  • "What is [brand name]'s pricing?"
  • "Who is the CEO of [brand name]?"
  • "What is [brand name]'s headquarters location?"
  • "Who are [brand name]'s main competitors?"
  • "What recent news is there about [brand name]?"

Capture the verbatim AI responses. Cross-reference against the brand's current ground truth. Highlight every factual error.

This step takes 30 to 45 minutes per brand. It surfaces the most embarrassing misstatements first, which is also the set the client will fund correction work on immediately.
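A minimal capture sketch, assuming the OpenAI Python SDK as one engine's API; Claude, Perplexity, and Gemini each have equivalent APIs that follow the same pattern. One hedge: API responses are a repeatable baseline, but they do not perfectly reproduce the logged-out consumer interfaces the audit targets, so spot-check those by hand as well.

```python
import csv
from datetime import date

from openai import OpenAI  # pip install openai

IDENTITY_QUERIES = [
    "What is {brand}?",
    "Who founded {brand} and when?",
    "What products does {brand} offer?",
    "What is {brand}'s pricing?",
    "Who is the CEO of {brand}?",
    "What is {brand}'s headquarters location?",
    "Who are {brand}'s main competitors?",
    "What recent news is there about {brand}?",
]

def capture(brand: str, model: str = "gpt-4o") -> str:
    """Run the eight identity queries and save verbatim responses to CSV."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    path = f"{brand.lower().replace(' ', '-')}-{date.today()}.csv"
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["query", "response"])
        for template in IDENTITY_QUERIES:
            query = template.format(brand=brand)
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": query}],
            )
            writer.writerow([query, resp.choices[0].message.content])
    return path

capture("Acme Analytics")  # hypothetical brand name
```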

Step 2: Categorize errors by mechanism.

Sort each error into one of three buckets:

  • Stale-content errors: The upstream source is brand-controlled and out of date. Fix: update the canonical source page.
  • Schema-inconsistency errors: Two brand-controlled pages contradict each other. Fix: pick the canonical page and update or redirect the others.
  • Earned-media-drift errors: The upstream source is third-party content. Fix: outreach to the publication for an update or correction.

The categorization is the work that turns a list of errors into a corrective action plan. Without it, the audit produces complaints, not fixes.
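The buckets translate naturally into a machine-readable log, which is what makes the Step 5 re-check repeatable. Here is a sketch of one possible record shape; the field names are our own convention for illustration, not a GenPicked or Lorphic schema.

```python
from dataclasses import dataclass
from enum import Enum

class Mechanism(Enum):
    STALE_CONTENT = "stale-content"        # fix: update the canonical source page
    SCHEMA_INCONSISTENCY = "schema"        # fix: pick a canonical page, redirect the rest
    EARNED_MEDIA_DRIFT = "earned-media"    # fix: outreach to the publication

@dataclass
class Finding:
    engine: str           # e.g. "ChatGPT"
    query: str            # the identity query that surfaced the error
    ai_claim: str         # verbatim wrong statement from the response
    ground_truth: str     # the brand's current, correct fact
    mechanism: Mechanism  # which bucket the upstream source falls into
    upstream_url: str     # the source that needs correcting

# Hypothetical example record for a stale-content error.
finding = Finding(
    engine="ChatGPT",
    query="Who is the CEO of Acme Analytics?",
    ai_claim="Jane Doe is CEO",
    ground_truth="John Roe has been CEO since 2025",
    mechanism=Mechanism.STALE_CONTENT,
    upstream_url="https://example.com/press/2022-leadership",
)
```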

Step 3: Triage by commercial impact.

Not every error matters equally. Misrepresented pricing matters more than a misrepresented headquarters location. A misnamed CEO matters more than an outdated product feature list. Triage the error set by how much commercial risk each error introduces, then attack the high-impact errors first.

The triage criterion is roughly "would a prospective customer making a procurement decision be misled by this?" If yes, P0. If only a casual reader would notice, P2.
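A sketch of that rule as code, if you want the triage to be consistent across auditors. The topic lists are illustrative, not a validated taxonomy; borderline findings still need human judgment.

```python
# Topics where an error would mislead a procurement decision.
HIGH_IMPACT_TOPICS = {"pricing", "ceo", "products", "acquisitions"}
# Topics where only a casual reader would notice.
LOW_IMPACT_TOPICS = {"headquarters", "founding year"}

def triage(topic: str) -> str:
    """Map an error's topic to a priority bucket."""
    t = topic.lower()
    if t in HIGH_IMPACT_TOPICS:
        return "P0"
    if t in LOW_IMPACT_TOPICS:
        return "P2"
    return "P1"  # everything in between gets human review

assert triage("pricing") == "P0"
assert triage("headquarters") == "P2"
```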

Step 4: Correct the upstream sources.

For stale brand-controlled content, the correction is a content update and a sitemap re-submission to major engines. For Schema-inconsistency errors, the correction is a deliberate canonical decision plus markup updates on the non-canonical pages. For earned-media-drift, the correction is outreach to the publication.
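For the Schema-inconsistency bucket, the end state is one canonical page carrying authoritative Organization markup that the other pages defer to. Here is a minimal sketch of that markup, generated as JSON-LD from Python; every value shown is hypothetical and should be mapped to the client's ground truth.

```python
import json

# Canonical brand facts, expressed as Schema.org Organization markup.
canonical_org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",
    "url": "https://example.com",
    "foundingDate": "2018",
    "founder": {"@type": "Person", "name": "John Roe"},
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Austin",
        "addressRegion": "TX",
    },
}

# Embed the output in a <script type="application/ld+json"> tag
# on the canonical page only; redirect or de-emphasize the rest.
print(json.dumps(canonical_org, indent=2))
```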

The earned-media outreach is the most labor-intensive but also the most leveraged. A single correction at Forbes, TechCrunch, or a vertical trade publication often updates the AI's retrieval pool for that brand within weeks. We covered the earned-media-dominance finding in our piece on AI search divergence; brand corrections at the earned-media source are disproportionately effective.

Step 5: Re-run the queries after corrections.

Two to four weeks after the corrections are made, re-run the identity queries from Step 1. Note which corrections propagated and which did not. The non-propagating corrections are clues: either the upstream source did not actually update, or the AI's retrieval pool included additional sources you did not initially identify.
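A sketch of the propagation check, assuming the CSV produced by the Step 1 capture sketch and the Finding records from Step 2. Verbatim substring matching is deliberately crude: it catches errors repeated word-for-word and will miss paraphrased ones, so read the survivors by hand.

```python
import csv

def unresolved(findings, rerun_csv: str):
    """Return the findings whose wrong claim still appears verbatim."""
    responses = {}
    with open(rerun_csv, newline="") as f:
        for row in csv.DictReader(f):
            responses[row["query"]] = row["response"]
    return [
        fi for fi in findings
        if fi.ai_claim.lower() in responses.get(fi.query, "").lower()
    ]

# Findings that survive the re-run point at an upstream source that never
# actually updated, or at a retrieval source the initial audit missed.
```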

The re-run is also the data point that demonstrates the value of the agency's work to the client. "We corrected six material errors about your brand in AI assistants this quarter" is a board-ready deliverable that few agencies are currently producing.


Why this is a defensible agency service line

Three reasons the brand-misinformation audit is a stronger commercial proposition than the standard AEO retainer.

Reason 1: The work is immediately legible. A visibility-score retainer requires the client to trust the methodology behind a number. A misinformation correction is concrete: "ChatGPT was saying you have 50 employees; now it says you have 200." The client can verify the fix themselves. The trust threshold is lower.

Reason 2: The downside cost of inaction is high. A weak visibility score is a missed opportunity. An active misinformation problem is a credibility threat. Misrepresented pricing affects sales conversations. Wrong leadership info affects investor due diligence. The "what goes uncorrected today becomes tomorrow's default narrative" finding is the urgency hook. Agencies that frame misinformation as "we are protecting your brand right now" sell harder than agencies that frame it as "we are tracking visibility long-term."

Reason 3: The methodology is transparent. Unlike visibility-score measurement, where vendor methodology is mostly opaque, misinformation auditing is procedure-driven. The audit steps are visible to the client. The agency cannot hide behind a proprietary score; the agency is doing legible work the client can supervise. This makes the service harder to commoditize and easier to defend at renewal.


What this means in light of earlier findings

The misinformation audit interlocks with three findings we have covered in this Academy.

With the Day One shortlist research: Misinformation in AI assistants directly affects whether a brand makes the shortlist. A B2B buyer who asks ChatGPT "what do you know about [client brand]" and receives outdated or wrong information is forming an inaccurate first impression. The shortlist filtering happens before the buyer ever visits the brand's website. We covered this in the Day One shortlist piece. Misinformation correction is upstream of shortlist inclusion.

With the SEME and ranking manipulation research: The same trust transfer that makes AI recommendations persuasive when correct also makes them damaging when incorrect. Users trust AI assistants more than they trust ranked lists, which means misinformation in AI carries more credibility than the same misinformation would in a Google snippet. We covered the trust-transfer mechanism in our SEME-to-AI piece.

With the methodology transparency thesis: A measurement tool that does not distinguish accurate from inaccurate citations is reporting visibility-of-mention, not visibility-of-correct-mention. A brand that is widely cited but widely cited incorrectly is in a worse position than a brand that is moderately cited and accurately cited. The methodology question we raised in the methodology transparency article is sharpened by the misinformation finding: the dashboard number should include an accuracy dimension, not just a frequency dimension.


What the research does NOT say

Three over-readings to resist.

The research does NOT say AI is fundamentally unreliable. The 86 percent brand-managed source figure is also the optimistic finding. It means most AI brand information is, in principle, accurate when the brand keeps its inputs current. The misinformation problem is mostly a maintenance problem, not an inherent AI flaw.

The research does NOT say every brand has a serious misinformation problem. The audit will reveal a wide distribution. Some brands have zero material errors in major AI responses; some have many. The point is to know which category a specific client is in, not to assume universal misrepresentation.

The research does NOT say agencies can guarantee zero misinformation. Some errors persist after corrections because the upstream caching, training, and retrieval mechanisms of AI engines are not fully under any vendor's control. The agency's job is to reduce the surface area substantially and document the work, not to promise perfection.


How to talk about this with a new client prospect

Three sentences agencies can use when a prospect asks why they should add misinformation auditing to their existing AEO retainer.

"Lorphic's 2026 research found that 86 percent of the source data AI models use about brands is brand-managed, which means most AI errors about your brand are correctable through specific content updates."

"The risk is not just one wrong answer. The original research described the dynamic as 'what goes uncorrected today becomes tomorrow's default narrative,' meaning errors compound the longer they go unaddressed."

"We are offering a five-step misinformation audit that surfaces the specific errors AI assistants are publishing about your brand right now, categorizes them by correction mechanism, and produces a board-ready report on what we fixed."

That is a defensible pitch. It cites a real research finding, names a real urgency mechanism, and offers a specific deliverable.


Frequently asked questions

How is "brand misinformation" different from "AI hallucinations"?

A hallucination is the model fabricating information with no upstream source. Brand misinformation in the Lorphic framing is the model accurately retrieving from upstream sources that are themselves wrong or stale. The distinction matters because the corrections are different. Hallucinations require model-level changes that are outside the brand's control; misinformation from stale sources is correctable by the brand.

Is the 86 percent brand-managed source figure peer-reviewed?

The Lorphic 2026 finding is a practitioner-level study, not peer-reviewed research. The number is directionally consistent with adjacent research (including the University of Toronto 2025 finding that 82 to 89 percent of AI citations come from earned media), but it should be treated as a directional industry data point rather than a precise scientific measurement.

How often should the misinformation audit be run?

For active client retainers, quarterly is the right cadence for surface-level identity queries. Monthly is justified for high-stakes B2B clients in fast-moving categories. The earned-media drift mechanism is slow enough that quarterly captures most errors; the stale-content mechanism updates faster when the brand pushes new content, so monitoring is more effective when paired with content release schedules.

What if the client refuses to fund earned-media outreach?

Most agencies cannot run earned-media outreach on a low-margin AEO retainer; it is its own service line. The pragmatic fallback is to triage corrections by mechanism, fix the brand-controlled content first (highest ROI on agency effort), and present the earned-media drift errors as a separate proposal to the client. Some clients will fund it; some will accept the residual error rate.

Does GenPicked handle the misinformation audit specifically?

Our scan reports both visibility and the specific content of each engine's response, which is the raw material for the audit. The categorization, triage, and correction work is the agency's value-add. We provide the input data; the agency runs the audit process described above.

Can AI engines be relied on to update once corrections are made?

Imperfectly. Some engines (notably Perplexity, which uses near-live retrieval) update within days. Others (more dependent on cached training and indexing) can take weeks or months. The re-run in Step 5 is the empirical check on whether corrections propagated.




Run the misinformation audit yourself

The fastest way to see whether AI assistants are publishing accurate information about your client's brand is to run the eight identity queries from Step 1 and compare the responses against ground truth. Run a free GenPicked AEO audit to get the multi-engine responses pre-captured for review.

Start your 14-day free trial of GenPicked Growth →


Dr. William L. Banks III is Founder of GenPicked. The Lorphic 2026 brand misinformation research and the University of Toronto 2025 earned-media-dominance findings are the primary sources for this article; full citations available on request.
