Profound is built for enterprise. GenPicked is built for agencies. For sub-100-employee agencies running 5-20 client brands, the workflow shape and per-brand pricing of GenPicked beat Profound's enterprise sales motion — not because Profound is worse, but because the procurement model on the other side of the “customized enterprise pricing” page does not fit a $3,000 retainer.
That is the verdict. The rest of this post defends it section by section, with a small comparison table inside each section and a Verdict card at the end. Every claim about Profound is sourced to their own material or named outlets (Fortune, GlobeNewswire, Wilson Sonsini). Every claim about GenPicked is from the public pricing page and the published ACS formula.
The market shifted underneath both products. Per Conductor's State of AEO/GEO 2026, a survey of 250+ enterprise digital leaders, 56% of CMOs made significant AEO/GEO investment in 2025 and 94% are increasing spend, with AEO/GEO ranking the #1 strategic marketing priority and 97% reporting positive impact. Per the 6sense Buyer Experience Report 2025, 94% of B2B buyers now use LLMs during the buying journey. And per Loamly's State of AI Traffic 2026, 77% of analyzed brands — 1,619 of 2,089 — score below 5/100 on AI visibility, while the visible brands convert at three times the Google rate. That is the gap every AEO platform is selling into. The question for an agency owner is which platform's procurement model lets them stand inside it on Monday morning.
Funding and posture: the procurement gap
Profound's February Series C announcement states the platform is “purpose-built to track brand visibility, sentiment, and performance across AI answer engines.” The round was led by Lightspeed with Sequoia, Kleiner Perkins, Khosla, Saga VC, South Park Commons, and Evantic participating — total funding past $155M at a $1B valuation per the Wilson Sonsini deal note. Disclosed logos include Target, Walmart, Figma, Ramp, MongoDB, Chime, U.S. Bank, and Charlotte Tilbury. None of that is a critique. It is a description of who Profound was financed to sell to.
GenPicked sits on the other side. Published platform tiers, self-serve checkout, per-brand pricing that scales linearly with portfolio size. The blended ARPU across the customer base lands in the $350-$500/month range. There is no sales cycle on the path from interest to first audit.
| Procurement axis | Profound | GenPicked |
|---|---|---|
| Funding posture | $155M+ raised, $1B valuation | Bootstrapped, early stage |
| Public pricing | “Customized enterprise pricing” | $97 / $197 / $397 + per-brand |
| Path to first audit | Request a quote, sales cycle | Self-serve, minutes |
For a 12-person agency, the procurement gap is the whole decision. A $1B-valuation enterprise sales motion does not fit a $3k retainer. The verdict on this axis goes to GenPicked by default, because the pricing is on the website.
Engine coverage: breadth vs. weighted relevance
Profound has the wider net. Per Profound's own GEO tools post, the platform covers 10+ AI answer engines: ChatGPT, Perplexity, Claude, Gemini, Google AI Overviews, Google AI Mode, Microsoft Copilot, Grok, DeepSeek, Meta AI, and Amazon Rufus. GenPicked covers five: ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews. On the count alone, Profound wins.
The count is not the only number that matters. Per Semrush's chatgpt.com analytics, ChatGPT alone dominates LLM consumer traffic. The ACS engine weights — ChatGPT 0.35, Perplexity 0.25, Gemini 0.25, Claude 0.15 — are calibrated to consumer LLM share, not to brand-mention density. For an agency whose clients are plumbers, dentists, Shopify brands, and B2B SaaS in the United States, five-engine coverage of the consumer-volume engines lands close to the practical visibility surface. The marginal value of Grok, Amazon Rufus, DeepSeek, or Meta AI tracking depends entirely on whether your clients have X-native audiences, retail exposure, emerging-market footprints, or Meta-platform reach.
| Engine layer | Profound | GenPicked |
|---|---|---|
| Core consumer engines | ChatGPT, Perplexity, Gemini, Claude, AIO | ChatGPT, Perplexity, Gemini, Claude, AIO |
| Additional engines | Grok, DeepSeek, Meta AI, Amazon Rufus, Copilot, AI Mode | None |
| Weighted blending | Aggregated, not publicly disclosed | Published weights (0.35 / 0.25 / 0.25 / 0.15) |
If a single client of yours has heavy retail Amazon exposure, runs X paid social hard, or sells into Chinese-speaking markets where DeepSeek matters, Profound's 10+ engine coverage is not a vanity stat. It is the line item that justifies the enterprise contract. Engine breadth is the one place Profound's procurement burden buys you something a published per-brand price cannot.
Profound wins engine breadth. GenPicked covers the engines that drive the majority of consumer LLM volume and publishes the weights it blends them with. For most sub-100-employee agency portfolios, the GenPicked surface is sufficient; for retail-Amazon, X-heavy, or Asia-facing clients, Profound's extra engines are decisive.
Pricing math against a real retainer
The Profound pricing page reads, verbatim: “Currently available through customized enterprise pricing.” That is the entire pricing disclosure. Third-party reviewers have floated rough tiering — entry around $99/mo, mid around $399/mo, enterprise $2,000-$5,000+/mo — but those numbers are not from Profound's own site and should be treated as estimates. If you are building a Q3 budget, you have to request a quote and wait for the sales cycle to return a number.
GenPicked publishes everything. Platform tiers are Starter $97/mo, Growth $197/mo, and Scale $397/mo, with per-brand AEO add-ons at $75 / $149 / $299 / $525 per brand per month depending on the depth of monitoring. A typical mid-size agency runs a $197 platform plan with five brands at $75 each, totaling $572/month all-in. The annual discount lands at approximately 20% off. The line item drops straight into a $1,500-$5,000/mo retainer without renegotiating anything.
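Under the published numbers above, the portfolio math is a one-liner. A minimal sketch, assuming the ~20% annual discount applies as a flat multiplier (the exact discount mechanics are not published); the function and variable names are illustrative, not GenPicked's API:

```python
# Published platform tiers and per-brand add-on prices quoted in this post.
PLATFORM = {"starter": 97, "growth": 197, "scale": 397}
PER_BRAND = [75, 149, 299, 525]  # per brand per month, by monitoring depth

def monthly_total(tier, brand_seats, annual=False):
    """Total monthly spend for one agency.

    brand_seats: list of per-brand add-on prices, one entry per client brand.
    annual=True applies the ~20% discount as a flat 0.8 multiplier (assumption).
    """
    total = PLATFORM[tier] + sum(brand_seats)
    return round(total * 0.8, 2) if annual else total
```

`monthly_total("growth", [75] * 5)` returns 572, the all-in figure quoted above; adding a sixth brand is one more list entry, not a contract renegotiation.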
| Pricing axis | Profound | GenPicked |
|---|---|---|
| Published price | None | Yes, on /pricing |
| Scales with portfolio | Contract renegotiation | Per-brand seat addition |
| Typical mid-agency spend | Estimated $2k-$5k+/mo (third-party) | ~$572/mo all-in (5 brands) |
| Free public score tool | No | Yes, free ACS at /tools/aeo-score |
The free public ACS tool is an underrated agency asset. It is top-of-funnel outbound bait: score a prospect's domain in under a minute, walk into the cold-email or the discovery call with a real number and a band classification (invisible, emerging, competitive, category-leader), and lead the conversation with data instead of pitch. Profound's enterprise procurement motion does not need that surface area; an agency does.
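The band logic an agency would script around that tool is simple enough to sketch. Only two boundaries are stated in this post (below 20 is invisible, above 40 is competitive); the emerging range and the 70-point category-leader cutoff below are assumptions, not published thresholds:

```python
def acs_band(score):
    """Map an ACS score (0-100) to the four bands named in the post.

    The 20 and 40 boundaries follow the figures quoted in this post;
    the 70-point category-leader cutoff is an assumption for illustration.
    """
    if score < 20:
        return "invisible"
    if score < 40:
        return "emerging"
    if score < 70:
        return "competitive"  # cutoff at 70 is an assumption
    return "category-leader"
```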
Published pricing is the agency-fit signal. The line item that can be quoted in a Tuesday discovery call without three internal calls and a procurement form lives at GenPicked. Verdict is GenPicked, decisively, on pricing transparency.
Methodology you can defend in a client review
Picture the Wednesday client meeting. Your client asks: “Why did our score drop from 47 to 41 this month?” If your platform's answer is “proprietary algorithm,” the meeting ends with the client asking what they are paying for. If the answer is “ChatGPT subscore dropped 12 points because mentionRate fell from 0.68 to 0.51 on three tracked queries — here are the queries, here is the diff,” the meeting ends with a renewal conversation. That is the agency reality vendors skip.
GenPicked publishes the ACS formula. Per-engine subscore is calculated as mentionRate × 60 + positionScoreAvg × 25 + mentionDensity × 15, capped at 100. Engine weights are ChatGPT 0.35, Perplexity 0.25, Gemini 0.25, Claude 0.15, with failed engines dropped and weights re-normalized across the remaining engines — a Gemini API outage never drags the score to zero. Brand detection runs on the full response text with case-insensitive word-boundary regex covering www. prefix, bare domain, base name, capitalized, and all-caps variants. A URL attribution to the client domain in a sources[] array also counts as a citation even if the brand name is not in the narrative text.
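Pulled together, the published formula, the re-normalization rule, and the detection variants look roughly like this. A sketch only: the input normalization (rates and position scores in [0, 1]) and all function names are assumptions, while the coefficients, engine weights, and variant list come from the published material above:

```python
import re

# Published ACS engine weights.
ENGINE_WEIGHTS = {"chatgpt": 0.35, "perplexity": 0.25, "gemini": 0.25, "claude": 0.15}

def engine_subscore(mention_rate, position_score_avg, mention_density):
    """Published per-engine formula: mentionRate*60 + positionScoreAvg*25
    + mentionDensity*15, capped at 100. Inputs in [0, 1] is an assumption."""
    return min(100.0, mention_rate * 60 + position_score_avg * 25 + mention_density * 15)

def blended_score(subscores):
    """Blend per-engine subscores. Failed engines (None) are dropped and the
    remaining weights re-normalized, so one API outage never zeroes the score."""
    live = {e: s for e, s in subscores.items() if s is not None}
    if not live:
        return 0.0
    total_weight = sum(ENGINE_WEIGHTS[e] for e in live)
    return sum(ENGINE_WEIGHTS[e] * s for e, s in live.items()) / total_weight

def brand_pattern(domain, base_name):
    """Case-insensitive word-boundary matcher over the variants described
    above: www. prefix, bare domain, base name (IGNORECASE also covers the
    capitalized and all-caps variants). URL attribution via sources[] is a
    separate check, not shown here."""
    variants = [re.escape(v) for v in (f"www.{domain}", domain, base_name)]
    return re.compile(r"\b(?:" + "|".join(variants) + r")\b", re.IGNORECASE)
```

With a Gemini outage, `blended_score({"chatgpt": 72, "perplexity": 55, "gemini": None, "claude": 80})` re-normalizes over the remaining 0.75 of weight instead of silently averaging in a zero.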
The methodology rigor behind the formula is the GenPicked Research Team Fitness Wearables Study, which applied a Bradley-Terry pairwise comparison model — the same maximum-likelihood estimation method used in academic preference research and sports-ranking systems — to AI-engine prompts across GPT-5, Claude 4, Gemini 2.5, and DeepSeek V3 for Oura, Whoop, Garmin, Apple Watch, and Fitbit. Oura ranked first overall (Bradley-Terry score 1.82, 95% CI [1.71, 1.94]) with statistically meaningful separation from Whoop (1.44). The study also surfaced the diagnostic agencies need: Claude is 6.7× more reactive to brand anchoring than GPT-5 — the empirical reason model-split reporting is non-negotiable for valid AEO measurement.
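For readers who want to see the method class rather than take it on faith, a Bradley-Terry fit is a short piece of code. The sketch below is a generic MM (Zermelo) maximum-likelihood iteration with illustrative toy data — not the GenPicked team's implementation, and not the study's dataset:

```python
def fit_bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths from pairwise win counts via the classic
    MM (Zermelo) iteration. wins[a][b] = times brand a beat brand b.
    Strengths are normalized to mean 1; higher means preferred more often."""
    brands = list(wins)
    p = {b: 1.0 for b in brands}
    for _ in range(iters):
        new_p = {}
        for i in brands:
            total_wins = sum(wins[i].values())
            # Denominator sums n_ij / (p_i + p_j) over all opponents j.
            denom = sum(
                (wins[i].get(j, 0) + wins[j].get(i, 0)) / (p[i] + p[j])
                for j in brands if j != i
            )
            new_p[i] = total_wins / denom if denom else p[i]
        mean = sum(new_p.values()) / len(new_p)
        p = {b: v / mean for b, v in new_p.items()}
    return p

# Toy pairwise data for illustration only -- NOT the study's numbers.
wins = {
    "oura":   {"whoop": 8, "garmin": 7},
    "whoop":  {"oura": 2, "garmin": 6},
    "garmin": {"oura": 3, "whoop": 4},
}
strengths = fit_bradley_terry(wins)
```

The output is a strength per brand on a common scale, which is what makes cross-engine rankings comparable; confidence intervals like the study's 95% CIs come from bootstrapping or the Fisher information, neither shown here.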
| Methodology axis | Profound | GenPicked |
|---|---|---|
| Scoring formula | Aggregate stats published, algorithm not | Per-engine formula published |
| Engine weight blending | Not disclosed | 0.35 / 0.25 / 0.25 / 0.15 disclosed |
| Underlying research | Proprietary prompt-volume + crawler data | Bradley-Terry pairwise study, 95% CIs |
AI engines disagree at the answer layer, and that is where transparent methodology stops being academic. Per Profound's own published data, ChatGPT mentions brands in roughly 73.6% of answers; Claude mentions them in 97.3%. Per the Spotlight Articles February tracking guide, Grok and Copilot land at 90%+, while AIO sits around 48.5%. A score that averages all of them tells you nothing useful. The ACS engine weighting is calibrated to consumer LLM share, not brand-mention density. Both are defensible choices — only one of them is published, and only the published one is defensible in a client meeting without saying “trust us.”
Profound has the deeper proprietary data layer (prompt volume, crawler analytics, aggregate brand-mention stats). GenPicked has the published formula and Bradley-Terry-backed methodology. For agency-to-client defensibility, the published formula wins; for in-house BI-team analytics, the deeper proprietary stack wins.
Agency workflow: multi-brand, white-label, diff alerts
Profound's workflow is shaped around a single enterprise brand with a marketing team, a BI team, and a procurement department. GenPicked's workflow is shaped around an agency running 5-50 brands in a single organization, where the operator is the same human who pitched the retainer and writes the monthly report.
The shape difference shows up in three places. First, multi-brand workspaces with per-brand AEO seats that scale linearly — add a client, add a brand. Second, the diff engine that classifies every snapshot change as one of nine event types: new_mention, lost_mention, position_improved, position_dropped, sentiment_improved, sentiment_dropped, new_competitor, competitor_lost, source_changed — tagged severity critical, warning, positive, or neutral. A lost_mention always carries critical severity, with an alert payload like “Brand X disappeared from 'best running shoes for flat feet' on GPT-5” and before/after snapshot context. That is the unit of evidence that gets a retainer renewed.
Third, white-label is native and tier-gated. GenPicked-branded reports on Free and Starter, agency-logo PDF swap on Growth and Pro, custom report templates plus resale rights and a fully white-labeled client portal on Scale. Reports come as both a web page at /reports and a downloadable PDF at /api/reporting/export-pdf. Profound's agency surface lives in the partner-led Profound Ecosystem (training, certification, agency marketplace, Profound University). The Ecosystem is real and growing — it is just not the same shape as a native white-label PDF that an agency owner can re-skin before tomorrow's QBR.
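The nine-event taxonomy is concrete enough to sketch. Everything below is hypothetical shape, not GenPicked's API: the snapshot fields and function names are assumptions, the severity assignments other than lost_mention's always-critical rule are guesses, and the two competitor events are omitted because they compare tracked-competitor sets rather than a single brand snapshot:

```python
# Severity per event type. Only lost_mention = critical is stated in the
# post; the other assignments are plausible guesses for illustration.
SEVERITY = {
    "lost_mention": "critical",
    "new_mention": "positive",
    "position_improved": "positive",
    "position_dropped": "warning",
    "sentiment_improved": "positive",
    "sentiment_dropped": "warning",
    "source_changed": "neutral",
    # new_competitor / competitor_lost need the competitor set; omitted here.
}

def diff_snapshots(before, after):
    """Compare two per-query snapshots and emit (event_type, severity) pairs.

    Assumed snapshot shape: {'mentioned': bool, 'position': int | None,
    'sentiment': float, 'sources': list[str]}.
    """
    events = []
    emit = lambda e: events.append((e, SEVERITY[e]))
    if before["mentioned"] and not after["mentioned"]:
        emit("lost_mention")
    elif after["mentioned"] and not before["mentioned"]:
        emit("new_mention")
    if before["position"] and after["position"]:
        if after["position"] < before["position"]:
            emit("position_improved")  # lower rank number = better placement
        elif after["position"] > before["position"]:
            emit("position_dropped")
    if after["sentiment"] > before["sentiment"]:
        emit("sentiment_improved")
    elif after["sentiment"] < before["sentiment"]:
        emit("sentiment_dropped")
    if set(before["sources"]) != set(after["sources"]):
        emit("source_changed")
    return events
```

A brand vanishing from a tracked query produces `[("lost_mention", "critical")]`, which is exactly the alert payload the retainer-renewal conversation runs on.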
| Workflow axis | Profound | GenPicked |
|---|---|---|
| Workflow built for | Single-brand enterprise team | Multi-brand agency portfolio |
| White-label reports | Partner-led via Agency Marketplace | Native white-label PDFs (Growth+) |
| Diff/alert taxonomy | Proprietary, not publicly documented | 9 named event types, 4 severity levels |
| Monthly client report format | Enterprise-team configurable | Web + PDF, agency-logo by default |
Profound Agents is a real product surface. Per the Series C blog, autonomous agents are used by 500+ customers daily for content generation and optimization. That is the kind of automation an enterprise marketing team can absorb. An agency producing white-label QBR decks needs different surface area — a reskinnable PDF that hits a client inbox on the first of the month — but it is fair to credit Profound for shipping autonomous workflow that GenPicked has not yet matched at the same scale.
For multi-brand agency workflow with white-label client deliverables, GenPicked is built for the shape. For enterprise automation at scale, Profound's agent surface is genuinely ahead. Verdict on agency workflow specifically: GenPicked.
Compliance, customer base, and the trust conversation
Profound's compliance posture — SOC 2 Type II and HIPAA-attested via Sensiba LLP per the Profound homepage trust copy — is not a marketing flourish. For agencies that serve healthcare systems, regulated finance, or government, it is the floor. If your retainer book is hospital networks and RIA firms, Profound's posture clears procurement in a way GenPicked's standard SaaS posture currently does not.
Brand recognition is the second axis. Being in Lightspeed's portfolio, on the Fortune cover, at a $1B valuation, with Target and Walmart as logos — that recognition cost $155M+ of capital, and it shortens the trust conversation when an enterprise procurement team is doing due diligence. Just as HubSpot's brand presence does real work in the SMB conversation, Profound's brand presence does real work in the enterprise CMO conversation. It is not a feature on a pricing page; it is a real asset.
| Trust axis | Profound | GenPicked |
|---|---|---|
| Compliance | SOC 2 Type II, HIPAA-attested | Standard SaaS posture |
| Disclosed customer base | 700+ enterprises, 10%+ Fortune 500 | Agency portfolios, private |
| Logo recognition with CMOs | Target, Walmart, Figma, Ramp, MongoDB | Agency-first, no published enterprise logos |
For HIPAA-bound clients or enterprise procurement, Profound clears the bar. For agencies whose end-clients are SMB plumbers, dentists, Shopify brands, B2B SaaS, and regional law firms, that bar is overkill. The trust conversation goes to Profound when the audience is the Fortune 500, to GenPicked when the audience is a sub-100-employee agency owner.
The attribution reality vendors skip
Both platforms exist because the surface they measure is no longer optional. Per Semrush's AI search traffic study, the average AI search visitor is 4.4 times as valuable as the average organic visit. Per Yotpo's AEO research referencing Seer Interactive's update, brands cited in AI Overviews earn 35% more organic clicks and 91% more paid clicks than uncited competitors. The Google CTR drop is the other side of the same coin — Search Engine Land's coverage of Seer's data documents a 61% drop in organic CTR and a 68% drop in paid CTR when AI Overviews appear, on queries that now trigger on roughly 50% of tracked terms with a +58% YoY increase per BrightEdge's 12-month study.
The attribution gap matters because per Ahrefs' 863,000-keyword study, only 38% of pages cited in Google AI Overviews rank in the organic top 10 for the same query — down from 76% seven months earlier. Traditional Google ranking and AI visibility are drifting apart. If your client's monthly report is built only on GA4 organic data, you are reporting on a shrinking surface. The AEO score is the new line item, and the question is only which platform's score you can defend in the meeting.
The decision rule, in four lines
Choose Profound if: you are part of an enterprise team rather than an agency, you have a procurement department that handles custom contracts, you need 8+ engines tracked including Grok, DeepSeek, Meta AI, or Amazon Rufus, or you need HIPAA-attested AEO infrastructure for healthcare, finance, or regulated clients.
Choose GenPicked if: you are a sub-100-employee agency that needs a published price, multi-brand workspaces, white-label PDFs at the Growth tier, and an ACS scoring formula you can explain to your client without a sales engineer in the room.
Hybrid stack: some agencies will run GenPicked as the primary multi-brand reporting layer and bring Profound in on a single named enterprise client whose engine mix or compliance footprint demands the extra surface. That is a defensible architecture; it is also more procurement than most agencies need.
Validate first: if you have one client and want to test whether AEO is real before committing budget, run the free public ACS score on their domain. If the number lands below 20, you have an “invisible” client and a clear retainer pitch waiting. If it lands above 40, you have a “competitive” client and a defense pitch. Either way, you walk into Wednesday's meeting with data and a band classification instead of a sales-engineer scheduling email. Start there: start a 14-day Growth trial and run the score against your top three retainers before the next QBR.