Building Your AEO Portfolio: A Step-by-Step Guide
In this guide, you will learn how to assemble a portfolio that proves you can do the AEO Strategist job in practice: two sample audits, a methodology writeup, and a public teardown of an AEO tool's claims. Each piece has a template, a target length, and a finished example to model against. Budget 20-30 hours total.
Where you are in the curriculum
This is Lesson 8.3, the hands-on guide in Module 8. Lessons 8.1 and 8.2 were foundational: what the role is, what skills it needs. This is the doing lesson. By the end, you will have a portfolio folder you can link to from your LinkedIn, your resume, or your cold emails.
Why the portfolio matters more than the resume
AEO has no established credentialing path. There are no university degrees in it, and the certifications that exist (most of them vendor-issued) prove little about whether you can actually do the work. Hiring managers in adjacent fields learned this the hard way with SEO, where a "certified" candidate often cannot run an audit, and they are applying the same skepticism to AEO from the start.
What they look at instead is work product. A well-structured audit tells them in five minutes what a resume cannot tell them in an hour: can you design a measurement, execute it cleanly, read the results, and explain them to a non-technical stakeholder? That is the entire job.
Your portfolio is the answer to that question.
Claim-evidence block: Hiring managers in technical marketing specializations discount credentials and read work product. HBR (2025) documents that executives are being asked AEO questions they cannot answer, a governance gap more than a technical one, which means hiring managers are looking for practitioners who can demonstrate competence, not certification. Across SEO, analytics, and growth hiring over the last decade, role-specific portfolios (case studies, audits, teardowns) have consistently been reported as the highest-signal artifact for hiring decisions. AEO hiring is following the same pattern. The practical implication: the candidate who links to two real audits will be read ahead of the one who lists a dozen certifications. (the brand intelligence gap)
What goes in the portfolio
Four pieces. You can build more, but you need at least these four to be taken seriously.
| Piece | Format | Target length | Purpose |
|---|---|---|---|
| Sample audit A | Written report + spreadsheet | 8-12 pages + 1 sheet | Proves you can run the audit end-to-end |
| Sample audit B | Written report + spreadsheet | 8-12 pages + 1 sheet | Proves audit A was not a one-off |
| Methodology writeup | Standalone essay | 1,500-2,500 words | Proves you can explain your method |
| Public teardown | Blog post / LinkedIn article | 1,000-1,500 words | Proves methodology skepticism |
Optional fifth piece: a 3-minute Loom video walking a hiring manager through one of the audits. Not required, but strategists who have one get interview response rates roughly twice those of strategists who do not.
The portfolio folder: file structure
Before you write anything, set up the folder. A clean structure is itself a signal.
```
aeo-portfolio/
  README.md                      ← one-page overview with links
  audit-a-<brand>/
    audit-report.pdf
    audit-spreadsheet.xlsx
    raw-outputs/
      chatgpt-run-01.txt
      claude-run-01.txt
      gemini-run-01.txt
      perplexity-run-01.txt
      ...
  audit-b-<brand>/
    (same structure)
  methodology-writeup.pdf
  public-teardown-<vendor>.pdf
  loom-walkthrough.md            ← link to the video, with 3-line summary
```
Host this on GitHub (public repo), Notion (public page), or a personal site. Do not put it in Google Drive with a share link that expires. Hiring managers want something clickable and permanent.
Piece 1: Sample audit A (the anchor)
This is the piece everyone will look at first. It needs to be tight.
Step 1: Choose a brand
Pick a brand that meets three criteria:
- Mid-size, not enterprise-giant. Big enterprises have too much noise in the data. A mid-sized B2B SaaS company, a DTC brand, or a professional services firm gives cleaner signal.
- You have no relationship with it. You are not being paid. You do not work there. This keeps the audit unambiguously yours to publish.
- It has at least three direct competitors you can name. Comparison is where the audit gets interesting.
Good example categories to pick from: project management tools, CRMs for small businesses, accounting software, marketing analytics platforms, e-commerce platforms.
Step 2: Build the question set
Twelve to twenty prompts, split across three prompt types. The AEO Strategist Starter Kit has the template; this is the condensed version:
- Category-level prompts (the brand is not named): "What are the best [category] tools for [use case]?", at least 5 prompts with varied phrasing.
- Brand-perception prompts (the brand is named, no competitors): "Tell me about [brand]. Who is it for? What are its strengths and weaknesses?", at least 4 prompts.
- Head-to-head prompts (brand vs. competitor): "Compare [brand] to [competitor] for [use case]. Which is better?", at least 4 prompts, rotating which brand appears first.
The rotation matters. Position bias is real, and your audit needs to demonstrate you know how to correct for it.
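To make the build concrete, here is a minimal sketch of a question-set generator in Python. The brand, competitors, and use cases are illustrative placeholders, not part of the Starter Kit template; swap in your own.

```python
# Minimal sketch: generate the three prompt types for one audit.
# Brand, competitor, and use-case names below are illustrative.
import itertools

brand = "ExampleCRM"
competitors = ["RivalOne", "RivalTwo", "RivalThree"]
category = "CRM tools for small businesses"
use_cases = ["a 10-person sales team", "a solo consultant"]

prompts = []

# Category-level (blind): the brand is never named.
for use_case in use_cases:
    prompts.append(f"What are the best {category} for {use_case}?")
    prompts.append(f"Which {category} would you recommend for {use_case}?")

# Brand-perception: the brand is named, competitors are not.
prompts.append(f"Tell me about {brand}. Who is it for? "
               "What are its strengths and weaknesses?")

# Head-to-head: alternate which brand appears first to correct position bias.
for i, (competitor, use_case) in enumerate(
        itertools.product(competitors, use_cases)):
    first, second = (brand, competitor) if i % 2 == 0 else (competitor, brand)
    prompts.append(f"Compare {first} to {second} for {use_case}. "
                   "Which is better?")

for n, p in enumerate(prompts, 1):
    print(f"{n:02d}. {p}")
```

Even if you run the prompts by hand in each chat interface, generating them this way guarantees the rotation is systematic rather than ad hoc.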
Step 3: Run the prompts
Four models: ChatGPT (GPT-4o or latest), Claude (Sonnet or Opus, latest), Gemini (latest), and Perplexity (default mode). Three runs per prompt per model. Capture raw outputs in the raw-outputs/ subfolder.
With 12 prompts, that is 12 prompts × 4 models × 3 runs = 144 captures (more if your set runs toward 20). Budget about 4-6 hours. Not glamorous. Do it cleanly.
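If you script the capture rather than copy-pasting from chat interfaces, the loop is short. A minimal sketch follows; call_model is a hypothetical placeholder you would wire to each vendor's API, and the file naming simply extends the raw-outputs/ layout above.

```python
# Minimal sketch of the capture loop. call_model() is hypothetical:
# wire it to each vendor's API, or fill the files in by hand.
from pathlib import Path

MODELS = ["chatgpt", "claude", "gemini", "perplexity"]
RUNS = 3

def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("Wire this to the vendor's API.")

out_dir = Path("audit-a-examplecrm/raw-outputs")
out_dir.mkdir(parents=True, exist_ok=True)

for p_num, prompt in enumerate(prompts, 1):  # prompts from the sketch above
    for model in MODELS:
        for run in range(1, RUNS + 1):
            text = call_model(model, prompt)
            # One file per capture keeps the audit trail inspectable.
            fname = f"{model}-prompt{p_num:02d}-run{run:02d}.txt"
            (out_dir / fname).write_text(f"PROMPT: {prompt}\n\n{text}")
```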
Step 4: Score the outputs
In your spreadsheet, score each capture on:
- Brand named (yes/no). For category prompts only.
- Position of mention. 1 = first named brand, 2 = second, etc.
- Description accuracy. 1-5 rubric: 1 = materially wrong, 5 = crisp and accurate.
- Competitor set. Which competitors did the model name alongside?
- Citation behavior. Did the model cite a source? Which one?
Calculate three summary metrics:
- Share of model: across category prompts, in what percentage of outputs was the brand named at all?
- Average position: when named, where did it appear in the list?
- Perception consistency: in brand-perception prompts, did the three runs describe the brand consistently, or did the description drift?
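The first two metrics are simple arithmetic once the scoring is done. A minimal sketch, assuming one scored row per capture (the field names are illustrative, not a fixed schema):

```python
# Minimal sketch: summary metrics from scored captures.
rows = [
    {"prompt_type": "category", "brand_named": True,  "position": 2},
    {"prompt_type": "category", "brand_named": False, "position": None},
    {"prompt_type": "category", "brand_named": True,  "position": 1},
    # ... one row per capture, exported from your spreadsheet
]

category = [r for r in rows if r["prompt_type"] == "category"]

# Share of model: % of category outputs naming the brand at all.
share_of_model = sum(r["brand_named"] for r in category) / len(category)

# Average position: mean list position across outputs that named the brand.
positions = [r["position"] for r in category if r["brand_named"]]
avg_position = sum(positions) / len(positions) if positions else None

print(f"Share of model: {share_of_model:.0%}")
print(f"Average position: {avg_position}")
```

Perception consistency resists a one-liner; score it by reading the three runs side by side against your rubric and noting where the descriptions drift.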
Step 5: Write the report
Eight to twelve pages, structured like this:
- Executive summary (one page). Top three findings. Written for a CMO.
- Methodology (one page). Prompt count, runs, models, scoring rubric. Include a line on limitations, what this audit can and cannot say.
- Category visibility (two pages). Share of model, average position, named competitor set.
- Brand perception (two pages). How each model describes the brand. Consistency across runs. Accuracy.
- Head-to-head (two pages). How the brand compares to its main competitor in head-to-head prompts.
- Recommendations (one page). Three to five specific, implementable actions. Not vague.
- Appendix (as needed). Spreadsheet link. Raw-output sample.
A good audit report reads like a consulting deliverable. Crisp. Decisive. Honest about limitations.
Step 6: Publish
PDF the report. Put it in audit-a-<brand>/. Link to it from the README. Done.
Piece 2: Sample audit B (the reliability proof)
Audit B has the same structure as audit A. The point is to show that audit A was not a fluke: you can do the work repeatedly and cleanly, with similar rigor, on a different brand in a different category.
Two small changes to make audit B interesting:
- Different category from audit A. If A was B2B SaaS, pick a DTC brand. If A was professional services, pick a software tool.
- Add one methodology refinement. Maybe you add a Latin Square rotation for head-to-head prompts (Module 5; see the sketch below). Maybe you add a fifth model (Meta AI, Grok, or a Chinese model like Qwen). Pick one refinement and note it in the methodology section.
That small refinement signals growth. It tells the hiring manager: this person is iterating, not just repeating.
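If you pick the Latin Square refinement, the construction is small enough to script. A minimal sketch with illustrative competitor names: each competitor appears in each position exactly once across runs, so position bias averages out.

```python
# Minimal sketch: cyclic Latin Square rotation for head-to-head prompts.
# Competitor names are illustrative.
competitors = ["RivalOne", "RivalTwo", "RivalThree"]

def latin_square(items: list[str]) -> list[list[str]]:
    # Row i is the list rotated left by i; every item hits every position once.
    n = len(items)
    return [items[i:] + items[:i] for i in range(n)]

for run, order in enumerate(latin_square(competitors), 1):
    listed = ", ".join(order)
    print(f"Run {run}: Compare [brand] to {listed} for [use case]. "
          "Which is better?")
```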
Budget: about 60-70% of the time audit A took. You already have the templates and the rhythm.
Piece 3: The methodology writeup
This is a standalone essay, 1,500 to 2,500 words, that explains how you audit, independent of any specific brand. Structure it in four parts.
Part 1: Your philosophy (300-500 words)
Two or three principles you bring to AEO measurement. Example principles:
- Blind prompts before named prompts, always. (blind vs named measurement)
- Three runs per prompt minimum, because one-shot outputs are noise.
- Disclose limitations explicitly, because construct validity is never assumed.
State them clearly. Explain each one in three or four sentences.
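To make the first principle concrete, here is one way to report the blind-versus-named contrast. A minimal sketch with illustrative counts; the point is that the blind rate, not the named rate, is the headline visibility number.

```python
# Minimal sketch: blind vs. named rates. Counts are illustrative.
blind_named   = 9    # category captures where the brand appeared unprompted
blind_total   = 60   # all category captures
named_present = 57   # brand-perception captures that discussed the brand
named_total   = 60

blind_rate = blind_named / blind_total    # true unprompted visibility
named_rate = named_present / named_total  # near-ceiling by construction

print(f"Blind mention rate:  {blind_rate:.0%}")   # 15%
print(f"Named response rate: {named_rate:.0%}")   # 95%
# Report the blind rate as visibility. A score built on named prompts
# alone inherits the sycophancy contamination described below.
```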
Part 2: The audit template you use (500-800 words)
Walk through the exact audit template you use, the same one you ran in Pieces 1 and 2. Prompt counts, run counts, model selection, scoring rubric, summary metrics. A reader should be able to adopt your template after reading this section.
Part 3: What your template does not measure (300-500 words)
Every methodology has blind spots. Listing yours is the highest-trust move you can make in this writeup. Examples:
- The audit is a point-in-time snapshot; it does not capture drift over weeks.
- Scoring is rubric-based, not inter-rater reliability tested.
- Model selection excludes proprietary enterprise models like Microsoft Copilot.
State these plainly. Do not apologize for them. Just be clear.
Part 4: How you would improve it with more time (300-500 words)
Finish with three things you would add if you had a larger budget or a client engagement. This signals aspiration and honesty about current limitations. Examples:
- Inter-rater reliability scoring with a second rater (see the sketch below).
- Weekly longitudinal tracking for perception drift.
- Programmatic prompt rotation via the model APIs rather than the chat interfaces.
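The first improvement has a standard statistic behind it: Cohen's kappa, which corrects raw agreement for chance. A minimal sketch with illustrative ratings on the 1-5 accuracy rubric:

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same captures
# on the 1-5 accuracy rubric. Ratings are illustrative.
from collections import Counter

rater_a = [5, 4, 4, 3, 5, 2, 4, 4, 3, 5]
rater_b = [5, 4, 3, 3, 5, 2, 4, 5, 3, 5]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement, from each rater's marginal score distribution.
count_a, count_b = Counter(rater_a), Counter(rater_b)
expected = sum((count_a[s] / n) * (count_b[s] / n)
               for s in set(count_a) | set(count_b))

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```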
Why this writeup matters
A hiring manager who reads your methodology writeup learns more about you than they do from either individual audit. An audit shows what you found. The methodology writeup shows how you think. Senior hires are made on the second.
Claim-evidence block: The single most common methodology error in current AEO reporting is the absence of a blind-prompt baseline. Across the 27+ AEO platforms now in market (Ekamoira 2026), only a minority publish a blind vs. named methodology disclosure. Sycophancy is documented across major LLMs and traces to RLHF preference optimization (Sharma et al. 2024), which means almost every public "AI Visibility Score" is vulnerable to sycophancy contamination: the model is more likely to name a brand if the prompt already mentioned it. Candidates who document a blind-first methodology in their writeup instantly differentiate themselves. (blind vs named measurement)
Piece 4: The public teardown
The teardown is where you show methodology skepticism in public. Pick one AEO vendor (named and funded, one of the 27+ in the market). Read their methodology page, their public case studies, and any technical posts they have published. Then write a 1,000-1,500 word post evaluating their claims.
Five questions to structure the teardown
- What exactly does their dashboard claim to measure? Quote the vendor's language. Identify the construct (e.g., "AI Visibility Score").
- What would a valid measurement of that construct require? Methodology checklist: blind vs. named? Prompt count? Run count? Model set? Rotation? Confidence intervals?
- What does the vendor disclose about their method? Specifically. With quotes and page references.
- Where is the gap between what is claimed and what is disclosed? This is the core of the teardown. Be specific. Name what is missing.
- What would you want to see added for the number to be trustworthy? Constructive. Not a hit piece.
A note on tone
A good teardown is rigorous, not adversarial. You are not attacking the vendor. You are reading their work the way a peer reviewer reads a paper. If a vendor has strong methodology, say so. If they have weak methodology, say that too, specifically, with evidence. Rigor earns respect. Hit pieces earn enemies.
Publish the teardown on Medium, LinkedIn, or your personal blog. Link to it from your portfolio README.
Claim-evidence block: The AEO vendor market includes more than 27 named platforms, and only a minority publish detailed methodology pages. Profound (2026) marks the category's growth-stage capital, and the Conductor 2025 benchmarks report finds that enterprise SEO teams now allocate 15-25% of budget to AEO/GEO but cannot articulate what they are buying; methodology opacity is the default. This creates an opening for practitioners: the analyst who publishes one thoughtful teardown per quarter becomes a known reader of the market in under a year. The sample size is small enough that serious work gets noticed. (ai visibility market landscape)
Optional piece: The Loom walkthrough
A 3-minute video where you share your screen, open audit A, and walk a viewer through the three findings that matter most. Do not polish the production. Do shoot it more than once if the first take rambles.
Hiring managers who will not read a 12-page report will watch a 3-minute video. Both audiences matter.
Putting it together: the 30-day plan
- Weeks 1-2: audit A (report + spreadsheet).
- Week 3: audit B (report + spreadsheet), starting from your audit A templates.
- Week 4: methodology writeup, public teardown, README, optional Loom video.
If you are working on this part-time (evenings and weekends), double the timeline to eight weeks. Do not stretch beyond that. Momentum matters.
What a finished portfolio signals to a hiring manager
When someone lands on your README and clicks through to one of the audits, here is what they learn in fifteen minutes:
- You can design a measurement. (Methodology section reads clean.)
- You can execute it cleanly. (Spreadsheet is well-organized.)
- You can read results. (Executive summary is crisp.)
- You can write for a stakeholder. (Recommendations are specific.)
- You know your own limits. (Limitations disclosed.)
- You think about the field, not just the work in front of you. (Teardown shows methodology skepticism.)
That is the whole job in one folder. You are demonstrably ready.
Try this, right now
Open a new folder called aeo-portfolio/. Create the README, even if it is empty. Pick your brand for audit A. Write its name at the top of the README with a sentence about why you picked it. You have started.
Takeaways
- The portfolio beats the resume in AEO hiring. Four pieces (two audits, one methodology writeup, one public teardown) prove the whole job.
- Structure matters as much as content. A clean folder and a 1-page README signal discipline before anyone reads a word.
- The teardown is the rarest signal and the highest-leverage piece. It demonstrates methodology skepticism, which separates senior from junior practitioners.
What's next
In Lesson 8.4, the final lesson in this course from GenPicked Academy, we look forward. Where is AEO measurement heading over the next three years, what happens when AEO metrics become optimization targets, and what that means for the career you are now starting to build.
About this course
This lesson is part of AEO A to Z, the open course on Answer Engine Optimization published by GenPicked Academy. GenPicked Academy is where practitioners learn to measure AI recommendations with the same rigor a clinical trial demands: blind sampling, balanced question sets, and confidence intervals that hold up.
About the author: Dr. William L. Banks III is the lead researcher at GenPicked Academy and the architect of the three-layer AEO measurement architecture taught in this course. His work on sycophancy, popularity bias, and construct validity in AI search informs every lesson you just read.
See the methods in practice: GenPicked runs monthly brand-intelligence audits using the exact pipeline taught in Module 6. Read the case studies and audit walkthroughs on the GenPicked blog.