What Blog Posts Get Cited in ChatGPT? A 2026 AEO Playbook

ChatGPT's fan-out behavior favors commercial decision-support content over pure explainers. Three blog archetypes most likely to be cited — and what we still don't know.

Haley C.R. Button-Smith - Content Creator / Digital Marketing Specialist at Button Block

Published: May 11, 2026 · 13 min read

Key Takeaways

  • Commercial-intent content gets cited far more often than pure explainers. A 2026 Search Engine Land study of 90 prompts found commercial prompts triggered ChatGPT's fan-out behavior 78.3% of the time vs. 3.1% for informational prompts.
  • Three blog archetypes punch above their weight in citations: definitional explainers with embedded decision criteria, head-to-head comparison pages, and structured methodology checklists.
  • What we don't know matters as much as what we do. OpenAI doesn't publish ChatGPT's citation ranker, so the observed patterns are signals, not laws.
  • The Fort Wayne implication: a single, well-structured FAQ rewrite on your service page can do more for AI visibility than five generic blog posts.
  • Sourcing and structure beat length. Long posts without clear decision-support content tend to get skimmed and skipped.

What is ChatGPT actually doing when it cites a blog post?

When a customer asks ChatGPT, “Which HVAC company in Fort Wayne handles 24-hour emergency service?” the model rarely runs one search. According to Search Engine Land's April 2026 fan-out study by André Pitì and Ben Tannenbaum, modern generative search systems expand requests into “multiple background searches, then retrieve and synthesize across those subtopics.” That expansion — called query fan-out — is the part of the pipeline where your blog post either gets pulled into the answer or quietly skipped.

The Pitì–Tannenbaum sample was small but instructive: 90 prompts across three industries, 20 of which triggered fan-out expansion. Eighteen of those 20 were commercial-intent prompts. Two were informational. The 20 prompts generated 42 total sub-queries, of which 39 were commercial. That's the headline data point worth holding onto: commercial prompts triggered fan-out 78.3% of the time; informational prompts triggered it 3.1% of the time. The authors are explicit that the results are “directional, not universal” and skewed by the prompt mix they chose — but the asymmetry is large enough to take seriously.

This sits next to a finding from Moz that Crystal Ortiz cites in her review of long-term SEO in AI search: “only 12% of AI Mode citations mirror the URLs in organic results.” So the AI is not just rewarding the same pages Google ranks. Something different is going on with what AI engines pull when they need to compose an answer. Our companion post on why ChatGPT citations favor ranking and precision walks through the citation-mechanics side of this; in this piece we focus on the content side — which blog post structures actually show up in fan-out queries, and how a small business can write for them without inventing a 5,000-word “ultimate guide.”

If you're a Fort Wayne dental office, a small SaaS, or a service business writing one or two posts a month, the practical question becomes: what should those posts look like?


Why query fan-out changes what you should write

The Pitì–Tannenbaum piece makes a quiet but important argument: ChatGPT's fan-out behavior is biased toward “assisted decision support” — the moment in a buyer's journey where they're comparing, evaluating, or shortlisting. Their recommended formats are explicit: best-of and shortlist pages, comparison pages, “which tool should I choose” pages, feature-led category explainers, alternatives pages, evaluation FAQs, and recommendation-oriented paragraphs embedded inside broader educational content.

The pivot is subtle. They're not telling content teams to stop writing educational content. They're saying that purely educational content — the “what is X” piece that explains a category without naming products, tradeoffs, features, use cases, pricing logic, or selection criteria — is much less likely to align with the fan-out paths the model takes. Their phrase: “Your content model shouldn't be just ToFU or BoFU, but ToFU with commercial bridges.”

That's a useful frame for small businesses, because most of the AI-visibility coaching of the last 18 months has nudged people in the opposite direction — toward longer, more comprehensive explainer posts. The data here suggests that a 1,400-word explainer that names three competitors, lists their tradeoffs, and explicitly recommends a choice is doing more AEO work than a 4,000-word “ultimate guide to X” that never names anything specific.

The same direction shows up in Donna Rougeau's analysis of answer equity. Rougeau cites Seer Interactive data showing that paid CTR on informational queries has dropped 68% when Google's AI Overviews are present, and SISTRIX research showing Position 1 organic CTR falls from 27% to 11% — a 59% decline — when an AI Overview appears. The point isn't that informational content is worthless; it's that the click economy around informational content has collapsed. The value is shifting to content that gets cited inside the AI's answer rather than ranked under it.

The three blog archetypes most likely to get cited

The Pitì–Tannenbaum format list breaks down into three recurring patterns we've seen drive citations across our own client work. These aren't OpenAI-blessed categories — they're a Button Block synthesis. Name them out loud, build a few of each, and you'll have the bones of an AEO content program.

1. The Definition Post (with decision criteria baked in)

A definition post explains a category — “What is a managed Wi-Fi service?” — but does the thing the Pitì–Tannenbaum study calls a “commercial bridge.” It names the use cases, the typical tradeoffs, the pricing logic, and the kinds of buyers who pick each option.

The structural cue: short definitional answer (40–60 words) at the top, then “When to choose option A vs. option B” subsections, then a tradeoff table. The piece reads like an educational article and behaves like a buying guide.

We've seen these posts pull citations when they include an explicit “who this is for” paragraph near the top — the kind of canonical positioning Jes Scholz writes about in her piece on how AI models “understand” your brand. Her formula — [Brand] is a [market category] for [audience] who need [use case], differentiated by [proof] — is a single sentence that AI retrievers can lift cleanly.
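Scholz's formula is simple enough to template. The sketch below fills the one-sentence pattern programmatically, which is handy if you maintain positioning lines for several service pages; the brand and details in the example are made-up placeholders, not a real client:

```python
def positioning_sentence(brand, category, audience, use_case, proof):
    """Fill Jes Scholz's canonical positioning formula:
    [Brand] is a [market category] for [audience] who need [use case],
    differentiated by [proof]."""
    return (f"{brand} is a {category} for {audience} "
            f"who need {use_case}, differentiated by {proof}.")


# Hypothetical example values for illustration only:
print(positioning_sentence(
    brand="Acme Comfort",
    category="residential HVAC service",
    audience="Fort Wayne homeowners",
    use_case="same-day emergency repair",
    proof="a 24-hour dispatch guarantee",
))
```

The point of templating it is consistency: the retriever-friendly sentence stays structurally identical across every page that carries it.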

2. The Comparison Post (head-to-head, with a real recommendation)

The comparison post — “Service A vs. Service B” or “Five alternatives to X” — is the format most directly aligned with the fan-out behavior the SEL study observed. The Pitì–Tannenbaum prompts that triggered fan-out included “Suggest the best accounting software for small business and explain why,” “What are the top AI document management systems for lawyers?”, and “What are the best products for skin care?” All three are evaluative.

What's worth noticing: the fan-out responses didn't reward content that hedged. They expanded into queries that asked for specific products, features, and reasons to choose. A “five accounting tools” post that lists each tool with a feature matrix and a clear “best for” call — FreshBooks is best for solo service providers; Xero is best for inventory-heavy retail — is a citation magnet. A post that simply lists five tools without commitment is not.

3. The Methodology Checklist (with named steps and gotchas)

The third archetype is the one most small businesses underuse: a how-we-do-it checklist, with named steps, time estimates, and the specific failure modes you've seen. It's not a generic “10 tips for X.” It's a structured walkthrough — “Our 6-step process for diagnosing a slow WordPress site” — that includes the tools, the order, and what goes wrong.

These posts get cited because they're information gain content. They contain something the rest of the internet doesn't have: your specific experience encoded as a procedure. Our information gain audits walkthrough goes deeper on why proprietary process content earns citations more reliably than syntheses of public knowledge.

The structural cue: H3 per step, a “common mistake” callout under each step, and a short tools-and-time block at the top.


What we don't know about ChatGPT's citation logic

This section is the one most AEO content skips. It shouldn't.

OpenAI does not publish ChatGPT's citation ranker. We do not have a documented list of weights, recency decay curves, source-trust scores, or query-class-to-format mappings. The Pitì–Tannenbaum study itself is explicit that its findings are “directional, not universal.” So here are three specific things we don't know, and why each one matters when you're deciding what to publish:

  • We don't know the weights. The model may favor commercial-intent content, but we have no public data on how much weight is placed on recency vs. domain authority vs. content structure vs. citation quality. A small business shouldn't assume that more commercial content equals more citations. The relationship is probably non-linear.
  • We don't know how recency decays. The Pitì–Tannenbaum data was collected at a single point in time. Whether a post from 2024 has the same citation probability as a post from 2026, controlling for everything else, is unknown publicly. Some model providers signal preference for fresh content, but ChatGPT's exact behavior here isn't documented.
  • We don't know how source trust is scored. ChatGPT cites Reddit, news outlets, marketing blogs, and product documentation. The thresholds it uses to decide a small-business blog is citation-worthy vs. citation-skipped aren't published. The Search Engine Land semantic programmatic SEO blueprint by Lisane Andrade argues that internal “semantic mesh” linking and entity consistency probably help — but “probably helps” is not the same as “this is the rule.”

The honest version of an AEO strategy says: we know commercial-intent, structured content with clear decision criteria gets cited at materially higher rates in observed studies. We do not know the model's exact ranker. We design content to maximize the observed pattern and accept that some of what works is unmeasurable from the outside.

How to structure a post to maximize citation probability

The structural moves below are derived from the SEL study and from our own work on Northeast Indiana client sites. None of them require longer posts; most of them require better-structured posts.

  • Short canonical answer at the top. What it looks like: a 40–60 word direct answer under the H1. Why it matters: it lets the model lift one paragraph cleanly into a fan-out result.
  • Question-format H2s. What it looks like: "How does X compare to Y?" instead of "X vs. Y benefits". Why it matters: it aligns the page to the way prompts are phrased.
  • Named entities in the body. What it looks like: specific tools, vendors, neighborhoods, certifications. Why it matters: brand and entity mentions are what the retrieval layer pattern-matches against.
  • Comparison tables with real differences. What it looks like: feature, price, and a fit-for paragraph. Why it matters: the fan-out study explicitly lists "feature-led category explainers" as a strong format.
  • Explicit "best for" recommendations. What it looks like: "Best for solo dentists with one location." Why it matters: pure listicles without recommendations are less likely to align with the model's evaluative fan-out paths.
  • One proprietary number per section. What it looks like: an average, a count, a duration from your own work. Why it matters: information gain differentiates your post from synthesis content.
  • FAQ block with 5–7 questions. What it looks like: a question H3 plus a 2–4 sentence answer. Why it matters: FAQPage schema is still the highest-ROI structured data block for AEO.
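To make the FAQ-block move concrete, here is a minimal sketch that emits a schema.org FAQPage JSON-LD block from plain question-and-answer pairs. The example questions and prices are invented for illustration; swap in your own page's content:

```python
import json


def faq_schema(qa_pairs):
    """Build a schema.org FAQPage JSON-LD structure from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }


# Illustrative Q&A pairs only; the figures are placeholders, not real pricing:
pairs = [
    ("How often should I service my HVAC system?",
     "Twice a year: a cooling check in spring and a heating check in fall."),
    ("What does an AC tune-up cost in Fort Wayne?",
     "Most tune-ups run $89 to $189 depending on system age and access."),
]

# Paste the output into a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema(pairs), indent=2))
```

Generating the block from the same source text as the visible FAQ keeps the markup and the on-page copy from drifting apart, which is a common cause of structured-data warnings.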

We cover the broader question of whether these structural moves should be done at scale in our piece on Google's Agentic Engine Optimization playbook. The short answer: structure helps machines parse, but quality content is still what gets pulled.

A Fort Wayne small-business example: rewriting one FAQ page

The fastest AEO win for most Northeast Indiana SMBs is not “publish 20 new posts.” It's “rewrite one existing FAQ page into the high-citation archetype.” Here's what that looks like, using a Fort Wayne HVAC service business as the example.

The original page is probably titled something like “HVAC Service FAQ.” It has eight Q&As covering basics: how often to service the system, what an AC tune-up costs, what to do when the furnace won't start. The questions are good, but the answers are 80–120 words of generic guidance that could be from any HVAC site in any city.

The high-citation rewrite keeps the same eight questions and changes three things. First, every answer opens with a 40–60 word direct response and then expands. Second, every applicable answer names a specific Fort Wayne or Northeast Indiana variable — what a 95% AFUE furnace costs to run during a typical DeKalb County February, what utility rebates Indiana Michigan Power offers in 2026, why same-day service in Allen County typically runs higher than scheduled visits. Third, the page closes with a “What to expect on a first service call” methodology section: six numbered steps, a typical time range, and the most common surprise we find on older homes.
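The 40–60 word rule in the first change is easy to lint before publishing. A minimal sketch, with the word-count bounds taken from the guideline above (adjust them if your editorial standard differs):

```python
def opening_word_count(answer: str) -> int:
    """Word count of the first paragraph of an FAQ answer."""
    first_paragraph = answer.strip().split("\n\n")[0]
    return len(first_paragraph.split())


def passes_canonical_answer_rule(answer: str, lo: int = 40, hi: int = 60) -> bool:
    """True if the answer opens with a direct response of lo to hi words."""
    return lo <= opening_word_count(answer) <= hi
```

Run it over every answer on the page before you publish; anything outside the band either needs trimming or needs its expansion pushed down into a second paragraph.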

That rewrite takes a Saturday afternoon. It costs nothing. It adds the entity mentions, the decision criteria, and the proprietary methodology that the fan-out study and the brand clarity research both suggest matter. And it puts every piece of content the page already has into a structure the AI retrievers can use. We've seen the same pattern work for dental practices rewriting an insurance-compatibility page, and for solo attorneys rewriting a free-consultation explainer.


Where most small-business blog programs go wrong

A few patterns we see when we audit existing client blogs against the citation-friendly archetypes:

Too much pure ToFU, not enough commercial bridges. A Fort Wayne plumbing client had 31 blog posts and almost none of them named a single product, vendor, or competitor. Every post was a generic “what to do when…” explainer. That's the exact content profile the Pitì–Tannenbaum data suggests doesn't get fanned out. The fix wasn't more posts. It was rewriting four of the existing posts to include named tradeoffs and recommendations.

Long posts without structure. A 4,200-word “ultimate guide” with no FAQ block, no comparison table, no canonical 40–60 word definition, and no entity mentions is harder for AI retrievers to extract than a 1,400-word post with all four of those elements. Length is not the win. Structure is.

One-off content without internal links. The SEL semantic programmatic SEO blueprint makes the point that “orphan pages” — pages without semantic internal links to and from related content — underperform. A blog program of 25 unconnected posts will get fewer citations than a program of 12 posts with a deliberate semantic mesh between them.

Hyperbole that signals untrustworthiness. Posts that use words like “revolutionary,” “game-changing,” or “guaranteed” tend to be syntheses of public content with a marketing veneer. The retrievers don't reward that, and neither do the humans who land on the page. The SEL piece on long-term SEO in AI search frames this well: brands building real authority do it through measured, fact-anchored writing, not adjectives.

No source attribution. A post that makes claims without linking to where the claims come from is a citation dead-end. AI retrievers are pattern-matching against sources; if your post is itself well-sourced, it becomes a more useful citation node.


How does this fit an actual small-business content calendar?

You don't need to publish 20 posts to test this. A realistic 90-day plan for a Fort Wayne small business with one part-time content owner looks like:

  • Month 1: Rewrite the homepage's “Services” copy into a canonical positioning paragraph (Scholz's formula). Rewrite one existing FAQ page into the high-citation archetype.
  • Month 2: Publish one comparison post (“X vs. Y” or “Five [category] options in Fort Wayne — which to pick”). Build the comparison table; include real “best for” recommendations.
  • Month 3: Publish one methodology checklist (“How we [do the thing] — our 6-step process”) with proprietary numbers from your own work. Add an FAQ block to your two highest-traffic existing pages.

That's roughly two published changes per month over 90 days: two net-new posts plus four structural rewrites, all built for citation rather than for clicks. It's enough to test whether the structural moves work for your category. Track citations using whatever tools you already have (we cover what to measure in our piece on content marketing ROI for small businesses, and you can monitor crawled and indexed pages through Google Search Console).
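The manual tracking loop can be kept honest with a tiny log script. A sketch, assuming you run each prompt in ChatGPT yourself and paste the answer in as plain text; the file path and URLs shown are placeholders:

```python
import csv
import datetime


def log_citation_check(answer_text, prompt, our_urls, path="citation_log.csv"):
    """Append one CSV row per tracked URL: cited in this answer or absent.

    Manual workflow: run the prompt in ChatGPT, paste the answer text here.
    Returns the list of our URLs that appeared in the answer.
    """
    today = datetime.date.today().isoformat()
    hits = [url for url in our_urls if url in answer_text]
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for url in our_urls:
            writer.writerow([today, prompt, url,
                             "cited" if url in hits else "absent"])
    return hits
```

Run it every 30–60 days against the same 10–20 prompts and the CSV becomes a longitudinal citation record you can chart, which is about as much rigor as the current tooling gap allows.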

In our experience, the structural rewrites move the needle faster than net-new content for SMBs with under 50 indexed pages. The reason is mechanical: every existing page already has internal links, history, and a small amount of accumulated trust signal. Restructuring uses what's already there.

Ready to put this into practice?

Our Answer Engine Optimization service is built around the kind of structural rewrites described in this post. We start with a citation audit of your existing pages, identify the two or three highest-leverage rewrites, and structure them against the archetypes above. We don't promise specific citation counts — no one can, since the underlying ranker isn't public. What we can show is the structural before/after, and the entity and schema work that gives the page its best shot at being pulled into AI answers.

Frequently Asked Questions

Which blog post format gets cited most often in ChatGPT?
Based on the April 2026 Search Engine Land fan-out study, commercial-intent formats — comparison pages, "which tool should I pick" guides, alternatives pages, and feature-led category explainers — triggered ChatGPT's multi-query expansion 78.3% of the time vs. 3.1% for pure informational content. So comparison and decision-support posts have the strongest observed citation signal, though the sample size in the study was small (90 prompts) and the authors note the findings are directional.
Should small businesses still write educational "what is X" blog posts?
Yes, but with commercial bridges. A definition post that names tradeoffs, use cases, and selection criteria — and ideally a clear "best for" recommendation — is more likely to align with how ChatGPT fans out queries than a pure category explainer that doesn't commit to any specific recommendation. The Pitì–Tannenbaum study calls this "ToFU with commercial bridges."
How long should a blog post be to get cited in AI search?
There's no published optimal length. Our companion piece on ChatGPT citations and ranking precision walks through the data suggesting that mid-length, tightly focused content gets cited more reliably than 5,000-word "ultimate guides" that try to cover everything. We typically target 1,500–3,000 words for citation-focused posts, but structure and clear decision criteria matter more than word count.
What schema markup helps with ChatGPT citations?
ChatGPT doesn't publish its citation logic, so no schema type is "the answer." That said, FAQPage schema, Article schema, and Speakable schema are the three structured data types most aligned with how AI retrievers parse content. Adding a well-structured FAQ block with 5–7 questions is the highest-ROI structured data move for most small business pages.
How can a Fort Wayne small business tell if its blog posts are being cited in ChatGPT?
There's no first-party ChatGPT analytics for citations yet. The practical workaround is manual: build a list of 10–20 likely customer prompts for your Fort Wayne or Northeast Indiana service category, run them through ChatGPT every 30–60 days, and record which of your URLs appear as citations. Tools like Profound, Manus AI, and a handful of others have started tracking this, but the visibility is partial.
Is it worth rewriting old blog posts for AEO, or should I focus on new content?
For small businesses with under 50 indexed pages, structural rewrites of existing posts usually move the needle faster than new content. The existing pages already have internal links, indexation history, and accumulated trust signal. Adding canonical answers, named entities, comparison tables, and FAQ blocks to a small set of high-value existing pages is the highest-leverage AEO move we see in 90-day engagements.
Do AI search engines and traditional Google rank the same content?
No — and this is one of the more important findings. Search Engine Land's long-term SEO piece cites Moz research that "only 12% of AI Mode citations mirror the URLs in organic results." Strong Google rankings help, but they don't guarantee AI citations. Optimizing for AEO requires its own structural moves on top of solid traditional SEO.

Sources & Further Reading