
Key Takeaways
- ChatGPT Search uses fan-out queries: one user question becomes multiple sub-queries that the internal web.run tool fires in parallel before the model selects citations from the union.
- Citation concentration is real. Search Engine Land's analysis reports the average number of unique domains cited per response dropped from 19 to 15 after the GPT-5.3 Instant rollout — about a 20% drop that concentrates citations on fewer authoritative sources.
- GPT-5.4 chains 5 to more than 10 rounds of search per response and refines queries based on previous results; GPT-5.3 Instant typically runs 2–3 rounds. Citation patterns differ noticeably across versions.
- Two visibility layers stack: “parametric visibility” (authority baked into training data, like E-E-A-T for LLMs) and “dynamic visibility” (live retrieval results). Both matter and require different moves.
- A small business can approximate the fan-out test in 15 minutes by running variant queries in a logged-out ChatGPT Search session and logging which sub-question each cited URL answers.
What ChatGPT Search is actually doing when you ask a question
ChatGPT Search doesn't actually search the way most small business owners think it does. When you ask it “best HVAC repair in Fort Wayne,” it does not run one query and return one answer. Inside the model, according to Search Engine Land's 2026-05-14 deep-dive into the mechanism, the system fans that single question into anywhere from two to more than ten sub-queries — about reviews, emergency hours, service area, pricing, brand reputation — runs each one through a built-in browsing tool called web.run, and then composes its citations from the union of returned pages. The model picks which sites to cite based on what the sub-queries returned, not on a single SERP.
That mechanism is what decides whether a small Fort Wayne service business shows up in a ChatGPT answer. If your page only answers one of the six sub-queries, you're competing against the broader web for one slot. If your page answers four of them, the model is much more likely to surface you. That insight changes how a small business should think about AI visibility — and it's not in OpenAI's official documentation in any clear form. The Search Engine Land piece reverse-engineered it by inspecting the system's tool calls and tracking citation patterns across model versions.
This piece walks through what fan-out queries are, how web.run selects sources, what the data says is correlated with being cited, and a 15-minute manual test any small business can run on its own. We'll treat the mechanism honestly: it's one publisher's investigation of OpenAI's behavior, not OpenAI's official spec, and ChatGPT's exact decision logic isn't fully documented. The patterns are real, but the implementation details shift with each model release.
What is a fan-out query, and why does ChatGPT use it instead of running one search?
A fan-out query is the AI-search equivalent of breaking a homework question into smaller questions before answering. When a user asks ChatGPT “best HVAC repair Fort Wayne,” the model — per Search Engine Land's reverse-engineering of ChatGPT's tool calls — doesn't fire the literal string into a search engine. It generates several adjacent sub-queries the user implicitly cares about: reviews and ratings, emergency-call availability, pricing transparency, service-area coverage, brand recognition. Each sub-query goes through web.run, which is OpenAI's internal browsing tool. The results come back as a union of pages, and the model selects citations from across that union to compose the answer. The same upstream-retrieval issue maps onto Search Engine Land's 10-gate AI search pipeline, which identifies the discrete points where a page can drop out of an AI answer — fan-out is what happens at the very top of that pipeline.
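To make the mechanism concrete, here is a minimal conceptual sketch of fan-out retrieval in Python. This is not OpenAI's implementation: every name in it (generate_subqueries, web_search, fan_out) is a hypothetical stand-in for behavior Search Engine Land observed from the outside, and the real system generates sub-queries with the model itself rather than string templates.

```python
# Conceptual sketch of fan-out retrieval, based on Search Engine Land's
# reconstruction. All function names are hypothetical illustrations,
# not OpenAI internals.
from concurrent.futures import ThreadPoolExecutor

def generate_subqueries(question: str) -> list[str]:
    """Stand-in for the model step that expands one question into
    adjacent sub-queries (reviews, hours, pricing, coverage)."""
    return [
        f"{question} reviews and ratings",
        f"{question} emergency hours",
        f"{question} pricing",
        f"{question} service area",
    ]

def web_search(subquery: str) -> list[str]:
    """Stand-in for one web.run search round; returns candidate URLs."""
    return []  # a real system would call a search backend here

def fan_out(question: str) -> set[str]:
    subqueries = generate_subqueries(question)
    # Sub-queries run in parallel; citations are later selected from
    # the union of everything that came back, not from a single SERP.
    with ThreadPoolExecutor() as pool:
        results = pool.map(web_search, subqueries)
        return {url for urls in results for url in urls}

candidate_pool = fan_out("best HVAC repair in Fort Wayne")
```

The point of the sketch is the shape: one question in, a union of candidate pages out, and citation selection happening over that union. A page that answers four of the sub-queries can show up in four slices of the pool.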
The Search Engine Land piece documents that web.run's instruction format changed between model versions. Before GPT-5.3, the tool sent “compact text commands separated by pipes” (the example given: fast|query|recency). After 5.3, it sends “structured JSON objects with typed parameters.” The tool now supports 12 operations — search_query, open, find, click, and specialized widgets for sports, finance, and weather — where the older version supported four. That expansion is what enables multi-round fan-out.
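For illustration only, the before-and-after instruction formats might look like the sketch below. The pipe-delimited string is the literal example the article quotes; the JSON field names are our guess at what "structured JSON objects with typed parameters" could contain, since OpenAI has not published the schema.

```python
import json

# Pre-5.3 format, as quoted by Search Engine Land: compact text
# commands separated by pipes.
legacy_command = "fast|query|recency"

# Post-5.3 format: structured JSON with typed parameters. The field
# names below are assumptions; only the operation names (search_query,
# open, find, click, plus the specialized widgets) are reported.
modern_command = json.dumps({
    "op": "search_query",
    "query": "HVAC repair Fort Wayne reviews",
    "recency_days": 365,
    "max_results": 10,
})
```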
The reason this matters for small business AEO is the implication for content strategy. If you've been writing for AI search the way you write for Google — one page, one topic, one keyword — you're optimizing for a single sub-query. The fan-out mechanism rewards pages that credibly answer multiple adjacent sub-questions in one place, because that page is more likely to get selected from the union for multiple citations within the same answer. Our earlier piece on ChatGPT citations favoring ranking and precision over length covered the related precision finding from the AirOps study; this article adds the upstream piece — what queries the model is actually running on the way to selecting citations.
This is also where the Search Engine Land article calls out a real concentration effect. Average unique domains cited per response dropped from 19 to 15 after the GPT-5.3 Instant rollout, with URLs per response also dropping. They named it the “Bigfoot Effect” after a 2012 Google update where a few domains dominated entire result pages. Fewer winners get more share. That's both an opportunity (if you're one of them) and a structural risk (if you're not).

How does web.run actually pick which pages to cite?
This is the part that's not fully documented anywhere, including in OpenAI's ChatGPT browsing FAQ. Search Engine Land's analysis pieces together the mechanism by observing tool calls and citation patterns rather than reading a public spec, so treat the model below as one publisher's reconstruction — useful but not authoritative.
Per the article's framing, citation selection appears to involve three layers stacked on top of each other.
Parametric visibility. The model formulates web queries by targeting sources it already knows from training. Sites that were prominent in OpenAI's training data — major publishers, Wikipedia entries that clear notability criteria for organizations and companies, well-known brands — get baked into the model as “candidate domains” the model is likely to search for. The article calls this “parametric visibility” and treats it as the LLM equivalent of E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness). The blunt implication for a small business: if your brand isn't represented in Wikipedia, in major industry publications, or in cited research, the model may never search for you in the first place. Search Engine Land puts it as: “brands absent from parametric memory won't even be considered as search candidates.”
Dynamic visibility. Once the model has fanned out into sub-queries, it goes to the live web. Search Engine Land's analysis says ChatGPT relies on third-party scraping APIs for initial search results, then sends its own ChatGPT-User crawler (not the OAI-SearchBot crawler used for training-data indexing) to retrieve actual page content from the URLs it wants to cite. OpenAI's official documentation of OAI-SearchBot and ChatGPT-User confirms the two-crawler model but does not document the fan-out behavior or how web.run chooses which URLs to fetch in real time. That retrieval happens in real time and depends on whether your site is reachable, fast, and renders meaningful content without JavaScript. Our LLMs.txt and AI discoverability post covers the practical crawler-allowing setup; for the technical layer specifically, what matters is that the retrieval bot needs to find content fast or it skips you. (A quick way to check your own robots.txt against both crawlers is sketched after these three layers.)
System constraints inside the prompt. The Search Engine Land reverse-engineering reports that the system prompt includes specific rules: Reddit gets exempted from copyright-related word limits, a granular banned-products list exists, a 1-10 “verbosity score” tunes response length dynamically, and advertising policies vary by subscription tier. None of these are publicly documented by OpenAI. They're observed behaviors, which means they could change in any model release. The directional takeaway for AEO is that the model has guardrails layered on top of pure retrieval, and those guardrails can suppress or amplify specific source types.
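Staying on the dynamic-visibility layer for a moment: one concrete thing you can verify today is that your robots.txt does not block either crawler. Python's standard-library robot parser handles the check. The crawler tokens below match OpenAI's published names; the domain and page path are placeholders to swap for your own.

```python
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder: your own domain
PAGE = f"{SITE}/services/emergency-hvac-repair"  # a page you want cited

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()

# ChatGPT-User fetches pages in real time while composing answers;
# OAI-SearchBot is the background crawler that builds the search index.
for bot in ("ChatGPT-User", "OAI-SearchBot"):
    allowed = parser.can_fetch(bot, PAGE)
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'} for {PAGE}")
```

This only tests crawl permission. Rendering speed and JavaScript dependence, which the real-time fetch also cares about, need separate checks.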
For small businesses, the actionable layer is the middle one — dynamic visibility. You can influence parametric visibility over years through PR, Wikipedia citations, and industry recognition, but you can't fix it in a sprint. You can fix dynamic visibility this quarter with structured data markup, faster rendering against the Core Web Vitals targets, no-JavaScript fallbacks, and content that answers multiple sub-questions on a single page. The same investments help with Google's agentic engine optimization patterns we covered, so the work compounds.
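On the structured-data item specifically, the usual starting point is a JSON-LD LocalBusiness block. The sketch below generates a minimal one in Python. The business details are placeholders, and while HVACBusiness, areaServed, and openingHours are standard schema.org vocabulary, nothing public documents how heavily AI retrieval weighs any given field, so treat this as table stakes rather than a lever.

```python
import json

# Minimal LocalBusiness JSON-LD per schema.org and Google's structured
# data docs. Every business detail below is a placeholder.
local_business = {
    "@context": "https://schema.org",
    "@type": "HVACBusiness",  # a LocalBusiness subtype
    "name": "Example Heating & Cooling",
    "url": "https://www.example.com",
    "telephone": "+1-260-555-0100",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Fort Wayne",
        "addressRegion": "IN",
        "addressCountry": "US",
    },
    # Naming neighborhoods feeds the geographic sub-queries directly.
    "areaServed": ["Fort Wayne", "Aboite", "Waynedale", "Allen County"],
    "openingHours": "Mo-Su 00:00-23:59",  # signals 24/7 emergency work
}

# Embed the output in a <script type="application/ld+json"> tag.
print(json.dumps(local_business, indent=2))
```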

How do citation patterns differ across GPT-5.3 Instant, 5.4 Thinking, and 5.4 Extended?
This is one of the more uncomfortable findings in the Search Engine Land analysis: “the same prompt produces different citations across GPT-5.2, 5.3, and 5.4 variants.” A related April 2026 SEL study on how ChatGPT citations reward ranking and precision over length found that selection bias toward concise, high-precision passages is consistent across versions even as the volume and concentration of citations shift. Citation patterns are model-specific, and most small businesses don't know which model their users are running.
The article reports specific behavioral differences:
| Model | Search rounds per response | Citation pattern | Notes |
|---|---|---|---|
| GPT-5.3 Instant | 2–3 | Fewer, more concentrated | The “Bigfoot Effect” version — cited domains dropped from ~19 to ~15 per response |
| GPT-5.4 Thinking | 5 to >10 | More iterative, refines queries based on prior results | Chains rounds, surfaces longer-tail sources |
| GPT-5.4 Extended | More extensive chaining | Deepest retrieval | Variant-specific behaviors documented |
Search Engine Land also describes a fan-out type that emerged exclusively for product queries: browse_rewritten_queries. When users ask about products, ChatGPT first runs a single query rewrite to build a candidate list, then launches individual shopping fan-outs for each product. For e-commerce small businesses, that's the path that determines whether your product shows up in a ChatGPT shopping answer — and it's a different path than the editorial one. Our piece on ChatGPT shopping and AI e-commerce discovery covers the product-feed implications of this in more depth.
The practical implication: test your AEO performance on multiple model variants, not just one. The Search Engine Land author calls this an LLM version of “the Google Dance” — the same query gives different answers from different model versions, especially around major knowledge-cutoff updates. If you only test on the default model your account is set to, you're seeing one slice of a wider distribution. Our topical authority isn't enough for AI search post covers the broader strategy implication: distinctiveness matters more than coverage, because more rounds of fan-out increase the chance the model surfaces an idiosyncratic, sharply-focused page over a generic comprehensive one.
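If you have OpenAI API access, you can approximate multi-variant testing programmatically instead of switching models by hand. The sketch below assumes the Responses API's web-search tool and the documented url_citation annotation shape; the model names are placeholders for whatever variants your account exposes, and the consumer ChatGPT Search product may not behave identically to the API path.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

QUERY = "best HVAC repair in Fort Wayne"
MODELS = ["gpt-4o", "gpt-4o-mini"]  # placeholders: use your variants

for model in MODELS:
    resp = client.responses.create(
        model=model,
        tools=[{"type": "web_search_preview"}],
        input=QUERY,
    )
    urls = set()
    # Walk the output for url_citation annotations. This assumes the
    # documented Responses API shape and may need adjusting.
    for item in resp.output:
        for block in getattr(item, "content", None) or []:
            for ann in getattr(block, "annotations", None) or []:
                if getattr(ann, "type", "") == "url_citation":
                    urls.add(ann.url)
    print(f"{model}: {len(urls)} unique cited URLs")
    for url in sorted(urls):
        print(f"  {url}")
```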

How do you run a 15-minute fan-out test on your own ChatGPT Search session?
This is the part you can do today, sitting at your desk in Auburn or Fort Wayne, with no tools beyond a logged-out ChatGPT account. The goal is to approximate what sub-queries web.run is firing for a question your customers might ask, and to log which of your pages — if any — get cited as the answer.
Here's the procedure we use with clients:
Step 1: Pick a target query (2 minutes). Choose a question a real customer would type. For a Fort Wayne HVAC business, that might be “best HVAC repair in Fort Wayne” or “who fixes furnaces in Allen County on weekends.” Avoid overly generic queries; AEO performance is sharper on specific questions.
Step 2: Open a logged-out ChatGPT session (2 minutes). Log out, open an incognito window, and go to ChatGPT. Logged-in personalization can bias citations toward sites you've visited before; the logged-out session approximates what a brand-new prospect would see.
Step 3: Run the query, then five variants (5 minutes). Run the literal query. Then run five variants — change the phrasing, add a qualifier, ask the same question in a different tone. Examples for the HVAC case: “I need an HVAC repair near Fort Wayne, who should I call,” “best-rated HVAC repair Fort Wayne IN,” “emergency furnace fix Fort Wayne weekend,” “top HVAC contractors Allen County Indiana,” “who does same-day AC repair in Fort Wayne.” The variants probe the fan-out: similar intent, different surface phrasing.
Step 4: Log the cited URLs and which sub-question each answered (5 minutes). For each query, write down the URLs ChatGPT cited and — this is the key step — note which sub-question that URL was answering. Reviews? Emergency hours? Pricing? Brand reputation? Service area? If the same URL gets cited across multiple variants for different sub-questions, that's a high-value page. If a competitor's URL keeps showing up for sub-questions your site doesn't answer, you have a content gap. A plain spreadsheet works for the log; there's a simple script sketch after these steps if you want a structured file.
Step 5: Decide one content move (1 minute). Pick the single sub-question your competitors are winning that your site doesn't credibly answer. That's the next content investment. Don't try to fix everything — fan-out visibility compounds slowly, and a single well-targeted page tends to move the needle more than a sprawling rewrite. Our information gain audits for AI citations post covers how to prioritize among multiple sub-question gaps when the audit surfaces several.
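If you want a structured log rather than a notebook page, here is one way to record the audit. Nothing in it automates ChatGPT: you still run the variants and read the citations yourself. The CSV columns and the example entries are a suggested format, not a required tool.

```python
import csv
from collections import Counter
from datetime import date

# Manual fan-out audit log. You run each variant in a logged-out
# ChatGPT session and record what it cited; these rows are examples.
LOG_FILE = f"fanout-audit-{date.today()}.csv"

entries = [
    # (query variant, cited URL, sub-question the URL answered)
    ("best HVAC repair in Fort Wayne",
     "https://competitor.example/emergency", "emergency hours"),
    ("emergency furnace fix Fort Wayne weekend",
     "https://competitor.example/emergency", "emergency hours"),
    ("best-rated HVAC repair Fort Wayne IN",
     "https://yoursite.example/reviews", "reviews"),
]

with open(LOG_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["variant", "cited_url", "sub_question"])
    writer.writerows(entries)

# A URL cited across multiple variants is a high-value page; a
# competitor URL recurring for a sub-question you don't cover is
# a content gap.
for url, count in Counter(url for _, url, _ in entries).most_common():
    print(f"{count}x {url}")
```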
The honest caveats. This test is a directional read, not a precise audit. Logged-out variant testing approximates fan-out behavior; it doesn't replicate it exactly. The model version you're testing against can change without notice, and the citations will shift with it. Run the test quarterly, not weekly — the underlying retrieval landscape doesn't change fast enough to justify more frequency, and weekly noise will exhaust you.

How should a Fort Wayne small business actually act on fan-out visibility data?
The Fort Wayne version of this isn't different in kind from the national version — the test method is the same — but the example queries and the competitive set are local. Run the 15-minute test with a Fort Wayne-flavored query like “emergency furnace repair Fort Wayne” or “DeKalb County family dentist accepting new patients.” The fan-out will generate sub-queries the model thinks a Fort Wayne searcher cares about: insurance acceptance for dental, weekend availability for HVAC, payment plans for legal, before-and-after photos for home services.
What we see, working with small businesses in Allen County and DeKalb County, is that the fan-out test surfaces three recurring content gaps:
- No emergency / after-hours page. Service businesses often have one homepage and one services page. Neither directly answers “do you do emergencies?” The fan-out will pull from the homepage for a brand sub-query and from a competitor's emergency page for the emergency sub-query — citation goes to the competitor.
- No service-area page that names neighborhoods. A page titled “Fort Wayne HVAC” loses the fan-out's geographic sub-query against a competitor with a page titled “HVAC repair in Aboite, Waynedale, and Southwest Fort Wayne.” The geographic specificity is parametric — the model recognizes the neighborhood names from training data.
- No pricing transparency page. “How much does X cost in Fort Wayne” is one of the highest-frequency fan-out sub-queries for service businesses. Most service businesses don't publish pricing. The competitor who publishes even a starting-price range wins the citation.
We don't recommend manufacturing answers to all three at once. Pick the sub-query where the gap is biggest — usually the one where every competitor has content and you have none — and ship the page. The AEO services we run at Button Block use this fan-out audit as the first deliverable for new clients, because it produces a ranked list of content investments grounded in what AI search is actually fanning out to find. The answer engine optimization guide is the deeper pillar piece if you want the broader framework before running the audit.

Want a second pair of eyes on your fan-out audit?
Button Block runs this fan-out audit as the first deliverable for new Answer Engine Optimization engagements. We log the sub-queries your competitors are winning, identify the content gaps that show up across model versions, and hand back a ranked list of content moves. No software to install, no long contract — just the audit and the recommendations.
Frequently Asked Questions
- What is a ChatGPT fan-out query?
- A fan-out query is when ChatGPT takes one user question and generates several adjacent sub-queries before searching, then composes its answer from the union of returned pages. Search Engine Land's investigation reports that GPT-5.4 chains 5 to more than 10 rounds of search per response, while GPT-5.3 Instant typically runs 2–3 rounds. The model picks citations from across the sub-queries, not from a single search.
- How is web.run different from OAI-SearchBot?
- web.run is the internal tool ChatGPT uses to browse the live web during a user query. OAI-SearchBot is OpenAI's crawler that indexes the web for training and ranking purposes. According to Search Engine Land's analysis, web.run triggers the ChatGPT-User crawler to fetch page content in real time, while OAI-SearchBot is the longer-running background crawler. Allowing both in your robots.txt is the safer default for AEO.
- Does my page need to be in Wikipedia to be cited in ChatGPT?
- No, but parametric visibility — having your brand recognized inside the model's training data — significantly increases the chances the model will search for you. Search Engine Land's piece describes parametric visibility as the LLM equivalent of E-E-A-T. A Wikipedia entry is one of the strongest parametric signals, but industry-publication coverage and well-cited research also count. For small businesses without Wikipedia notability, the alternative path is sustained, citable coverage in industry publications.
- How often should I re-run the 15-minute fan-out test?
- Quarterly is usually enough. The underlying retrieval landscape and ChatGPT model versions do not change fast enough to justify weekly testing, and the noise from one logged-out session to another will exhaust you. Run a fresh test after any major model release or after you have shipped a significant content change you want to measure.
- Why do different ChatGPT model versions give different citations for the same query?
- Each model version has different search-round behavior, different internal system prompts, and different training-data cutoffs. Search Engine Land reports GPT-5.3 Instant runs fewer rounds and concentrates citations more, while GPT-5.4 variants chain longer and surface more iterative sources. Same prompt, different model, different citations — which is why testing on multiple versions matters for any business that depends on AI visibility.
- What's the single biggest mistake small businesses make with AEO content right now?
- Trying to answer one question per page when fan-out queries reward pages that credibly answer several adjacent questions in one place. A service-business page that covers what you do, where, when, how much, and what reviews say will outperform five separate pages that each cover one of those things — because the fan-out can cite the same page across multiple sub-queries. The pattern is the opposite of the traditional “one keyword, one page” rule.
- How should a Fort Wayne or Northeast Indiana small business apply the fan-out test?
- Run the same 15-minute audit, but anchor the test queries to your actual local service area — "emergency plumber Fort Wayne," "family dentist Allen County accepting new patients," or "DeKalb County HVAC repair weekend." The fan-out will generate sub-queries about insurance acceptance, neighborhood coverage, weekend availability, and pricing transparency. The most common gap we see in Northeast Indiana service businesses is the absence of a neighborhood-named service-area page.
Sources & Further Reading
- Search Engine Land: searchengineland.com/inside-chatgpt-search-web-run-fan-out-queries-ai-visibility-477339 — Inside ChatGPT Search: web.run, fan-out queries, and the path to AI visibility (2026-05-14).
- Search Engine Land: searchengineland.com/chatgpt-citations-ranking-precision-length-study-474538 — ChatGPT citations favor ranking and precision over length: Study (2026-04-16).
- Search Engine Land: searchengineland.com/10-gate-ai-search-pipeline-find-where-content-fails-476488 — The 10-gate AI search pipeline: Find where your content fails (2026-05-05).
- OpenAI Help Center: help.openai.com/en/articles/8077698-chatgpt-browsing-faq — ChatGPT browsing FAQ.
- OpenAI Platform Docs: platform.openai.com/docs/bots — Overview of OAI-SearchBot and ChatGPT-User crawlers.
- Google Search Central: developers.google.com/search/docs/appearance/structured-data/intro-structured-data — Introduction to structured data markup.
- Google Search Central: developers.google.com/search/docs/appearance/core-web-vitals — Core Web Vitals overview.
- Wikipedia: en.wikipedia.org/wiki/Wikipedia:Notability_(organizations_and_companies) — Notability guideline for organizations and companies.
